Python – Deep Focus (https://fazals.ddns.net/)

Latest Spectrum Analyser using Python | Part-2
https://fazals.ddns.net/spectrum-analyser-part-2/ — Mon, 11 May 2020


In part 1 we built the foundation of the spectrum analyser: we used the PyAudio library to open the microphone and bring raw binary data into the code, converted that binary data into 16-bit integers, and displayed them on a plot using Matplotlib's pyplot.
If you have not built part 1 yet, click here and build it first, as this part is a continuation of it.

The algorithm to calculate the Spectrum

We will be using the FFT, or Fast Fourier Transform, to calculate the spectrum for the spectrum analyser. The FFT is an efficient algorithm that computes the Discrete Fourier Transform (DFT) of a sequence of samples, which represents the signal in the frequency domain.
There are many different FFT algorithms based on a wide range of published theories, from simple complex-number arithmetic to group theory and number theory. The one we will be using is NumPy's fft implementation.
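To get a feel for what the FFT gives us before wiring it to the microphone, here is a minimal standalone sketch: a synthetic 1 kHz sine wave stands in for one frame of microphone data, and the FFT peak lands in the bin corresponding to that frequency.

```python
import numpy as np

RATE = 44100   # sampling rate in Hz, matching the analyser
CHUNK = 1024   # samples per frame

# Synthesize one frame of a pure 1 kHz sine in place of microphone data.
t = np.arange(CHUNK) / RATE
signal = np.sin(2 * np.pi * 1000 * t)

# The DFT bins are RATE / CHUNK ≈ 43 Hz apart, so a 1 kHz tone should
# peak near bin round(1000 * CHUNK / RATE) = 23.
spectrum = np.abs(np.fft.fft(signal))
peak_bin = int(np.argmax(spectrum[:CHUNK // 2]))
print(peak_bin)  # → 23
```

Multiplying the peak bin index by RATE / CHUNK recovers the detected frequency, about 991 Hz here because the bins are ~43 Hz wide.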

Which package 📦 to import for FFT in Python?

For the part of the code that calculates the FFT of a sequence, many libraries offer simple functions to do so. Since we already imported NumPy, and it already provides the functions to calculate the FFT, we will not be importing any new library.

Also, note that other libraries are available that can compute the FFT of a signal.

The code for audio spectrum analyser / audio visualizer

Since we already wrote the code in the first part, we only need to make a few changes and additions and then we will be good to go. If you don't have the code, get it from here.

Now, in the part where we initialise the plot objects, that is to say the figure and the axes, make the following changes.

Instead of

fig, ax = plt.subplots()

make it

fig, (ax,ax1) = plt.subplots(2)

and add the following lines to the code. The first line creates a one-dimensional array of CHUNK evenly spaced values running from 0 up to RATE (44100), giving one X position per FFT bin. The next line draws the frequency plot with a logarithmic X axis, since frequency is conventionally shown on semilog plots.

x_fft = np.linspace(0, RATE, CHUNK)
line_fft, = ax1.semilogx(x_fft, np.random.rand(CHUNK), 'b')
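To see what that frequency axis actually contains, a quick standalone check with the same constants:

```python
import numpy as np

RATE = 44100
CHUNK = 1024

x_fft = np.linspace(0, RATE, CHUNK)

# CHUNK evenly spaced frequency values from 0 Hz up to RATE Hz,
# one X position for each FFT bin we are about to plot.
print(len(x_fft), x_fft[0], x_fft[-1])  # → 1024 0.0 44100.0
```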

Since the scaled output of the FFT computation will range from 0 to 1, we change the Y limits of the frequency plot. Also, the FFT of a real signal produces a mirror image of the spectrum above half the sampling rate, so the part beyond 22050 Hz carries no new information. Hence we change the X limit as well.

ax1.set_xlim(20,RATE/2)
ax1.set_ylim(0,1)
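That mirroring above half the sampling rate is easy to verify for any real-valued input; a quick sketch:

```python
import numpy as np

CHUNK = 1024
rng = np.random.default_rng(0)
real_signal = rng.standard_normal(CHUNK)   # any real-valued frame will do

mag = np.abs(np.fft.fft(real_signal))

# For real input, the magnitude at bin k above the Nyquist frequency mirrors
# bin CHUNK - k, so only bins 0 .. CHUNK // 2 carry unique information.
print(np.allclose(mag[1:CHUNK // 2], mag[:CHUNK // 2:-1]))  # → True
```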

Then, in the infinite loop, add the following line to compute the FFT and plot it.
Since the computed FFT contains both real and imaginary parts, we take the absolute value of the returned spectrum, multiply it by 2, and then divide it by 33000 times CHUNK to produce a plot with Y values ranging from 0 to 1.

line_fft.set_ydata(np.abs(np.fft.fft(dataInt))*2/(33000*CHUNK))
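To sanity-check that scaling, we can feed in a full-scale 16-bit sine, a stand-in for the loudest possible microphone input, and confirm its scaled peak lands just under 1. A bin-centred test frequency is used here so spectral leakage doesn't lower the peak:

```python
import numpy as np

CHUNK = 1024
RATE = 44100

freq = 23 * RATE / CHUNK             # ≈ 991 Hz, exactly on FFT bin 23
t = np.arange(CHUNK) / RATE
data_int = (32767 * np.sin(2 * np.pi * freq * t)).astype(np.int16)

# Same scaling as in the loop: |FFT| * 2 / (33000 * CHUNK).
# A sine of amplitude A peaks at A * CHUNK / 2 in the raw FFT, so the
# scaled peak is roughly 32767 / 33000 ≈ 0.99.
scaled = np.abs(np.fft.fft(data_int)) * 2 / (33000 * CHUNK)
print(round(float(scaled.max()), 2))  # → 0.99
```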

The Complete code

import numpy as np #importing Numpy with an alias np
import pyaudio as pa 
import struct 
import matplotlib.pyplot as plt 

CHUNK = 1024 * 1
FORMAT = pa.paInt16
CHANNELS = 1
RATE = 44100 # in Hz

p = pa.PyAudio()

stream = p.open(
    format = FORMAT,
    channels = CHANNELS,
    rate = RATE,
    input=True,
    output=True,
    frames_per_buffer=CHUNK
)



fig, (ax,ax1) = plt.subplots(2)
x_fft = np.linspace(0, RATE, CHUNK)
x = np.arange(0,2*CHUNK,2)
line, = ax.plot(x, np.random.rand(CHUNK),'r')
line_fft, = ax1.semilogx(x_fft, np.random.rand(CHUNK), 'b')
ax.set_ylim(-32770,32770)
ax.set_xlim(0, 2 * CHUNK)
ax1.set_xlim(20,RATE/2)
ax1.set_ylim(0,1)
fig.show()

while True:
    data = stream.read(CHUNK)
    dataInt = struct.unpack(str(CHUNK) + 'h', data)
    line.set_ydata(dataInt)
    line_fft.set_ydata(np.abs(np.fft.fft(dataInt))*2/(33000*CHUNK))
    fig.canvas.draw()
    fig.canvas.flush_events()

Then, when you run the program, you should be able to see both the time-domain and the frequency-domain representation of the signal from the microphone.

Spectrum Analyser

You should be able to get plots similar to the one given above. Use this website to generate pure sine/square/triangular waves at any frequency you want, to test out this project.

Liked the project? Drop your reviews in the comment section, and share it among your fellow mates. 👍🏻
Check out my other posts.
Follow me on Social Media 👇👇👇

Realtime Spectrum Analyser using Python | Part-1
https://fazals.ddns.net/spectrum-analyser-part-1/ — Sat, 09 May 2020


A spectrum analyser measures the amplitude or magnitude of an input signal with respect to frequency. It is mainly used to analyse the amplitude of signals at different frequencies.

Python is a high-level, interpreted, multipurpose programming language. If you have ever wanted to build your own audio spectrum analyser that works out of the box with your microphone on Windows, Linux, or Mac, you are at the right place.

A realtime spectrum analyser has no noticeable processing lag: it samples the incoming signal in the time domain and converts the sampled information into the frequency domain on the fly.

The algorithm that we will be using to convert the time-domain signal to the frequency domain is the FFT, short for Fast Fourier Transform. NumPy's fft module is used to generate the FFT for the Python spectrum analyser / audio visualizer.

Libraries Required for python spectrum analyser / audio visualizer

For the first part of the video, where we sample the audio from the microphone and display it in the time domain, we will need the following Python libraries.

  1. Numpy
    NumPy is a Python library that adds support for large, multi-dimensional arrays and matrices. It provides high-level mathematical functions to operate on the data.
  2. PyAudio
    PyAudio is another Python Library that can be easily used to play and record audio with Python on a variety of platforms.
  3. Struct
    Struct is a Python module that performs conversion between Python values and C structures. It basically converts binary data to and from its equivalent Python data types.
  4. MatPlotLib
    Matplotlib is a library for the Python and its numerical mathematics extension NumPy. It provides an object-oriented API for embedding plots into applications using general-purpose GUI toolkits like Tkinter, wxPython, Qt, or GTK+.
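The conversion step that struct performs in this project can be seen in isolation. Here, two hand-written 16-bit little-endian samples stand in for a PyAudio buffer:

```python
import struct

# Two 16-bit samples as raw bytes, little-endian, like PyAudio's paInt16
# output on a typical desktop machine.
raw = b'\x01\x00\xff\x7f'            # encodes the values 1 and 32767

samples = struct.unpack('<2h', raw)  # '<' = little-endian, '2h' = two int16
print(samples)  # → (1, 32767)
```

The analyser's loop does exactly this, just with CHUNK samples at a time instead of two.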

Installing Libraries

Installing the required libraries is very simple.
To install Numpy, run the following command in CLI

pip install numpy

Installing MatPlotLib and PyAudio is done in the same way.

pip install PyAudio
pip install matplotlib

If you face any problem installing PyAudio, download the .whl file based on your Python version and system architecture from here, and then run the command

pip install /path_to_whl_file/filename.whl

The Struct module comes pre-installed with Python.

The code

As always in Python we start off with importing the required libraries.

Skip the explanation and jump to the complete code.

import numpy as np
import pyaudio as pa 
import struct 
import matplotlib.pyplot as plt 

Then, we initialize some variables.
CHUNK is the number of samples that will be processed and displayed at any instance of time.
FORMAT is the data type that the PyAudio library will output.
CHANNELS is the number of channels our microphone has.
RATE is the sampling rate. We will choose the most common one, that is 44100 Hz or 44.1 kHz.

CHUNK = 1024 * 2
FORMAT = pa.paInt16
CHANNELS = 1
RATE = 44100 # in Hz
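These two values together fix how much audio each plot refresh covers; with the defaults above, one CHUNK is roughly 46 ms of sound:

```python
CHUNK = 1024 * 2
RATE = 44100  # samples per second

frame_ms = CHUNK / RATE * 1000   # duration of one frame in milliseconds
print(round(frame_ms, 1))  # → 46.4
```

A larger CHUNK gives finer frequency resolution later on but a slower-updating plot, so this value is a trade-off you can tune.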

Next we create a PyAudio object from class PyAudio and store it in a variable p.

p = pa.PyAudio()

Then, we use the open method on the object p and pass the variables we initialized as parameters.

stream = p.open(
    format = FORMAT,
    channels = CHANNELS,
    rate = RATE,
    input=True,
    output=True,
    frames_per_buffer=CHUNK
)

Now, we will initialise the plot with random values and set the limits for each of the two axes.

fig,ax = plt.subplots()
x = np.arange(0,2*CHUNK,2)
line, = ax.plot(x, np.random.rand(CHUNK),'r')
ax.set_ylim(-32770,32770)
ax.set_xlim(0, 2 * CHUNK)
fig.show()

Finally, we create an infinite loop that reads the data from the microphone, converts it into 16-bit numbers, and plots it using matplotlib.pyplot.

while 1:
    data = stream.read(CHUNK)
    dataInt = struct.unpack(str(CHUNK) + 'h', data)
    line.set_ydata(dataInt)
    fig.canvas.draw()
    fig.canvas.flush_events()

Now, when you run the code, you should see the program running flawlessly. If you encounter an error, check the default input device in your sound settings.

The complete code

import numpy as np 
import pyaudio as pa 
import struct 
import matplotlib.pyplot as plt 

CHUNK = 1024 * 2
FORMAT = pa.paInt16
CHANNELS = 1
RATE = 44100 # in Hz

p = pa.PyAudio()

stream = p.open(
    format = FORMAT,
    channels = CHANNELS,
    rate = RATE,
    input=True,
    output=True,
    frames_per_buffer=CHUNK
)



fig,ax = plt.subplots()
x = np.arange(0,2*CHUNK,2)
line, = ax.plot(x, np.random.rand(CHUNK),'r')
ax.set_ylim(-32770,32770)
ax.set_xlim(0, 2 * CHUNK)
fig.show()

while True:
    data = stream.read(CHUNK)
    dataInt = struct.unpack(str(CHUNK) + 'h', data)
    line.set_ydata(dataInt)
    fig.canvas.draw()
    fig.canvas.flush_events()

This completes the part one. Feel free to comment and discuss.
Want to learn how to create and setup your own Blynk server to get started with IoT ?
Or want to know how I created this website? Do care to give a thumbs up and share.😄
