Interviews bring with them interesting technical questions and different world views. Here's one about memory management.
When I started looking at places to go for my new gig, I went to a few interviews at startup companies. It was an interesting experience – after years as a developer and a marketer, I had to sit in front of other developers and sell myself.
Now you need to understand, I come from a background of developing SDKs for other developers. In my line of business, you have no clue who will be using your code and for what. It can be an embedded client with puny memory and CPU, or it can be a huge server farm running a five 9’s telephony service. This usually meant doing everything manually and preparing for the worst.
In one of my interviews, I was asked to “develop” a system that saves and loads graphical elements in an image: things like rectangles, circles, text areas, etc.
So I did. And with my usual thinking, I decided to make my structures small. Very. And then deal with memory allocation on my own, or rather in a way that made it tight – as few malloc() or new calls as possible. This didn’t work out well with the guys in the room…
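To give you an idea of what I mean, here’s a minimal sketch of the approach (my own illustration for this post, not the code from the interview – the field names and sizes are made up): small, packed element structs handed out from one pre-allocated block, so loading an image costs a single allocation instead of a malloc() or new per element.

```cpp
// Illustrative sketch only: tightly packed element structs plus a simple pool
// that hands them out from one pre-allocated block, keeping malloc() calls
// to a minimum. Field names and sizes are assumptions, not the interview code.
#include <cstdint>
#include <cstdlib>

enum class ElementType : uint8_t { Rect, Circle, Text };

#pragma pack(push, 1)
struct Element {
    ElementType type;
    int16_t x, y;        // position
    int16_t w, h;        // size (radius stored in w for circles)
    uint32_t dataOffset; // offset into a shared buffer for text payloads
};
#pragma pack(pop)

class ElementPool {
public:
    explicit ElementPool(std::size_t capacity)
        : storage_(static_cast<Element*>(std::malloc(capacity * sizeof(Element)))),
          capacity_(capacity) {}
    ~ElementPool() { std::free(storage_); }

    // One malloc() up front; every element afterwards is just a pointer bump.
    Element* allocate() {
        if (storage_ == nullptr || used_ == capacity_) return nullptr;
        return &storage_[used_++];
    }

private:
    Element* storage_;
    std::size_t capacity_;
    std::size_t used_ = 0;
};
```

The point isn’t the specific layout – it’s that the allocation pattern is something you decide, instead of something that happens to you one new at a time.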
The discussion turned religious, and from then on things only deteriorated.
My own statement was that you can do things faster if you do them manually, since you are the king of your castle: you know your application and its behavior (hopefully), so you can design memory allocation to fit your needs. The startup, on the other hand, relied on Linux and Intel to handle dynamic memory allocation in the best way possible for them – don’t fix things that ain’t broken.
As with any good technical debate, this one left me with an uneasy feeling, and the best way to settle that is by going to the Google oracle, or in this case – Stack Overflow. There’s a similar question there with answers that go both ways. No help there…
I guess it really is a matter of priorities. A university professor whom I highly value once told me that you need to first build the product and only then start optimizing. I believed in architecting the product with optimizations in mind from day one. When your application gets big enough that you need to scale it, you will need to optimize memory allocations. Either do them by hand instead of dynamically, or go find yourself a better memory allocation mechanism (or garbage collection mechanism, depending on the language) than the one provided by default by the operating system.
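To make that second option a bit more concrete, here’s a minimal C++ sketch (again, my own illustration, not anything from the interview or the startup): overriding the global operator new and delete lets you route every allocation through an allocator of your choosing – or, as a first step, just count allocations to see whether the default is actually hurting you.

```cpp
// Minimal sketch: replace the global allocation path. Here it only counts
// allocations while still delegating to malloc(), but this hook is where a
// pool, arena or third-party allocator would plug in.
#include <atomic>
#include <cstdio>
#include <cstdlib>
#include <new>

static std::atomic<std::size_t> g_allocations{0};

void* operator new(std::size_t size) {
    g_allocations.fetch_add(1, std::memory_order_relaxed);
    if (void* p = std::malloc(size)) return p;
    throw std::bad_alloc();
}

void operator delete(void* p) noexcept {
    std::free(p);
}

int main() {
    auto* x = new int(42);
    delete x;
    std::printf("allocations so far: %zu\n", g_allocations.load());
    return 0;
}
```

In practice you would more likely link in a drop-in allocator such as jemalloc or tcmalloc than write this glue yourself, but the principle is the same: the default allocator is just a default, not a law of nature.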
The things to ask yourself are: