Zaiste Programming · 1.2K views · 16 likes
Analysis Summary
Performed authenticity
The deliberate construction of "realness" — confessional tone, casual filming, strategic vulnerability — designed to lower your guard. When someone appears unpolished and honest, you evaluate their claims less critically. The spontaneity is rehearsed.
Goffman's dramaturgy (1959); Audrezet et al. (2020) on performed authenticity
Worth Noting
Positive elements
- This video provides deep technical insight into low-level systems programming, specifically how Linux kernel features like io_uring can be used to bypass traditional networking bottlenecks.
Be Aware
Cautionary elements
- The 'revelation framing'—positioning the guest as a researcher who knows more because he left the 'broken' academic system—can lead viewers to accept his technical benchmarks without standard peer-reviewed scrutiny.
Influence Dimensions
About this analysis
Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.
This analysis is a tool for your own thinking — what you do with it is up to you.
Transcript
Hello! Another day, a pretty exciting day, because we are meeting with Ash, the creator of many programming tools and a company called Unum. We'll be doing an interview with him. A super knowledgeable person, very smart, very low-level, C++ and C. It will be something completely different from what you saw previously on our channel. I'm hoping for the interview to be super specific, super low-level. We met Ash at the open source AI day in the Bay Area on Sunday. His talk was amazing, one of the best talks I've seen. One of the best of the two talks you've seen? I've seen a couple of talks, but this one was among the best, and I'm looking forward to it. So stay tuned and see you in a bit.

Welcome, Ash. It's a pleasure to have you here today. Thanks for agreeing to this interview and to sharing your perspective on AI in general. We met a couple of days ago, on Sunday in the Bay Area, at this event called Open Source AI. I was watching your talk and I was kind of blown away; in my opinion it was one of the best talks. I've only seen a couple, but yours was pretty great. One of the top two talks of the two talks you watched? That's the best compliment I can get; okay, I'll take it. There were a lot of interesting things, a lot of fascinating projects you are working on, but before we dive into that, maybe you could tell us a little bit about yourself, your journey, your story. How did you start with programming, and how did you transition to AI, if you didn't start with it from the get-go?

So, I started programming maybe close to 20 years ago, and I think it's not a good idea to cover all 20 years. In short, I did a little bit of basic educational programming, just learning the fundamentals of computer science. Then, in my middle school years, I got to do some web development and freelancing, learning how to work on my own, being an indie developer. One thing after another, I got myself into the Apple ecosystem with the 2009 aluminum-body 13-inch MacBook Pro, the first one. I remember it as if it were today: my first expensive tech purchase. Then I started developing iOS apps. I did that for a few years, then I left programming; I was at the very peak of the Dunning-Kruger effect, when you think you know everything, and I tried to explore something outside of computer science. So I spent a few years studying physics, and then I left to start this project that I'm still building, called Unum. That was eight and a half years ago, when I realized that I need to devote my life to building towards AGI and helping the world with AI infrastructure, and I decided to leave my passion, which is science, in favor of another passion, which is computing. So here I am.

Interesting. On your website you say that you're a "CS researcher without publications," which is pretty funny. Yeah, well, I actually tend to think about this a lot, how the educational system is built, as some of my first most successful applications were in education. The period I spent in university and in academia was one of the happiest periods of my life, very encouraging and fulfilling and intellectually stimulating. But in reality, I started reading more papers when I left academia. In university I would at best read one or two papers a week and properly understand them, and then, when I started this project, for a couple of years I would skim through maybe over a thousand papers a year, so on average three a day. You do that for a few years, you attend several conferences every month or so, and one thing after another you start getting a feel for the field: you don't even have to read every paper, you just kind of know what it is about. Interesting; it's kind of similar for me, though not to that extreme. I had the
same feeling: during my studies I wasn't really into reading papers, but afterwards I started searching for specific information, a specific topic, how to solve a problem, researching very deep. Yeah, and the structure of academia stimulates people towards publishing a lot. I'm not sure that's an incentive I like, and I don't want to be part of it. The way I think about it for myself: I really enjoy reading papers, but if I were to write, I would probably limit my career to just a couple of papers, and I would try to make them extremely short but fundamental, so that the ideas would apply outside of the original intended application and be broader. That's the end goal. Absolutely.

You've mentioned the company. Is "Unum" the right pronunciation? So, Unum. I'm not sure what the right way to pronounce it is, because it's a Latin word and not many people really speak Latin, and a language is shaped by speakers, not by old textbooks. But yeah, I assume "Unum" is the way. Okay. And you said you started working on it like eight years ago, and it was related to AI? Yeah. AI eight years ago sounds almost like a science fiction story. You're right; we were not taken seriously for a very long time, and you see all kinds of friction, from peers, from other founders, from people in academia. The way I thought about it is: AI, in the midterm, would require a lot of data processing, and it just so happened that I was very deep into computer science, both the algorithmic side of things and writing high-performance kernels. When you do science you occasionally have to implement computational kernels, simulations in CUDA, sometimes kernels in Fortran, that kind of stuff; we talked briefly about that. So I thought I could put my skills to good use and optimize all the infrastructure that people are going to need in the next few years to build smarter, more intelligent applications. So I kind of went from the application layer, where I developed a lot of iOS-ecosystem-related things, down a bit, finding the shared common denominator between all those dozens of apps that I was implementing, finding maybe the four or five core libraries that the world needs really well implemented, so that the next wave of applications can be smarter, more scalable, more intelligent, you pick your word.

I was kind of blown away, watching your talk, that you are so focused on performance. So, Unum consists of several, let's say, modules, right? Maybe we could go through them one by one and discuss each separately. There is USearch, UForm, UCall, and UStore, and the idea is to provide some layer of abstraction, can I use this word, to do certain operations, but the end goal is, as you say, to make search for AI more performant. Is that a good way of putting it? That's one angle. When we look at the modern data stack, a lot of the existing cloud companies really love building this monolith vision: this is our product, this is our platform, use it or don't, pick any other one. Got it. The way I generally write software, I try to find optimal ways to cut it into separate projects. It's very hard for me to guarantee, at any given point in time, that every project is state-of-the-art in its own way, but if you're passionate enough, you can just take the parts that you think are good enough and replace the parts that you don't like with other components, maybe not the ones that I wrote, maybe alternatives, and have a better option. I think one of the beautiful things about software is that it can be very modular, and one of the beautiful things about open source is that it encourages innovation and people learning from each other, forking each other's work. And I guess one of my biggest mistakes as a programmer was that I learned a lot from open source and waited too long to start contributing back. And I
guess many people have the same opinion. Now that I'm doing it, it's very fulfilling, and every day we see people coming in from different parts of the world and sharing their feedback, occasionally bringing in great ideas, sometimes sharing their patches to this global vision that consists of separate smaller libraries. Most of them would never use all four, or maybe the other five to ten that we have internally and haven't polished to a mature stage yet. Sometimes they would just use USearch, which you've mentioned; it's probably one of the most popular ones at this point. And yes, we can unpack every one of them. We should do that. But what you mention about modularity resonates with me because of the Unix approach. Exactly, I love the Unix philosophy: small things that do one thing pretty well.

I wanted to ask you, because during your talk you were taking this FastAPI application, a REST API or something like that, and you observed that it's relatively slow, and then you just added one module, UCall, and made it like 70 times faster or something like that. The RPC layer, right? Yes. That's actually an interesting place to pause for a second. People think about computers as things that compute, hence the name: number crunching, performing all kinds of numerical operations. In those places efficiency is very important, but you don't always feel the cost of abstractions, because if you're in a compiled language, your code still translates from the higher-level language into assembly, and then the CPU just executes your assembly. In IO-intensive applications, as opposed to compute-intensive applications, the cost of abstractions is much easier to trace. We deal with those multi-layer systems that accumulate complexity and reduce throughput at every layer. When we go through a computer science course, we have this networking diagram shown to us. Actually, I've never been to a computer science course, but I assume that's what people do. They have maybe seven layers of the TCP/IP stack, and every one of them has constraints on the length of the packet, on its structure, and every layer performs its operations of unpacking, parsing, decrypting. When you just look at this, you realize: wow, there are so many ways you can mess this up. There are so many ways one bad design decision in one specific operating system kernel will end up killing all of my application's performance, and after that I will have to start thinking about horizontal scaling. Suddenly I cannot sustain my users on one machine; I need ten machines, and I need a load balancer in front of them, I need a proxy, I need a separate layer of certificate verification or something like that. We have been producing and multiplying complexity over the last 20 years of the computing ecosystem without ever going back and asking ourselves: do we really need all those seven layers of the TCP/IP stack, or would it be wiser to, let's say, clamp all of it into a QUIC-like, HTTP/3-style thing? In the case of UCall, I designed a remote procedure call library that does not go that far, so I didn't even have to implement QUIC or design it at all. All I did was take a recent interface from the Linux kernel called io_uring, which allows me to avoid system calls and implement asynchrony more efficiently, and then, instead of separately implementing a Python server layer and a Python application, I implemented all of this as a library that has a Python SDK. I removed maybe four or five different layers of abstraction, and yes, a 70x throughput improvement, even on a single CPU core, is definitely possible. It sounds like a low-hanging fruit, in a way. I'm not sure. When you describe it, it does sound like it is, once you know what you're doing.
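The cost of marshalling layers that Ash describes can be felt even in a few lines of Python. The following is a toy illustration of a single serialize/parse hop, not UCall's or io_uring's actual machinery; function names and the JSON shape are made up for the example:

```python
import json
import timeit

def add(a, b):
    # the actual application logic: trivially cheap
    return a + b

def call_direct(a, b):
    # in-process call: no marshalling layers at all
    return add(a, b)

def call_layered(a, b):
    # simulate one serialize/parse hop of an RPC stack, all in one process:
    # the "client" packs a request, the "server" parses it, dispatches,
    # and packs the reply, which the client parses again
    request = json.dumps({"method": "add", "params": [a, b]})
    parsed = json.loads(request)
    reply = json.dumps({"result": add(*parsed["params"])})
    return json.loads(reply)["result"]

assert call_direct(2, 3) == call_layered(2, 3) == 5

n = 100_000
t_direct = timeit.timeit(lambda: call_direct(2, 3), number=n)
t_layered = timeit.timeit(lambda: call_layered(2, 3), number=n)
print(f"one JSON hop costs ~{t_layered / t_direct:.1f}x a direct call")
```

On a typical machine the layered call is an order of magnitude or more slower than the direct one, and in a real stack this overhead is paid once per layer, per packet.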
As a community, I think we should find ways to encourage more of this. Occasionally, when I do this kind of stuff, the original implementation may take only two days, and there are plenty of engineers who might be interested in this, but they are not always encouraged to try. Of course, the first time you do it, it may not take two days. The example I generally refer to would be either database engines or matrix multiplications, because I wrote a lot of those. The last database engine I wrote was my eighth database engine, and every single one of them takes less and less time to develop. The first time it may take me a few months, then a few weeks, and then every iteration is less than a week. Because you already have the context, right? You already know the path from the very beginning to the end, and then you just add new information on top of that. It was the same with web development and application development a decade ago. I remember that when I was developing some of my first apps, of course I would make dumb mistakes; you know, you switch from one programming language to another, and, let's say, even indexing of sequential data structures can start from zero or from one. Yes, there are languages that index from one, and they're still in use today. I would make those stupid mistakes, and every little app would take me months to develop, but once you get a sense of it, every following experiment is lighter and easier.

Pretty interesting. I'm wondering about another product. We are huge fans of SQLite; we loved using SQLite before it became popular, before some people suggested it could be used on the edge and easily replicated. So what's your take on SQLite? I think I'll be quoting some people this time. The quote that comes to mind now, I think it was Bjarne Stroustrup, the author of C++, who said there are languages that people like and languages that people use. Maybe, to a certain extent, SQLite fits this description, even though of course it's not a language. I personally have very strong opinions on a lot of computing-related topics, especially storage, and I believe that storage today can be done much more efficiently than any single database does, including SQLite, of course. But it would be foolish not to say that SQLite stood the test of time better than practically any piece of software ever written. SQLite works like a charm on Macintosh, on Linux; you can make it work on Windows, even though compiling it as a library is not always easy. You don't need a server, right? Yeah, you don't need a server, and that's actually a very important distinction. Even when you talk to people in the Bay, the first thing they ask is: do you scale horizontally, do you have a web server? And then someone comes to me and says, oh, we have a 100x more scalable solution, and you ask them how, and they say, oh, we just take a thousand servers. Guys, this is not really scalable; throwing a disproportionate amount of resources at a problem is not a solution. Networking is exceptionally expensive: you have to package the data, you have to serialize it, you have to send it over the network, then you have to perform all of those operations in reverse order on the opposite end, and only then can you continue working with the data. Just not having to pay those penalties means that SQLite by itself is, in a lot of use cases, many times better than Postgres or any other super-well-funded database company out there. I totally agree.

This is a big subject, but maybe we could also switch gears a little bit. I would like to talk more about USearch, because AI is hype now, maybe that's a pejorative word, but it's pretty popular, and there are a lot of companies creating vector databases, or vector search engines, and all that stuff.
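Stepping back to the SQLite point for a moment: the no-server advantage is easy to demonstrate with Python's bundled sqlite3 module, which links SQLite in-process, so a query is a library call rather than a network round trip. A minimal sketch:

```python
import sqlite3

# open a database as a plain in-process library call: no server process,
# no socket, no serialization of rows over a network (":memory:" keeps it
# in RAM; a file path would persist it to disk instead)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO docs (body) VALUES (?)",
                 [("hello",), ("world",)])
rows = conn.execute("SELECT body FROM docs ORDER BY id").fetchall()
print(rows)  # → [('hello',), ('world',)]
conn.close()
```

Every operation above happens inside the calling process, which is exactly the class of penalties (packaging, serializing, sending, unpacking) that a client/server database pays on every query.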
How does USearch fit into that, and what's your ambition, your goal with it? How do you see the competition versus what you're building? I think that's the perfect segue, because quite recently, due to community requests, I've added SQLite integration for USearch. I wouldn't call it a full-fledged integration. But before that: what is USearch? USearch is a vector search engine, essentially meaning that you can have a lot of vector entries, high-dimensional arrays of numbers, and to search through them efficiently you need some indexing data structure. Just as we search through sets of strings or integers using binary search trees or hash tables, hash maps, hash sets, whatever, to search through large arrays of vectors and perform approximate nearest neighbor search you need this kind of data structure. This is, again, not my first vector search engine; this topic is very close to my heart, so much so that when I was building up our teams, implementing or optimizing an existing vector search engine was one of my interview questions, and I myself have written 15 of them, more or less. That's pretty cool. So, there are different approaches to implementing nearest-neighbor search structures. The three commonly known ones would be KD-trees, which stands for k-dimensional trees; locality-sensitive hashing, or LSH for short; and the last well-known family would be based on proximity graphs. A graph is a set of vertices and edges connecting them, like a network, and proximity graphs would be networks of close vectors that you link with each other; you then traverse this network searching for the best matches for your candidate query. In the family of proximity graphs there are two well-known kinds of structures: NSW and HNSW, the hierarchical navigable small world graphs. USearch implements the latter. It does so in C++11, to be very broadly compatible with other systems.
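For context, the baseline that proximity-graph indexes like HNSW approximate is an exact brute-force scan over all vectors. A pure-Python sketch of that baseline (illustrative only, not USearch's implementation):

```python
from math import sqrt

def cosine_distance(a, b):
    # 1 - cosine similarity; 0.0 means the vectors point the same way
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def nearest(query, vectors):
    # exact O(n·d) scan over every vector; an HNSW index answers the same
    # query approximately by traversing a layered proximity graph,
    # visiting only a small fraction of the vectors
    return min(range(len(vectors)),
               key=lambda i: cosine_distance(query, vectors[i]))

vectors = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]
print(nearest([1.0, 0.0], vectors))  # → 0
```

The scan gives the ground-truth answer but touches every entry, which is why it stops being viable at the "billions of entries" scale discussed later in the interview.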
Practically, the core of the index is one header file. It has bindings to, I assume, maybe ten programming languages, including C99, being ABI-stable, while the implementation is C++11. It has bindings to Python 3, JavaScript and TypeScript, Rust, Go, Objective-C and Swift, C#, and a few other bindings that are hard for me to remember, and the community is also jumping in and adding support for third-party stuff; this is just the first-party support, by the way. So the CI, as you can imagine, needed to test and validate every single binding, is a nightmare. But there is a small detail I think we are forgetting to add: USearch is also super fast, right? If not the fastest, to my knowledge. I think we didn't forget; it was obvious. Yes. There are a few things that are true about C++, I would say, and maybe C as well. Those languages give you a lot of flexibility to write software, and you often see this pattern where people start implementing stuff, let's say, in Python, and I do that a lot, and then switch to a more efficient native implementation. But I tend to think C++ is not one language; it's more of a combination of different language features, and every expert or senior developer selects his own subset. There are a few things you can do that, combined, result in exceptional software: knowing how to use CPU caches effectively, how to avoid memory allocations, heavy template metaprogramming, which affects compile time negatively but may help you at runtime. And the last, but definitely not the least, of the few things we do that maybe differentiates our software from most of it: we are ready to write assembly when we need to. We write assembly for x86, for Arm, and for CUDA. We also write custom drivers for storage and networking, meaning that when we need to replace abstraction layers, there is nothing at which we will stop. No matter how long it takes to design it, if it's top performance, we're going to build it.

That's an awesome approach. I remember during one of your talks you were doing this analysis of system calls, for the F... implementation I think it was, or something, I'm not sure exactly, but you were watching the calls and you observed that there is some kind of bottleneck, and this led you to the idea that maybe the allocation is not optimal for this graph creation. That's pretty mind-blowing, I must say, this level of attention to detail, and how you think and how you approach the problem. Well, when you start doing it for the first time, first you read a lot of code. After the time spent reading it, you may want to run it, so you start profiling it. Profiling native code is very complicated, especially when the code bases are convoluted. For example, the last couple of days I was poking around the PyTorch code base, and PyTorch is one of those examples of very convoluted code bases with a lot of abstraction layers. So you are either at the top layer or at the bottom layer, where you write kernels, but understanding how the machinery in between works is really hard. That's why I believe in this very rare kind of software that can be short enough to understand and comprehend, maybe taught over a couple-of-hours-long course, but still performant, maybe state-of-the-art performance. One thing is to design a working piece of software; another level of commitment is making the working piece of software elegant; the next level would be short; and then, once it's short and elegant and working, the top tier of software would also be extremely performant. We hope to get to that standard.

That's pretty amazing. When you were talking about that, it reminded me that Andrej Karpathy recently released this llm.c implementation, and I think it's a similar philosophy: there are more and more people trying to simplify things, to
reduce the number of abstractions, flatten them, or just see how we can improve things, and it seems like there's a good trend in that direction. I totally agree. But I'm wondering: for open source projects it makes sense, but for a company, how sustainable is it to try to replace every big piece of the stack, in networking for example? Oh, that's a good question, to which I may not have the answer. There are some companies that have been particularly successful at commercializing open source, in ways that may not necessarily align with the direction of the original author of the software itself. I think that's one of the big obstacles we have to solve as an industry: how we incentivize open software. Right. And the implications might be much wider than just the world of engineering and writing code. I've noticed that science is becoming more computational, and when we look at all of those scientific domains, let's say biology, we have all those pharma companies who prefer to do all of their innovation behind closed doors, with a lot of patents, with constant legal battles. I think if we manage to solve this for IT, we may be able to later standardize it for other disciplines, and imagine a world where pharma would be more like Wikipedia or Linux, where a lot of parties who often disagree, and have very strong opinions, still work together on a shared mission. Yeah, I agree.

I wanted to step back a little bit and return to the modules: we have USearch for vector search, we have UStore, UForm. Could you describe how those pieces fit together? For example, what is UForm, what's its input and output, and can the output of one be the input to another? Of course. Today there are millions of applications built with different data processing technologies, but in reality, I think there are very few core pieces of infrastructure that practically every app needs. One would be a storage layer; another would be an indexing layer, to be able to fetch the relevant pieces of information quickly; another would be a semantic layer, and that is such an overloaded word, but that's what UForm is; and the last but not least would be networking. UCall is a networking library; it allows you to separate the server logic from the client. Yes, of course we want to make portable software that can run on a phone, on a watch, on any IoT device, but sometimes you need to access remote information; you want to call a server that does certain work for you. That's where UCall comes in: you delegate work from your client application to a server, often sending a JSON-RPC request. Then, on the other end of this pipeline, the server might be interested in processing some information and giving you an intelligent response, and in doing so it may end up working with a mixture of structured and unstructured data. Dealing with structured data is generally quite easy: we write a lot of if statements. Structured data would be a document that is already parsed, like a JSON or an XML or an HTML tree, and what you do with that is write those if statements, for loops, whatever, and build your business logic. But when you deal with unstructured data, you need some, ideally universal, form to represent all of this unstructured information before you can connect it to your logic pipelines. That's what UForm is: universal forms. We take images and text as the two original and most commonly used modalities of data. That's the term the AI community loves: modality, multimodal. Yes, multimodal. UForm is a family of pocket-sized multimodal AI models that we are building for vision-language domains. One feature of those models is that they are very small; they can run on both small devices and large ones. They are multimodal, meaning they can deal with images, text, and hopefully very soon the video domain as well. They are capable of producing embeddings, so that you can search through the semantics, through the semantic space, of those multimedia documents. But some of them are also generative: they can produce a textual output, a description of an image, or answer questions about the visual domain. They are often multilingual, though sadly not all of the models; we are somewhat compute-constrained, being an open source movement, and we don't always have the resources of some of the commercial entities. But we take a lot of pride in our international background, and we love how amazing all the different languages are, so some of our models are trained on a balanced dataset, meaning equal representation of every language in the training set. Generally this is constrained to 20 languages or so: let's say an equal amount of content in English and Chinese, Russian and Arabic, German and French, Spanish and Portuguese, and all kinds of other language pairs. With these models you can produce an embedding that allows you to navigate vast volumes of images and find similar ones. Someone can just type in a query like "three guys sitting in a pink room and talking about technology," and even if you have billions or hundreds of billions of entries, with USearch, the vector search engine, you can navigate the embeddings of all those different images and video frames and textual documents and find the right one. And then, last but not least, when you know the identifier of the document that you're interested in, you can fetch it from a storage system, pulling in the raw original and streaming it back to your user. That's where UStore comes in. This is not an exhaustive list of the stuff we've built over the last few years, but it felt very natural that we should start by open-sourcing those four technologies.
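The four-layer pipeline just described (embed, index, search, fetch) can be sketched end to end with toy stand-ins. Everything below is hypothetical and only illustrates how the pieces hand data to each other; a real system would use UForm for the embeddings, USearch for the index, UStore for storage, and UCall to expose the search over the network:

```python
# a tiny fixed vocabulary stands in for a learned embedding space
VOCAB = ["guys", "room", "pink", "technology", "cat", "keyboard"]

def embed(text):
    # hypothetical "semantic layer": a bag-of-words vector over VOCAB,
    # standing in for a real multimodal embedding model
    words = text.lower().split()
    return [words.count(w) for w in VOCAB]

storage = {  # storage layer: identifier -> raw original document
    1: "three guys sitting in a pink room talking about technology",
    2: "a cat sleeping on a keyboard",
}
# indexing layer: identifier -> embedding
index = {doc_id: embed(body) for doc_id, body in storage.items()}

def search(query):
    q = embed(query)
    # exact dot-product scan in place of an approximate graph traversal
    best_id = max(index, key=lambda i: sum(a * b for a, b in zip(q, index[i])))
    return storage[best_id]  # fetch the raw original by its identifier

print(search("a pink room and technology"))  # → the first document
```

The flow is the one from the interview: a text query becomes a vector, the vector is matched against the index, and the winning identifier is used to pull the raw original back out of storage.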
You also mentioned during your talks that you applied some interesting techniques; for example, you reduced the number of dimensions, and because of that you can be faster and more memory-efficient, if I'm not mistaken, and it takes less space to store that information. That was pretty interesting, mind-blowing in a way. Sure. We are not magicians; we cannot solve every problem in a computer, but sometimes we find ways around them, and one of the most obvious fundamental limitations in modern computers is the latency of the memory system. When we talk about latency, we often think about the latency of accessing remote systems: you build an app, and it takes 100 milliseconds to fetch an image from a remote server, because the server is somewhere in a different country, and then you add a CDN, a content delivery network, to cache something closer, so that the latency drops to, say, 20 milliseconds, and the user doesn't feel the difference. In my case, at least for the last few years, we operate at a different level of latencies. It's not 20 milliseconds versus 100 milliseconds; I can spend a week optimizing an operation that takes 10 nanoseconds to make it go as fast as 5 nanoseconds. The reaction that generally comes up is: why the hell would a person with a fairly high hourly rate spend seven days working on an operation that takes 10 nanoseconds, to optimize it down to five? Well, the answer is that some of those operations are executed billions of times per second, on tens of thousands, hundreds of thousands, maybe millions of machines worldwide, actually maybe even billions. So no matter how much time and effort you put into optimizing them, there is still a positive effect. One of my favorite examples is my strings library. I don't know if you've seen it, guys. StringZilla? Yes, that's it. When we think about programming, every single language has numerical types, like floating points or integers, and that kind of stuff. Float and int are implemented in hardware, so the language just provides you an abstraction to call into the specific assembly instruction on the CPU that takes these eight bytes and those eight bytes, and then the arithmetic logic unit does all the complicated circuitry to shift the bytes, intersect them in different ways, and give you the answer. The first non-trivial type that appears in every language, class, or structure is a string; that's the first thing people implement in software. And I kid you not, even there, there's so much space for optimization, in every single ecosystem I've seen. StringZilla was born about five years ago, for a conference talk, as proof of a statement: that strings in software can be so much more efficient. To make the point, I took the most basic thing every single programming language has, a string, and I optimized the hell out of it. I took SIMD assembly, which is single instruction, multiple data, the kind of assembly instructions that make up maybe 90% of modern assembly, yet almost no modern compiler knows how to generate correctly, and I rewrote all the relevant string operations with that. Even today, five years after that talk, if I benchmark StringZilla against libc, the C standard library, kind of the state of the art in that domain among the standard libraries, for substring search, something as basic as that, even on Arm, which is in everyone's pocket, a few billion CPUs running on Arm, the difference between StringZilla and libc is three and a half times in throughput. It's like 10 GB per second versus three. And why do you think that is? I mean, why don't people improve those things? Maybe it's a psychological or philosophical question. There's definitely a psychological, and maybe an economic, issue here.
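The throughput framing here (bytes scanned per second) can be reproduced on any machine with Python's built-in `str.find`, which is itself a tuned C routine. The numbers are entirely machine-dependent and say nothing about StringZilla itself; this only shows how such a figure is measured:

```python
import time

# ~10 MB haystack with the only match at the very end, so the whole
# buffer must be scanned
haystack = "a" * 10_000_000 + "needle"

start = time.perf_counter()
pos = haystack.find("needle")  # CPython's substring search, implemented in C
elapsed = time.perf_counter() - start

assert pos == 10_000_000
# bytes scanned divided by wall time gives the GB/s figure quoted above
print(f"~{len(haystack) / elapsed / 1e9:.1f} GB/s (machine-dependent)")
```

Dividing the haystack size by the wall time is exactly the metric behind claims like "10 GB/s versus 3".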
issue here. We definitely have improperly structured incentives in our community, and that's an easy way to get into a flame war, I guess; if you're into those, you should check out Twitter for FFmpeg and some of those discussions. There's a fairly small pool of people who are ready to write assembly. Pure assembly is sometimes a steep learning curve; a much easier way to enter would be C intrinsics. Intrinsics are practically a layer between C and assembly: they look like C, but in reality they're kind of assembly. And aside from implementing the library in those, you have to provide a lot of good examples, documentation, and blog posts around how this works and how it's structured. Even in StringZilla, for example, the readme, the main readme, not the only one in the library, is 64 kilobytes of text, with diagrams, with code sections, with examples. And the source code is also very deeply documented, in the C code base, in the Rust binding, in the Swift binding, in the Python binding, to encourage people to learn this stuff and contribute, or maybe just fork it and reimplement some things they think they can do better. So there is a complexity in learning this, and there is a complexity in maintaining it. Sometimes I hear statements like "this is too much effort", but to me it feels like it's worth it if strings are in every single programming language, in every single program, and if your library runs on every single CPU. We run on Arm and on x86, and that's specialized assembly; we also have serial, non-specialized code that can run on any other CPU, be it 32-bit or 64-bit, little-endian or big-endian. We run on IBM mainframes from 20 years ago, to give you a perspective of how broad the coverage is. Let's say USearch has bindings to 10 programming languages, one of them being Python. And when you just take Python, you would think the most cross-platform Python package would be NumPy, but then you check the binaries that NumPy ships, being just a BLAS wrapper:
they don't ship their own assembly; they cover 35 targets. StringZilla and SimSIMD, for example, another one of my libraries, ship 105 binaries just for Python alone, on every single release. So there's a lot of room for improvement in our software pipelines, and I would definitely encourage everyone to check out the blogs of some of the other great engineers who write about this stuff.

Maybe one last question, a bit of a random question, and maybe still philosophical. When I approached you after your talk, I started talking about RISC-V. It's a little big passion of mine. I'm not super knowledgeable about RISC-V, but what I like about it is the promise of applying open-source principles to processors, to hardware, in a way that you don't have to worry about legacy stuff, like supporting legacy registers or architectures, and you can in a way start from a clean slate. I just wanted to ask your opinion: what do you think about that, what do you make of it?

I'm very excited about RISC-V, as you can imagine, since I was just mentioning the potential impact of open source and what would happen if we applied the same principles in biology, in pharma, and in other domains; hardware is another one of them. Today we live in this oligopoly where we have just a couple of foundry companies, a couple of chip design companies, and a couple of chip design software companies that kind of control the whole pipeline. And it's not just about the dependency on those companies. I like their work; there are some amazing people working at and running those companies. But it definitely constrains innovation, it constrains the ability to reason about the system, and it negatively affects security. RISC-V is a very interesting instruction set. It also has some of those SIMD instructions, which attracts people like me, who love to brag about how bad the compilers are and how good the assembly can be. I've seen just one library that is about to get
support for native RISC-V vector extensions. It's called simdutf, another great library worth supporting and maintaining, not mine; it's by Daniel Lemire and a few other great engineers. But yeah, RISC-V is absolutely amazing. I wish I had a server running on RISC-V so that I could prototype my software on it; we'll have to wait a few years.

It seems that this would be perfect, right? Software optimization and hardware optimization at the same time.

Yeah, but we have to be very clear here: if you want to co-design the hardware with the software, you would still not be able to do it on RISC-V directly. You would probably need an FPGA to test the hardware kernels, and then you would probably, over time, plan the tape-out process. You may end up designing a new RISC-V feature and then taping it out as an extension of an open RISC-V core. So it would be a multi-year process.

Yeah, but it sounds very exciting.

Yeah, definitely exciting, and a lot of space for people who love to think beyond abstractions.

Absolutely. Ash, thank you very much for joining us today for this amazing interview. We learned a lot. I love this kind of interview, and good luck with your projects. Low-level, but lots of information.

Thank you, guys. We have a lot of high-level stuff as well, and we are very excited about seeing more developers coming from JavaScript, TypeScript, Python, and other places, and engaging with the rest of our ecosystem. For example, this week I'm porting UForm and other libraries to TypeScript and JavaScript, and we couldn't be more excited to see more developers adopting them.

Absolutely, maybe we can help.

Yeah, maybe we could help in a way. I think that would be the magic of open source. Thank you, Ash.

Thank you, guys.
Video description
In this interview, we talk with Ash Vardanian, the founder of Unum, a company that creates AI and programming tools. Unum offers a suite of efficient data-processing tools tailored for AI and semantic search applications. Their key offerings include:

- UForm for multi-modal data processing (text, images, etc.): https://github.com/unum-cloud/uform#uform
- UCall for fast remote procedure calls: https://github.com/unum-cloud/ucall
- USearch, a vector search engine for similarity search on large datasets: https://github.com/unum-cloud/usearch
- UStore, a modular, multi-modal, and transactional database system designed to store and process different data modalities like blobs, documents, graphs, vectors, and text: https://github.com/unum-cloud/ustore

UStore integrates with USearch, UForm, and popular data science libraries, offering vector search, transactional support, and a unified solution for AI and semantic search applications that require efficient handling of diverse data types.

Ash shares his journey, from his early days in programming to leading in AI technology. He explains how Unum's tools, like USearch, UForm, UCall, and UStore, help make AI work faster and better. Ash discusses how he improves data search efficiency and reduces processing latency. He also talks about the benefits of open-source projects, where anyone can contribute and improve the tools. Ash shares his thoughts on the future of AI and the potential of open-source hardware, like RISC-V, which can make computers more powerful and customizable. This interview provides insight into how Ash and Unum are making advanced AI technology accessible and efficient for everyone.

Ash's website: https://ashvardanian.com
Ash's YouTube: https://www.youtube.com/@UCI7fuiwVwAtI_3I89BrT7qw
Ash's X: https://x.com/ashvardanian