https://www.linkedin.com/feed/update/urn:li:activity:7197667170128945152
Abu Dhabi Investment Authority (ADIA) | Quantitative Research & Development Lead
https://www.linkedin.com/in/gautier-marti-344b565a/
Pandas code is now 50x faster on Google Colab with zero code changes.
It now comes with native integration for RAPIDS cuDF, which powers GPU acceleration for pandas.
To boost your pandas code, use this single command at the top of your NVIDIA GPU-enabled Colab notebook:
%load_ext cudf.pandas
This feature can even accelerate pandas code and queries generated by Colab AI, ChatGPT, or any LLM-based chatbot, without needing to learn a new paradigm.
Colab demo notebook: https://lnkd.in/gVSgsirJ
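To make the "zero code changes" point concrete, here is a minimal sketch; the DataFrame and column names are invented for the example, and the `cudf.pandas` magic only takes effect on a GPU-enabled runtime:

```python
# In a GPU-enabled Colab notebook, run the magic first:
#   %load_ext cudf.pandas
# After that, the ordinary pandas code below is transparently executed
# on the GPU by cuDF; without the magic it runs unchanged on the CPU.
import pandas as pd

df = pd.DataFrame({
    "ticker": ["AAPL", "AAPL", "MSFT", "MSFT"],
    "price": [190.0, 192.0, 410.0, 414.0],
})
avg = df.groupby("ticker")["price"].mean()  # GPU-accelerated under cudf.pandas
print(avg.to_dict())
```

The same notebook cells therefore work on both CPU and GPU runtimes, which is what lets LLM-generated pandas code benefit without modification.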
↓
Are you technical? Check out https://AlphaSignal.ai to get a weekly summary of the top trending models, repos and papers in AI. Read by 180,000+ engineers and researchers.
https://www.linkedin.com/feed/update/urn:li:activity:7197703974618116096
Chief Quantitative Analyst |
https://www.linkedin.com/in/ant1savine/
Now that the flow of testimonies about the passing of the legendary Jim Simons has faded, I think I can talk a bit about him. I noticed, by the way, that those I know who knew him best remained rather silent, I guess out of respect for his stature.
Jim was one of the first people I met when I started studying financial math in 1994, as an old acquaintance of my father Adrien Douady.
It is difficult for me to speak of Jim without mentioning Dennis Sullivan, one of his closest collaborators and friends, himself a very close collaborator and friend of my father - almost an uncle to me (Dennis was awarded the Wolf Prize in 2010 and the Abel Prize in 2022, the equivalents in mathematics of a Nobel Prize). This shows the altitude at which Jim was flying in mathematical research. It is a bit delicate to make comparisons, but it needs to be said how different math research of this kind is from what we commonly call "financial engineering".
An anecdote about this: when I arrived in New York and we decided, with the late Marco Avellaneda, to start our math finance seminar at the Courant Institute (NYU) in 1995, we met Jim to ask for his support for the seminar (Renaissance Technologies was already a very famous hedge fund). He was very kind and welcoming, then he asked: "By the way, what is mathematical finance?". He truly had no clue. For him, the math of financial markets was about statistics, microstructure, information theory. Pricing an option, the Black-Scholes model, wasn't his cup of tea.
We know Jim for #Renaissance and its flagship #Medallion fund. People speak of Chern-Simons classes without really knowing what that means (some invariants in the algebraic classification of fiber bundles, but I'd need to explain what a "fiber bundle" is, which I'm not going to do here; please check Wikipedia). Less known is the fact that he was part of Shannon's team on info
https://www.linkedin.com/feed/update/urn:li:activity:7197672586149814272
University of Oxford | Associate Professor
https://www.linkedin.com/in/blanka-horvath-56482873/
I’d like to thank Professors Rama CONT and Blanka Horvath for inviting me to speak at the Oxford Mathematical Institute seminar, and for moderating the talk.
I was excited to see a lot of interest in the audience for the use of information geometry and decision theory in portfolio construction. Thanks to all attendees for your questions and interesting conversations after the talk!
https://www.linkedin.com/feed/update/urn:li:activity:7197924394302586880
Barclays Investment Bank | Vice President
https://www.linkedin.com/in/hariomtatsat/
"Attention Is All You Need", published by Google researchers in 2017, is widely considered a foundational piece for building Large Language Models.
Here's why it's so important:
Transformer Architecture: The paper introduced the Transformer, a novel neural network architecture. This architecture revolutionized how models process text data. Unlike previous models, the Transformer uses a self-attention mechanism, allowing it to understand the relationships between words in a sentence more effectively.
Self-Supervised Learning: The Transformer also made large-scale self-supervised (often loosely called unsupervised) pretraining practical for LLMs. This means LLMs could be trained on massive amounts of raw text data without the need for manually labeled examples, significantly improving their ability to learn language patterns.
While the Transformer isn't the only factor in LLM development, it's a major building block. Many of the most powerful LLMs today, including Gemini, utilize Transformer-based architectures as their core.
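To make the self-attention idea above concrete, here is a minimal single-head scaled dot-product attention in plain NumPy. The dimensions and random weight matrices are toy placeholders, not a trained model; real Transformers use multiple heads plus learned projections:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise token affinities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)               # row-wise softmax
    return w @ V                                     # each token mixes in context

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                          # 4 "words", embedding dim 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # one context-aware vector per token
```

The softmax row for each token says how strongly it attends to every other token, which is the "relationships between words" mechanism the paragraph describes.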
Check out our course Machine Learning for Finance: https://lnkd.in/gtJDWcus
Separate modules for each AI and machine learning type, with exhaustive coverage of the concepts.
15+ Real-World Practical Applications
Financial Applications Coverage
- Algo Trading
- Portfolio Management
- Fraud detection
- Lending and Loan Default Prediction
- Sentiment Analysis
- Derivatives Pricing and Hedging
- Asset Price Prediction
- and many more
Course Description
Supervised Learning
Regression and Classification models
1. Linear and Logistic Regression
2. Random Forest and GBM
3. Deep Neural Network (including RNN and LSTM)
Includes 6+ case studies
Unsupervised Learning
Clustering and Dimensionality Reduction
1. Principal Component Analysis
2. k-Means and hierarchical clustering
Includes 5+ case studies
Reinforcement Learning and NLP
Value/Policy based RL models and sentiment analysis
1. Deep Q-Learning RL model
2. Policy-based RL models
https://www.linkedin.com/feed/update/urn:li:activity:7197836552545153025
Abu Dhabi Investment Authority (ADIA) | Quantitative R&D Lead
https://www.linkedin.com/in/lehalle/
It seems to me that before "urgently figuring out how to control AI systems much smarter than us" we need to have the beginning of a hint of a design for a system smarter than a house cat.
Such a misplaced sense of urgency reveals an extremely distorted view of reality.
No wonder the more based members of the organization sought to marginalize the superalignment group.
It's as if someone had said in 1925 "we urgently need to figure out how to control aircraft that can transport hundreds of passengers at near the speed of sound over the oceans."
It would have been difficult to make long-haul passenger jets safe before the turbojet was invented and before any aircraft had crossed the Atlantic non-stop.
Yet, we can now fly halfway around the world on twin-engine jets in complete safety.
It didn't require some sort of magical recipe for safety.
It took decades of careful engineering and iterative refinements.
The process will be similar for intelligent systems.
It will take years for them to get as smart as cats, and more years to get as smart as humans, let alone smarter (don't confuse the superhuman knowledge accumulation and retrieval abilities of current LLMs with actual intelligence).
It will take years for them to be deployed and fine-tuned for efficiency and safety as they are made smarter and smarter.
https://lnkd.in/eaJ5uuMk
https://www.linkedin.com/feed/update/urn:li:activity:7197927565104189440
Abu Dhabi Investment Authority (ADIA) | Global Head - Quantitative Research & Development
https://www.linkedin.com/in/lopezdeprado/
I’ve worked in Data Science for a while. My journey into the field has been almost completely self-taught. In my learning I have prioritized what is effective and works best, rather than fancy high-end tools or techniques that add unnecessary complexity. From everything I’ve seen over the years, here are my main takeaways:
* Python is good enough for 99.9% of tasks
* Jupyter is good enough for 99.9% of tasks
* Storing tabular data in CSV files is good enough for 99.9% of tasks
* Modeling your tabular data with XGBoost is good enough for 99.9% of tasks
* Working on your own laptop is good enough for 99.9% of tasks
* Working on CPU is good enough for 99.9% of tasks
* Installing libraries on bare metal is good enough for 99.9% of tasks
https://www.linkedin.com/feed/update/urn:li:activity:7197816932509577217
aiXplain, inc. | Head of AI Lab | Sr. Principal Architect
https://www.linkedin.com/in/kyuksel/
Ilya Sutskever of OpenAI gave John Carmack the following reading list of approximately 30 research papers and said, ‘If you really learn all of these, you’ll know 90% of what matters today in AI.’ I have added a few more LLM papers that potentially fill the remaining ~9%.
Here's Ilya's list: links here
https://lnkd.in/gVPEEejJ
1. The Annotated Transformer
2. The First Law of Complexodynamics
3. The Unreasonable Effectiveness of RNNs
4. Understanding LSTM Networks
5. Recurrent Neural Network Regularization
6. Keeping Neural Networks Simple by Minimizing the Description Length of the Weights
7. Pointer Networks
8. ImageNet Classification with Deep CNNs
9. Order Matters: Sequence to Sequence for Sets
10. GPipe: Efficient Training of Giant Neural Networks
11. Deep Residual Learning for Image Recognition
12. Multi-Scale Context Aggregation by Dilated Convolutions
13. Neural Quantum Chemistry
14. Attention Is All You Need
15. Neural Machine Translation by Jointly Learning to Align and Translate
16. Identity Mappings in Deep Residual Networks
17. A Simple NN Module for Relational Reasoning
18. Variational Lossy Autoencoder
19. Relational RNNs
20. Quantifying the Rise and Fall of Complexity in Closed Systems
21. Neural Turing Machines
22. Deep Speech 2: End-to-End Speech Recognition in English and Mandarin
23. Scaling Laws for Neural LMs (arxiv.org)
24. A Tutorial Introduction to the Minimum Description Length Principle (arxiv.org)
25. Machine Super Intelligence Dissertation (vetta.org)
26. PAGE 434 onwards: Kolmogorov Complexity (lirmm.fr)
27. CS231n Convolutional Neural Networks for Visual Recognition (cs231n.github.io)
28. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
29. BitNet: Scaling 1-bit Transformers for Large Language Models
30. KAN: Kolmogorov-Arnold Networks
Here are my recommended key LLM papers I would add on (the GPT, Llama, and Gemini papers):
https://www.linkedin.com/feed/update/urn:li:activity:7197459965622509568
Jp Morgan Chase Bank | Managing Director - Machine Learning
https://www.linkedin.com/in/prashantkdhingra/
Ilya Sutskever gave John Carmack this reading list of approximately 30 research papers and said, ‘If you really learn all of these, you’ll know 90% of what matters today.’
Here's Ilya's list: https://lnkd.in/gHfsWd_u
Here are the key LLM papers I would add on (the GPT, Llama, and Gemini papers):
GPT-1: https://lnkd.in/gJ5Pe3HG
GPT-2: https://lnkd.in/gatQi8Ud
GPT-3: https://lnkd.in/g43GzYfZ
GPT-4: https://lnkd.in/ga_xEpEj
Llama-2: https://lnkd.in/gutaGW8h
Tools: https://lnkd.in/gqJ3aXpS
Gemini-Pro-1.5: https://lnkd.in/gbDcYp89
And a final recommendation - Ng's agentic patterns series:
https://lnkd.in/gphZ6Y5s
https://www.linkedin.com/feed/update/urn:li:activity:7197763819475992578
Fidelity Investments | AI Asset Management
https://www.linkedin.com/in/igor-halperin-092175a/
Using open source software does not entitle you to a vote on the direction of the project. The gift you've received is the software itself and the freedom of use granted by the license. That's it, and this ought to be straightforward, but I repeatedly see that it is not (no matter how often it is repeated). And I think the problem stems from the word "community", which implies a democratic decision-making process that never actually existed in the open source world.
First of all, community implies that we're all participating on some degree of equal footing in the work required to further the welfare of the group. But that's not how the majority of open source projects are run. They're usually run by a small group of core contributors who take on the responsibility to advance the project, review patches, and guard the integrity of the vision. The division of labor isn't even close to being egalitarian. It's almost always distinctly elitist.
That's good! Yes, elitism is good, when it comes to open source. You absolutely want projects to be driven by the people who show up to do the work, demonstrate their superior dedication and competence, and are thus responsible for keeping the gift factory churning out new updates, features, and releases. Productive effort is the correct moral basis of power in these projects.
But this elitism is also the root of entitlement tension. What makes you think you're better than Me/Us/The Community in setting the direction for this project?? Wouldn't it be more fair if we ran this on democratic consensus?? And it's hard to answer these questions in a polite way that doesn't aggravate the tension or offend liberal sensibilities (in the broad historic sense of that word -- not present political alignments).
So we usually skirt around the truth. That not all participants in an open source project contribute equally in neither volume nor
https://www.linkedin.com/feed/update/urn:li:activity:7197606485353246720
Jp Morgan Chase Bank | Managing Director - Machine Learning
https://www.linkedin.com/in/prashantkdhingra/
#VectorEmbeddings are the backbone of AI/ML applications.
Machines don't understand human language, and that is where embeddings come in.
LLMs store the meaning and context of the data they are fed in a specialized format known as embeddings. Imagine capturing the essence of a word, image, or video in a single vector of numbers. That's the power of vector embeddings, one of the most fascinating and influential concepts in machine learning today.
For example, images of animals like cats and dogs are unstructured data and cannot be directly stored in a database. Hence, they are first converted into a machine-readable format (that's what we call embeddings) and then stored in a vector database.
By translating unstructured and high-dimensional data into a lower-dimensional space, embeddings make it possible to perform complex computations more efficiently.
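A toy illustration of the geometry this enables: the 4-dimensional vectors below are invented for the example (real embedding models produce hundreds of dimensions), but semantic nearness under cosine similarity works the same way:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings: "cat" and "dog" should land near each other,
# "car" should land far away.
cat = np.array([0.9, 0.1, 0.8, 0.2])
dog = np.array([0.8, 0.2, 0.9, 0.1])
car = np.array([0.1, 0.9, 0.1, 0.8])

print(cosine_similarity(cat, dog))  # high: semantically close
print(cosine_similarity(cat, car))  # low: semantically distant
```

Nearest-neighbor search over such vectors is exactly what a vector database does at scale.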
Types of Embeddings:
While most of us have commonly used text embeddings, embeddings can also be used for various other types of data, such as images, graphs, and more.
⮕ Word Embeddings: embeddings of individual words. Models: Word2Vec, GloVe, and FastText.
⮕ Sentence Embeddings: embeddings of entire sentences as vectors that capture the overall meaning and context of the sentences. Models: Universal Sentence Encoder (USE) and SkipThought.
⮕ Document Embeddings: embeddings of entire documents, capturing the semantic information and context of the whole document. Models: Doc2Vec and Paragraph Vectors.
⮕ Image Embeddings: capture different visual features. Models: CNN-based encoders such as ResNet and VGG.
⮕ User/Product Embeddings: represent users/products in a system as vectors, capturing user/product preferences, behaviors, attributes, and characteristics. These are primarily used in recommendation systems.
Below are some common embedding models we can use.
⮕ Cohere’s Embedding: Powerful for processing short texts with under 512 tokens.
⮕ Mistr
https://www.linkedin.com/feed/update/urn:li:activity:7197633346024144896
Barclays Investment Bank | Vice President
https://www.linkedin.com/in/hariomtatsat/
Here are the key takeaways from the paper "Machine Learning Applied to Active Fixed-Income Portfolio Management: A Lasso Logit Approach".
Key Takeaways and Findings
Methodology
Comparison of Strategies: The study compares a machine learning algorithm, Lasso logit regression, with a passive buy-and-hold strategy for active duration management of high-grade US treasury bond portfolios.
Two-Step Procedure: A two-step procedure is introduced to improve model robustness, incorporating ensemble averaging to mitigate overfitting.
Threshold Selection: A new method for selecting thresholds based on conditional probability distributions is proposed to convert model probabilities into actionable signals.
Variable Selection: A broad set of financial and economic variables is used as inputs, focusing on those related to financial flows and economic fundamentals.
Findings
Variable Relevance and Stability: Financial flow and economic fundamental variables were most relevant, but their significance was unstable over time.
Model Performance: Backtesting on a US dollar-denominated sovereign bond portfolio showed the Lasso logit model's small but statistically significant outperformance over the passive benchmark, after controlling for overfitting.
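The paper's core modeling step, an L1-penalized ("Lasso") logistic regression that performs variable selection, can be sketched on synthetic data. This is an illustrative toy, not the paper's actual data or procedure: the variable counts, coefficients, and the "duration up" label below are all invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 500, 20
X = rng.normal(size=(n, p))                    # 20 candidate financial/economic variables
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]                    # only the first 3 truly matter
y = (X @ beta + 0.5 * rng.normal(size=n) > 0).astype(int)  # 1 = "go long duration"

# The L1 (Lasso) penalty shrinks irrelevant coefficients to exactly zero,
# performing variable selection inside the logit model.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
selected = np.flatnonzero(model.coef_[0])
print(sorted(selected.tolist()))
```

The model's fitted probabilities would then be converted into trading signals via a threshold, which is the step the paper's conditional-probability threshold method refines.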
Check out our course Machine Learning for Finance: https://lnkd.in/gtJDWcus
Separate modules for each AI and machine learning type, with exhaustive coverage of the concepts.
15+ Real-World Practical Applications
Financial Applications Coverage
- Algo Trading
- Portfolio Management
- Fraud detection
- Lending and Loan Default Prediction
- Sentiment Analysis
- Derivatives Pricing and Hedging
- Asset Price Prediction
- and many more
Course Description
Supervised Learning
Regression and Classification models
1. Linear and Logistic Regression
2. Random Forest and GBM
3. Deep Neural Network (including RNN and LSTM)
Includes 6+ case studies
Unsuper
https://www.linkedin.com/feed/update/urn:li:activity:7197884673178378242
Adelomyia Technologies GmbH | Research Analyst
https://www.linkedin.com/in/sarem-seitz-647732134/
Normal people:
Look, I'm at Hogwarts!
People on LinkedIn:
Hogwarts is like entrepreneurial success: you have to run full-throttle, with 100% conviction, through a brick wall to get there.
What my visit to the Harry Potter Studios taught me about business:
Nothing.
But my twelve-year-old self is still cheering out loud with excitement.
#copywriting #contentmarketing #marketing
https://www.linkedin.com/feed/update/urn:li:activity:7197668667176669184
Fidelity Investments | AI Asset Management
https://www.linkedin.com/in/igor-halperin-092175a/
A new paper from Spain applies KANs to time series analysis:
“In this paper, we have performed an analysis of KANs and MLPs for satellite traffic forecasting. The results highlighted several benefits of KANs, including superior forecasting performance and greater parameter efficiency. In our analysis, we showed that KANs consistently outperformed MLPs in terms of lower error metrics and were able to achieve better results with lower computational resources. Additionally, we explored specific KAN parameters impact on performance. This study showcases the importance of optimizing node counts and grid sizes to enhance model performance. Given their effectiveness and efficiency, KANs appear to be a reasonable alternative to traditional MLPs in traffic management.”
#kan #timeseries #forecasting
https://www.linkedin.com/feed/update/urn:li:activity:7197700871499980800
University of Zurich - Department of Banking and Finance | Professor, Chair in Financial Engineering
https://www.linkedin.com/in/markus-leippold-578bb95/
We cannot achieve our #netzero and #sustainabilitygoals unless we bring the whole world with us. The bulk of investment for net zero ($2 trillion per year) will need to happen in emerging and developing economies, and these economies alone require investments totalling $4 trillion a year to meet the sustainable development goals.
Closing this gap is a win-win for society and #investors and requires action at all levels, from removing artificial barriers in regulations of high income countries, deepening local capital markets, powering up DFIs to take more risk and empowering local public and private banks, to investing in better data and analytics and exploring new models of partnerships between the public and private sector.
I’ve had the enormous privilege over the past 18 months to work with the European Commission and fantastic experts from around the world to develop solutions to mobilise sustainable investment in EMDEs as a member of the High-Level Expert Group on #Sustainable #Investment.
We published our final reports a few weeks ago, and I urge policymakers, financial institutions, and third-sector organisations alike to read them: https://lnkd.in/eJ8yhn6V
What is unique about this analysis and set of recommendations is its breadth and depth; taking a holistic view of how we can unlock barriers along the investment chain, with a focus on #transitionfinance, #nature, #innovation, #sustainableinfrastructure, #sustainablevaluechains and #adaptation.
Fundamentally, we need to power up the global financial system to support the transition to #naturepositive, #resilient, #greeneconomies in all countries, and all actors have a role to play. We lay out a clear roadmap for how to do this.
I’m hugely grateful for the opportunity to work with and learn from fantastic colleagues: Ayaan Zeinab Adam Senida Mesi; Zalina Shamsudin Antoni BALLABRIGA; Alice Ruhweza;
https://www.linkedin.com/feed/update/urn:li:activity:7197863274242011136
Abu Dhabi Investment Authority (ADIA) | Quantitative Research & Development Lead
https://www.linkedin.com/in/gautier-marti-344b565a/
#Industrial #AAD
Adjoint Algorithmic Differentiation (#AAD) is a technique that lets you compute the #sensitivities of any function f of n variables (x1, x2, …, xn) at a #computation #cost of roughly 4 calls of f, independent of n, which always feels like #magic to me!
https://lnkd.in/dBAVBaS6
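A minimal sketch of why one backward (adjoint) sweep yields all n sensitivities at once, at a small constant multiple of the cost of evaluating f. This is an illustrative toy, not the industrial AAD frameworks the links describe:

```python
class Var:
    """Minimal reverse-mode autodiff node: records the local partial
    derivatives of each operation so a single backward sweep can
    accumulate df/dx_i for ALL inputs, independent of their number."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def backward(out):
    """One sweep from output to inputs, accumulating adjoints."""
    out.grad = 1.0
    stack = [out]
    while stack:
        node = stack.pop()
        for parent, local in node.parents:
            parent.grad += local * node.grad  # chain rule
            stack.append(parent)

# f(x1, x2, x3) = x1*x2 + x2*x3: one backward pass gives all three sensitivities
x1, x2, x3 = Var(2.0), Var(3.0), Var(4.0)
f = x1 * x2 + x2 * x3
backward(f)
print(x1.grad, x2.grad, x3.grad)  # 3.0, 6.0, 3.0
```

Backpropagation in neural networks is exactly this mechanism applied to the loss function, which is why the post calls AAD a key pillar of neural network development.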
Applying AAD to #local #volatility vega KT is a remarkable demonstration of this theory, bringing both speed and quality improvements at an industrial level.
https://lnkd.in/dKX9mnrs
https://lnkd.in/djika-pY
This technique is one of the key pillars for Neural Network development, particularly in the area of backpropagation.
AAD can also #improve pricing, not just sensitivity calculations, by interpreting prices as sensitivities using perturbation techniques.
https://lnkd.in/drkb_n7X
https://lnkd.in/eDwNt3B8
https://lnkd.in/dAg_tNAN
https://lnkd.in/dGfrXzb3
In this final work, we adopt a #partial approach, drawing on Pontryagin's key idea that calculating sensitivities essentially amounts to finding the Jacobian and doing matrix multiplications. By organizing these calculations, we achieve the necessary optimization in the very challenging LSV setup.
In this paper, we conduct a #partial #backward #calculation of AAD, avoiding the challenging part of calibration in the LSV model.
I am thrilled to share with my network this #collaborative work from my previous job with my esteemed colleagues Abdessamad Sahnoun, leopold FONGANG, Marouen Messaoud, Mahi Rida, and William LEDUC, which has finally been published.
https://lnkd.in/dt7kXH2N
https://lnkd.in/dhnt2b-7
This opens up new #industrial methods for implementing AAD when the calibration is challenging. The same technique can be applied to nonlinear dividends and local correlation, to cite some important examples, not only to compute price impacts bu
https://www.linkedin.com/feed/update/urn:li:activity:7197688657875533825
Amundi | Head of Quantitative Research
https://www.linkedin.com/in/thierry-roncalli-78a98b12b/
Exceptional line-up of speakers for the next session of the Quantitative Sustainable Economics and Finance seminar of Ecole Polytechnique and ENSAE!
On May 23rd, from 11h15 to 13h00 (note the unusual timing), in room 3105 at CREST - Center for Research in Economics and Statistics
- Chiara Colesanti Senni (University of Zurich and LSE) will talk about Nature risk pricing
- Emanuele Campiglio (University of Bologna) will present his paper "Warning words in a warming world"
This in-person only seminar is open to all, takes place once a month on Thursdays and features research presentations addressing sustainable/green economics and finance issues through quantitative approaches.
See the dedicated web page sites.google.com/view/qsef for the full schedule and subscribe to our mailing list by sending an email
#greenfinance
https://www.linkedin.com/feed/update/urn:li:activity:7197928218610302976
Abu Dhabi Investment Authority (ADIA) | Global Head - Quantitative Research & Development
https://www.linkedin.com/in/lopezdeprado/
If you ask me who inspired the idea of founding elementoalpha, here he is. I read his biography and told myself: let's do it when the moment comes. And the moment came in January 2019, when I was let go. And here we are in 2024. I invite you to watch this video. And believe in yourselves. Faith, faith, faith, and work. You have to believe that we are good, and that our idea, born from the heart and the gut, will succeed.
https://www.linkedin.com/feed/update/urn:li:activity:7197724585000079360
Artificial Intelligence Finance Institute - AIFI | Founder at Artificial Intelligence Finance Institute
https://www.linkedin.com/in/dr-miquel-noguer-i-alonso-7242345/
Looking forward to #SALTiconnectionsNY24 this coming week in NYC. Some of the world's most sophisticated investors will be leading an awesome discussion on the challenges and opportunities from the CIO suite. Steven Meier, CFA, FRM Geeta Kapadia, CFA Carine Smith Ihenacho Yana Watson Kakar
CAIA Association NYCERS Fordham University Norges Bank Investment Management Caisse de dépôt et placement du Québec (CDPQ) Holly Duncan-Quinn Joe Eletto John Darsie Anthony Scaramucci Steven Novakovic, CAIA, CFA Sarah Samuels, CFA, CAIA Mark Anson Troy Prince, CAIA
https://www.linkedin.com/feed/update/urn:li:activity:7197837992088985600
Jp Morgan Chase Bank | Managing Director - Machine Learning
https://www.linkedin.com/in/prashantkdhingra/
Google released their #multimodal #PaliGemma model, which can take images and text as input and produce text. Comparing this to #gpt4o from OpenAI, there are two options to handle audio inputs, as shown in the impressive demos: (1) transcribe speech to text with the Whisper API and feed the text to the LLM, or (2) integrate speech embeddings natively into the model. What OpenAI says suggests they took option 2. Let's wait for the technical paper to find out more. #generativeai #largelanguagemodels
PaliGemma blog: https://lnkd.in/dQN45uw5
https://www.linkedin.com/feed/update/urn:li:activity:7197633191455719425
Fidelity Investments | AI Asset Management
https://www.linkedin.com/in/igor-halperin-092175a/
Unlock the Power of Investment Research with DoTadda!
Are you tired of juggling multiple tools to manage your investment research? Look no further! As an advisor for DoTadda Inc, I'm excited to share how we are revolutionizing the way research investment specialists do their best work.
DoTadda offers an all-in-one platform designed to effortlessly store, present, search, and share all your investment research materials. From internal documents, presentations, and emails to research reports, web links, and even Tweets – DoTadda has you covered!
Key Features:
- Automated Data Capture: Say goodbye to the tedious process of manually collecting data from various sources.
- Seamless User Workflow: No more clunky, antiquated systems. Our platform is designed for a smooth and efficient user experience.
- Enhanced Insight: Connect the dots of hidden and underutilized internal research, allowing every team member to find and leverage valuable insights.
But that's not all! For those who analyze earnings calls, our amazing product Knowledge is a game-changer. Instead of repeatedly listening to calls or sifting through transcripts, use Knowledge to summarize them and ask targeted questions. It's like having a personal research assistant at your fingertips!
Ready to see DoTadda in action? Check out this link based on my work with the R programming language: https://lnkd.in/e5mp7WvE
To make your research workflow smarter, faster, and more insightful, reach out to Andrew Meister or Michael Hochstat
#InvestmentResearch #FinTech #AI #Automation #ResearchTools #DoTadda #Innovation
https://www.linkedin.com/feed/update/urn:li:activity:7197619277502328833
Fidelity Investments | AI Asset Management
https://www.linkedin.com/in/igor-halperin-092175a/
Resistors dissipate energy into entropy; memristors dissipate energy “out of” entropy. On very large scales, the universe appears to be thermodynamically irreversible: gravity never repels massive objects like stars. On the smallest scales, quantum mechanics, built on unitary operators, is absolutely time-reversible. There is something missing from our models of the middle, the mesoscale, that reconciles global irreversibility with local reversibility.
> From the logical as well as axiomatic points of view, it is necessary for the sake of completeness to postulate the existence of a fourth basic two-terminal circuit element which is characterized by a φ–q relationship. This element will henceforth be called the memristor because […] it behaves somewhat like a nonlinear resistor with memory.
https://www.linkedin.com/feed/update/urn:li:activity:7197718319532040192
Jp Morgan Chase Bank | Managing Director - Machine Learning
https://www.linkedin.com/in/prashantkdhingra/
LLM Evaluation
Throughout my career I have developed and used side-by-side evaluation tools for various types of evaluations, and I find them very valuable in the development of search, recommender systems, and many other NLP and AI applications.
Here is the LLM Comparator tool released by Google Research, which is based on side-by-side (SxS) evaluations:
https://www.linkedin.com/feed/update/urn:li:activity:7197735798387978240
Jp Morgan Chase Bank | Managing Director - Machine Learning
https://www.linkedin.com/in/prashantkdhingra/
A comprehensive study on fine-tuning domain-specific LLMs, focusing on the financial domain:
"Fine-tuning and Utilization Methods of Domain-specific LLMs"
https://lnkd.in/gCFR6M7E
By Dr. Cheonsu Jeong, Principal Consultant & the Technical Leader for AI Automation at SAMSUNG SDS
#generativeai #finetuning #financialservices #largelanguagemodel #domainadaption
https://media.licdn.com/dms/image/sync/D4E27AQGpLA6EhAwwIA/articleshare-shrink_800/0/1716071533757?e=1716699600&v=beta&t=5Y6W2CrawyF6810C3OlIVkOLeB9M1pE6CHA3Oa0Vml4 https://www.linkedin.com/feed/update/urn:li:activity:7197768423441866753Cambridge Judge Business School | Postdoctoral Research Associate
https://www.linkedin.com/in/gatsby-zhang/Social Media Noise: Return Reversal, Informativeness, and Price Efficiency Around Earnings Announcements
https://lnkd.in/gf46ugDM
https://www.linkedin.com/feed/update/urn:li:activity:7197718399542595585Jp Morgan Chase Bank | Managing Director - Machine Learning
https://www.linkedin.com/in/prashantkdhingra/The most successful CDAOs take an offensive approach to data and analytics by framing strategy around business outcomes.
Learn more about which four proven priorities new-to-role CDAOs should focus on to really hit the ground running.
Gartner for IT | #GartnerDA #Data #Analytics
https://media.licdn.com/dms/image/sync/D5610AQEMWlFwGTsUgQ/image-shrink_800/0/1715972401290?e=1716699600&v=beta&t=oYgImK39GnUUdQuzpUT7bw3_eohae6HB00cQLfxDB5o https://www.linkedin.com/feed/update/urn:li:activity:7197617675324248064NVIDIA | Financial Services and Technology Developer Relationship Lead EMEA
https://www.linkedin.com/in/jochenpapenbrock/Don't miss the NVIDIA workshop at Zurich's IEEE Swiss Conference on Data Science. On May 30, our experts Dora Csillag and Maycon da Silva Carvalho will share the latest on generative AI and LLM customization. Register now.
https://media.licdn.com/dms/image/sync/D4E10AQF1IVhuQ2wL7g/image-shrink_1280/0/1716045778902?e=1716696000&v=beta&t=iOEQCWo2Hzb_pp-qcjGfoJM_276_dOBeO8aRehxOQmw https://www.linkedin.com/feed/update/urn:li:activity:7197358319303360512Fidelity Investments | AI Asset Management
https://www.linkedin.com/in/igor-halperin-092175a/I keep noticing that the idea of the ‘action principle’ of mechanics is often presented in a way that's overly mysterious concerning the origin of the Lagrangian. It's easy enough to introduce the formalism of variational calculus in one independent variable to undergraduate physics students, and this is where they end up memorizing the incantation for the particle Lagrangian. By the end of their undergraduate physics education, they've probably learned to operate on fields defined over several independent variables and seen how Maxwell's equations (in a vacuum) can be derived using the electromagnetic field Lagrangian. If they were specialized in mathematical physics, they would have also proceeded to quantize the theory over Minkowski space-time and derived the photon. Either way, the electromagnetic Lagrangian was probably at best hand waved in terms of a sea of harmonic oscillators covering space-time, but this can be better motivated by an analysis of an ideal LC circuit which passes energy back and forth between electric and magnetic modes of storage.
In deriving the Euler-Lagrange (EL) equation we integrate by parts, which is very significant indeed, but I don't remember ever seeing anyone besides maybe Feynman point out how critical this step is. It's really the sleight of hand performed in plain sight while the audience's attention is misdirected toward obtaining the EL equation. Integration by parts is made to seem trivial because the applied variation vanishes at the boundary, the compact domain where the information is snapped into agreement with the integral surface picked out by the boundary conditions! So we got rid of the derivative of the variation, and the integrand is now a multiple of the variation. If this is zero for every variation, the EL equation must be satisfied.
Just back up a step: we had an integral which looks like an inner product operator on some L²
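For concreteness, the integration-by-parts step being discussed reads (standard one-dimensional derivation, supplied here for reference):

```latex
\delta S
= \int_{t_0}^{t_1} \left( \frac{\partial L}{\partial q}\,\delta q
  + \frac{\partial L}{\partial \dot q}\,\delta \dot q \right) dt
= \left[ \frac{\partial L}{\partial \dot q}\,\delta q \right]_{t_0}^{t_1}
  + \int_{t_0}^{t_1} \left( \frac{\partial L}{\partial q}
  - \frac{d}{dt}\frac{\partial L}{\partial \dot q} \right) \delta q \, dt
```

The boundary term vanishes because $\delta q(t_0) = \delta q(t_1) = 0$, and demanding $\delta S = 0$ for every remaining $\delta q$ yields the EL equation $\partial L/\partial q - \tfrac{d}{dt}\,\partial L/\partial \dot q = 0$.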
https://media.licdn.com/dms/image/D5622AQHNugeeYOK64g/feedshare-shrink_800/0/1715966258097?e=1718841600&v=beta&t=W1RLObBk4PKRXOELjEBbvzZYvJXRyx2oxk6_iHuBkhM https://www.linkedin.com/feed/update/urn:li:activity:7197270271592927233emlyon business school | Associate Professor of Finance and Data Science
https://www.linkedin.com/in/guillaumecoqueret/ New Research! "Missing Values Handling for Machine Learning Portfolios"...overlooked but key ideas... keep reading!
The paper investigates the handling of missing values in machine learning-based portfolio construction using 159 cross-sectional return predictors.
Summary of the main findings:
The study finds that simple cross-sectional mean imputation performs comparably to more sophisticated methods like expectation-maximization (EM) due to the nature of the missing data, which often occurs in large, time-organized blocks with low cross-sectional correlation.
Both mean and EM imputation methods yield similar results in terms of expected returns, with sophisticated imputations sometimes introducing noise that can lead to underperformance if not carefully managed.
Simple mean imputation combined with machine learning models such as neural networks and principal component regressions can deliver high returns, up to 66% per year for equal-weighted portfolios and 39% for value-weighted portfolios.
For practical applications, the paper recommends using simple mean imputation due to its transparency, tractability, and competitive performance compared to more complex methods. These insights are crucial for practitioners aiming to enhance the reliability and performance of machine learning-based trading strategies while efficiently handling missing data.
Overall, the research provides valuable guidance for hedge funds, mutual funds, and proprietary trading firms on the effective use of imputation techniques in portfolio management, highlighting the benefits of simplicity and robustness in predictive modeling.
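The recommended baseline — cross-sectional mean imputation — can be sketched in a few lines of pandas. This is a minimal illustration on a toy panel with made-up predictor names, not code from the paper:

```python
import numpy as np
import pandas as pd

# toy panel: rows = (date, stock), columns = return predictors
rng = np.random.default_rng(0)
df = pd.DataFrame(
    rng.normal(size=(6, 3)),
    index=pd.MultiIndex.from_product(
        [["2024-01", "2024-02"], ["AAA", "BBB", "CCC"]],
        names=["date", "stock"],
    ),
    columns=["mom", "value", "size"],
)
df.iloc[1, 0] = np.nan  # introduce a missing predictor value

# cross-sectional mean imputation: fill each predictor's gaps with its
# mean across stocks within the same date
imputed = df.groupby(level="date").transform(lambda x: x.fillna(x.mean()))
```

The `groupby(level="date")` ensures the mean is taken within a single cross-section, so no forward-looking information leaks across dates.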
-----------------------
→ Join 2,500+ Finance & AI enthusiasts who receive top new research ideas weekly in their inbox: https://lnkd.in/dkxSDJpq
-----------------------
Link to the paper: https://lnkd.in/dXmUnH8U https://media.licdn.com/dms/image/D4D22AQEkosjamUeQZw/feedshare-shrink_2048_1536/0/1715956211908?e=1718841600&v=beta&t=Xwq01dDjoFKbcHiuprGkK1aErcWHtL2HmGE7vA-FU9s https://www.linkedin.com/feed/update/urn:li:activity:7197362607257714688Delphia | Head of Quantitative Research
https://www.linkedin.com/in/vivek-viswanathan-phd/You don't need to come from a math or computer science background to become a successful quant. Vivek Viswanathan, Quant PM at BTG Pactual, shares how his economics and finance background gives him an edge in the industry. Watch the full interview at https://to.dbn.to/3QML2gW
https://media.licdn.com/dms/image/D5610AQGW_rzgQX7jQQ/ads-video-thumbnail_720_1280/0/1715936522577?e=1716613200&v=beta&t=GmGYgxVbh2C0HZ3UJix407rOah5FCR7GN0T2Mx6HM-w https://www.linkedin.com/feed/update/urn:li:activity:7197619757745881088Cuemacro | Founder
https://www.linkedin.com/in/saeedamen/Excited to announce that the next Thalesians Ltd in-person London talk will be on Wed 29 May at 6:30pm at G-Research's office. Victor Haghani, founder & CIO of Elm Wealth and a co-founding partner of LTCM, will be talking about the new book The Missing Billionaires: Lessons from Bernoulli to von Neumann to Taleb, which he co-authored and which was named to The Economist's Best Books of 2023 list. I'm very much looking forward to Victor's talk!
Thanks very much to G-Research for hosting the event! Register to attend the event for free at the link.
https://media.licdn.com/dms/image/sync/D4E27AQEpX2S3F7lARQ/articleshare-shrink_1280_800/0/1716045841062?e=1716667200&v=beta&t=Z5CkIEYKisQH1uST-32BFEFkJZIunW_UQX2OXe6Yhyg https://www.linkedin.com/feed/update/urn:li:activity:7197566840426885120RavenPack | Chief Data Scientist, Partner
https://www.linkedin.com/in/peteragerhafez/Very proud of everyone involved in this great journey, our 1st anniversary
https://www.linkedin.com/feed/update/urn:li:activity:7197280481879412736Truist Securities | Head of Data and Quantamental Research
https://www.linkedin.com/in/jkregenstein/NYC, next Tuesday Jonathan Regenstein and I are talking AI, LLMs and how Snowflake Cortex makes incorporating GenAI reasoning into your analytics incredibly simple. We're partnering with our colleagues Liam Hynes and Henry Chiang at S&P Global to showcase the amazing work they are doing in this space.
It's gonna be fun! (But I think everything with data and AI is fun.) https://media.licdn.com/dms/image/D4E22AQEEsyNAzbHbBw/feedshare-shrink_800/0/1715698823854?e=1718841600&v=beta&t=VcXndP62qL-DrWOKcKrDG4Dpu1Ld39GxgOwTqI-ETFA https://www.linkedin.com/feed/update/urn:li:activity:7197584187644592128NVIDIA | Financial Services and Technology Developer Relationship Lead EMEA
https://www.linkedin.com/in/jochenpapenbrock/Pandas code is now 50x faster on Google #colab with zero code changes!
Used by 10 million users monthly, #googlecolab now comes with native integration for RAPIDS cuDF, which powers GPU acceleration for pandas.
To boost your #pandas code, use this single command at the top of your #NVIDIA GPU-enabled Colab notebook:
%load_ext cudf.pandas
This feature can even accelerate the pandas code/queries generated by Colab AI, #ChatGPT or any #LLM-based chatbot, without needing to learn a new paradigm.
Learn more about this feature with the resources in the comments below:
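The "zero code changes" claim means ordinary pandas code like the sketch below runs unchanged — in a GPU-enabled Colab notebook you would first run `%load_ext cudf.pandas` (or `import cudf.pandas; cudf.pandas.install()` in a plain script), and the same calls would then execute on the GPU. The workload here is a made-up illustration, not an official benchmark:

```python
import numpy as np
import pandas as pd

# a typical groupby workload that cudf.pandas accelerates transparently
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "ticker": rng.choice(["AAPL", "MSFT", "NVDA"], size=100_000),
    "price": rng.normal(100, 10, size=100_000),
})
summary = df.groupby("ticker")["price"].agg(["mean", "std"])
```

Without a GPU (or outside Colab), the code silently runs on plain pandas, which is what makes the integration drop-in.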
https://media.licdn.com/dms/image/D4E05AQGsh4uKG7_2QQ/videocover-low/0/1715836701725?e=1716667200&v=beta&t=CJ1RQvS_XNQ5AbH6otFGOFjjBobuA4hNFrIh3OgVasM https://www.linkedin.com/feed/update/urn:li:activity:7197226082570223616Truist Securities | Head of Data and Quantamental Research
https://www.linkedin.com/in/jkregenstein/I fine-tuned my first Vision-Language Model
(let me show you how)
PaliGemma is an open-source Large Multimodal Model (LMM) released by Google last week. You can use it for Visual Question Answering (VQA), object detection, or image segmentation.
In this guide, we will walk through fine-tuning PaliGemma to detect bone fractures in X-ray images. 🩻
PaliGemma 3B is available in three different versions, differing in input image resolution (224, 448, and 896) and input text sequence length (128, 512, and 512 tokens, respectively).
To limit GPU memory consumption and enable fine-tuning in Google Colab, we will use the smallest version, paligemma-3b-pt-224, in this tutorial. You will need a GPU runtime with at least 12GB of available RAM, and Google Colab with an NVIDIA T4 is sufficient.
⮑ blog post: https://lnkd.in/d9n9cKeS
Links to the dataset and code are in the comments below.
#opensource #llm #objectdetection #generativeai https://media.licdn.com/dms/image/D4D22AQGSXdGRn1Sbjg/feedshare-shrink_800/0/1715946864318?e=1718841600&v=beta&t=R3LYAwgrK_1nVf4rFm2ccHN31FiY9u9ZwJFAL9n3F1U https://www.linkedin.com/feed/update/urn:li:activity:7197481437552345088Adelomyia Technologies GmbH | Research Analyst
https://www.linkedin.com/in/sarem-seitz-647732134/How important are certifications as a Data Scientist?
There are many certifications like Coursera, DataCamp, and Udemy but also from organizations like Microsoft and AWS.
In my opinion, they are usually not important.
Sure, especially as a consultant in a company it's nice to have a few certifications in your domain since customers will sometimes check if you have any.
But to get a job, bragging about 20 certifications is not impressive.
Focus on building things you can showcase instead of jumping to the next certification.
#datascience #machinelearning
https://media.licdn.com/dms/image/D4D22AQF9AkzUTWV8sw/feedshare-shrink_2048_1536/0/1715611225288?e=1718841600&v=beta&t=Nvjm20Wxq-1ER76ewPKzn0xpmEEjT6dvLaPABG6jxSE https://www.linkedin.com/feed/update/urn:li:activity:7197236381713072130Fidelity Investments | AI Asset Management
https://www.linkedin.com/in/igor-halperin-092175a/I'm the first author of Kolmogorov-Arnold Network (KAN). I would like to thank Prof. George Karniadakis for his constructive suggestion in our private communication. I have learned a lot. I accept and appreciate any constructive criticism he made. I have responded to him privately but also want to summarize the facts for the public in case you wonder:
(1) The grid update and grid extension tricks are used in the numerical experiments but not included in the tutorial (the tutorial was kept as simple as possible). After these tricks, the error for the 2D Poisson equation goes from 24% to 0.05%. This result was obtained on my laptop CPU in 20 minutes, and can potentially be refined further by including more training data, further grid extension, or longer training. I want to thank Prof. Karniadakis for reporting the 24% error, since that is the number a typical user would report, given that we did not make it clear enough how to use grid extension in the PDE context. Our sincere apologies for not making the necessary tricks clearer. Also, the "sine snapping" is just a fun showcase for physicists. We didn't implement this in the paper; I included this symbolic snapping on a whim in the tutorial, thinking that some people would find it fun, but didn't realize others might find it confusing. My apologies.
Updated code: https://lnkd.in/ev8zYVKa
(2) Regarding the Navier-Stokes equation, I would like to extend my sincere thanks to GitHub user chaous, although they got the equation wrong at first. I should have checked the equations more carefully when I approved the pull request. They have corrected the equations. Again, I want to thank Prof. Karniadakis for catching this. But I want to clarify that we (the KAN authors) never report any results on the Navier-Stokes equation in the paper or in the tutorial. The NSE notebook is a purely spontaneous effort from the community, which I a
https://media.licdn.com/dms/image/sync/D4E27AQGZi_7msypkmw/articleshare-shrink_800/0/1715920867179?e=1716613200&v=beta&t=5ArbwzYrQCvqSVVm2YHC1Fi2l2DJ7HXlBXW9bpOZ3JA https://www.linkedin.com/feed/update/urn:li:activity:7197602505109684225Cuemacro | Founder
https://www.linkedin.com/in/saeedamen/I always keep an eye out for the women I see in meetings - there are still not enough senior women in finance - and I give and receive a look that I think says “I see you”. It was really special to “see” women leaders from our most valued clients at the recent BlackRock Female Leadership Forum. Congratulations Jackie Torres and team for a superb coming together of the minds, I am looking forward to next year’s FLF already!
At the event Larry Fink talked about significant opportunities around the AI #datacentre and #infrastructure build-out, and the race is on. A review of available research points to data centres’ share of total US power demand doubling or more by the end of the decade (chart). Can the power grid grow this quickly? The constraint is real. I find it mind-boggling that AI, thought to be the mega force that alleviates constraints and boosts productivity over time, could turn out to be first and foremost inflationary through energy, copper and materials demand. We are doing a lot of work on sequencing the macro impact of AI and sizing the quantum of #capex spend - stay tuned.
https://media.licdn.com/dms/image/D4E22AQGHjJiKOWrqlQ/feedshare-shrink_800/0/1716025814728?e=1718841600&v=beta&t=FfcnbS4ITL8VGPEYsLRj9vjvUAIFR_-rJJ_UU-I3IZA https://www.linkedin.com/feed/update/urn:li:activity:7197545152461045760Delphia | Head of Quantitative Research
https://www.linkedin.com/in/vivek-viswanathan-phd/For the first time since the beginning of the full scale war, Ukrainian army has enough artillery shells.
According to the president of Ukraine: "For the first time in the years of the war, none of the brigades complain that there is no artillery projectile."
Not coincidentally, in the past weeks, russia has suffered some of the heaviest man and machine losses since the beginning of the war.
https://media.licdn.com/dms/image/D4E22AQELizdJSmeevw/feedshare-shrink_800/0/1716025258057?e=1718841600&v=beta&t=fJyoLr2OLdEPc1uSIDrm1SfuObRiIdi7P1_-0_Co27A