35 skills found · Page 1 of 2
microsoft / MatterSim: A deep learning atomistic model across elements, temperatures and pressures.
materialyzeai / Maml: Python for Materials Machine Learning, Materials Descriptors, Machine Learning Force Fields, Deep Learning, etc.
bytedance / Bamboo: BAMBOO (Bytedance AI Molecular BOOster) is an AI-driven machine learning force field designed for precise and efficient electrolyte simulations.
thorben-frank / Mlff: Build neural networks for machine learning force fields with JAX
zmyybc / AlphaNet: A Local Frame-based Atomistic Potential
ACEsuit / Mace Tutorials: Collection of tutorials to use the MACE machine learning force field.
SAITPublic / MLFF Framework: This repository provides the SAIT Machine Learning Force Field (MLFF) Framework.
DuktigYajie / SOG Net: Sum-of-Gaussians Neural Network (SOG-Net) is a lightweight and versatile framework for integrating long-range interactions into machine learning force fields.
gabkrenzer / VaspMD: A collection of tips, scripts, tools and files to improve your workflow, or simply help you start, with ab initio Molecular Dynamics (AIMD) and Machine Learning Force Field Molecular Dynamics (MLFF-MD) calculations using VASP, along with post-processing tools. Scripts and files are designed to be easily modified to suit your own objectives.
rk-lindsey / Chimes Calculator: Tools to interface ChIMES with various external codes.
ClickFF / MAPLE: Machine-learning force-field (MLFF)-native molecular modeling platform
WMD-group / MLFF: A collection of files related to machine learning force fields
popelier-group / Ichor: Computational Chemistry Data Management Library for Machine Learning Force Field Development
Parity-LRX / FSCTEP: A Multi-Operator Equivariant Framework for High-Performance Machine Learning Force Fields, supporting External Fields embedding and Physical Tensors prediction.
ASK-Berkeley / StABlE Training: [TMLR 2025] Stability-Aware Training of Machine Learning Force Fields with Differentiable Boltzmann Estimators
SOYJUN / Implement ODR Protocol:

Overview

For this assignment you will be developing and implementing:
- An On-Demand shortest-hop Routing (ODR) protocol for networks of fixed but arbitrary and unknown connectivity, using PF_PACKET sockets. The implementation is based on (a simplified version of) the AODV algorithm.
- Time client and server applications that send requests and replies to each other across the network using ODR.
- An API, implemented using Unix domain datagram sockets, that enables applications to communicate with the ODR mechanism running locally at their nodes.

I shall be discussing the assignment in class on Wednesday, October 29, and Monday, November 3. The following should prove useful reference material for the assignment:
- Sections 15.1, 15.2, 15.4 & 15.6 of Chapter 15, on Unix domain datagram sockets.
- PF_PACKET(7) from the Linux manual pages.
- Notes made by a past CSE 533 student. Also, http://www.pdbuchan.com/rawsock/rawsock.html contains useful code samples that use PF_PACKET sockets (as well as other code samples that use raw IP sockets, which you do not need for this assignment, though you will be using that type of socket for Assignment 4).
- Charles E. Perkins & Elizabeth M. Royer, "Ad-hoc On-Demand Distance Vector Routing", Proceedings of the 2nd IEEE Workshop on Mobile Computing Systems and Applications, New Orleans, Louisiana, February 1999, pp. 90-100.

The VMware environment

minix.cs.stonybrook.edu is a Linux box running VMware. A cluster of ten Linux virtual machines, called vm1 through vm10, on which you can gain access as root and run your code, has been created on minix. See VMware Environment Hosts for further details; VMware instructions takes you to a page that explains how to use the system. The ten virtual machines have been configured into a small virtual intranet of Ethernet LANs whose topology is (in principle) unknown to you.

There is a course account cse533 on node minix, with home directory /users/cse533. In there you will find a subdirectory Stevens/unpv13e, exactly as you are used to having on the cs system. You should develop your source code and makefiles for handing in accordingly; you will be handing in your source code on the minix node. Note that you do not need to link against the socket library (-lsocket) in Linux; the same is true for -lnsl and -lresolv. For example, the LIBS variable for Solaris, in /home/courses/cse533/Stevens/unpv13e_solaris2.10/Make.defines (on compserv1, say), is defined as
LIBS = ../libunp.a -lresolv -lsocket -lnsl -lpthread
but Make.defines on minix (/users/cse533/Stevens/unpv13e/Make.defines) contains only
LIBS = ../libunp.a -lpthread

The nodes vm1, ..., vm10 are all multihomed: each has two (or more) interfaces. The interface eth0 should be completely ignored and is not to be used for this assignment (because it shows all ten nodes as if belonging to the same single Ethernet 192.168.1.0/24, rather than to an intranet composed of several Ethernets). Note that vm1, ..., vm10 are virtual machines, not real ones. One implication of this is that you will not be able to find out their (virtual) IP addresses by using nslookup and such. To find out these IP addresses, you need to look at the file /etc/hosts on minix.
More to the point, invoking gethostbyname for a given vm will return only the (primary) IP address associated with the interface eth0 of that vm (which is the interface you will not be using); it will not return any other IP address for the node. Similarly, gethostbyaddr will return the vm node name only if you give it the (primary) IP address associated with the interface eth0 for the node; it will return nothing if you give it any other IP address for the node, even though the address is perfectly valid. Because of this, and because it will ease your task to be able to use gethostbyname and gethostbyaddr in a straightforward way, we shall adopt the (primary) IP addresses associated with interfaces eth0 as the 'canonical' IP addresses for the nodes (more on this below).

Time client and server

A time server runs on each of the ten vm machines. The client code should also be available on each vm so that it can be invoked at any of them. Normally, time clients/servers exchange request/reply messages using the TCP/UDP socket API that, effectively, enables them to receive service (indirectly, via the transport layer) from the local IP mechanism running at their nodes. You are to implement an API using Unix domain sockets to access the local ODR service directly (somewhat similar, in effect, to the way that raw sockets permit an application to access IP directly). Use Unix domain SOCK_DGRAM, rather than SOCK_STREAM, sockets (see Figures 15.5 & 15.6, pp. 418-419).

API

You need to implement a msg_send function that will be called by clients/servers to send requests/replies. Its parameters are:
- int: socket descriptor to write on
- char*: 'canonical' IP address of the destination node, in presentation format
- int: destination 'port' number
- char*: message to be sent
- int: flag; if set, force a route rediscovery to the destination node even if a non-'stale' route already exists (see below)

msg_send will format these parameters into a single char sequence which is written to the Unix domain socket that a client/server process creates. The sequence will be read by the local ODR from a Unix domain socket that the ODR process creates for itself. Recall that the 'canonical' IP address for a vm node is the (primary) IP address associated with the eth0 interface for the node; it is what will be returned to you by a call to gethostbyname.

Similarly, we need a msg_recv function which will do a (blocking) read on the application domain socket and return with:
- int: socket descriptor to read on
- char*: message received
- char*: 'canonical' IP address of the source node of the message, in presentation format
- int*: source 'port' number

This information is written as a single char sequence by the ODR process to the domain socket that it creates for itself. It is read by msg_recv from the domain socket the client/server process creates, decomposed into the three components above (message, source IP address, and source 'port'), and returned to the caller of msg_recv. Also see the section below entitled ODR and the API.

Client

When a client is invoked at a node, it creates a domain datagram socket. The client should bind its socket to a 'temporary' (i.e., not 'well-known') sun_path name obtained from a call to tmpnam() (cf. line 10, Figure 15.6, p. 419) so that multiple clients may run at the same node. Note that tmpnam() is actually highly deprecated; you should use the mkstemp() function instead (look up the online man pages on minix, 'man mkstemp', for details). A sketch of this setup appears below.
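To make the Client setup concrete, here is a minimal C sketch of creating the client's Unix domain datagram socket bound to a mkstemp()-generated temporary sun_path, together with one possible way a msg_send could frame its parameters for the local ODR process. The path template, the ODR sun_path name /tmp/odr_wellknown, and the "|"-separated text encoding are illustrative assumptions, not part of the assignment specification.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Create the client's Unix domain datagram socket, bound to a unique
 * temporary sun_path obtained via mkstemp() (tmpnam() is deprecated).
 * The caller must unlink() the returned path just before exiting. */
static int client_socket(char *bound_path, size_t len) {
    char tmpl[] = "/tmp/odrcliXXXXXX";   /* illustrative template */
    int tfd = mkstemp(tmpl);             /* creates and opens a unique file */
    if (tfd < 0) { perror("mkstemp"); exit(1); }
    close(tfd);
    unlink(tmpl);                        /* bind() requires the name be free */

    int sockfd = socket(AF_LOCAL, SOCK_DGRAM, 0);
    if (sockfd < 0) { perror("socket"); exit(1); }
    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_LOCAL;
    strncpy(addr.sun_path, tmpl, sizeof(addr.sun_path) - 1);
    if (bind(sockfd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind"); exit(1);
    }
    snprintf(bound_path, len, "%s", tmpl);
    return sockfd;
}

/* msg_send: pack the five parameters into one datagram for the local
 * ODR process. The "ip|port|flag|msg" text encoding and the ODR
 * sun_path name below are placeholders, not prescribed. */
static int msg_send(int sockfd, const char *dst_ip, int dst_port,
                    const char *msg, int flag) {
    struct sockaddr_un odr;
    memset(&odr, 0, sizeof(odr));
    odr.sun_family = AF_LOCAL;
    strncpy(odr.sun_path, "/tmp/odr_wellknown", sizeof(odr.sun_path) - 1);
    char buf[1024];
    int n = snprintf(buf, sizeof(buf), "%s|%d|%d|%s",
                     dst_ip, dst_port, flag, msg);
    return (int)sendto(sockfd, buf, n, 0,
                       (struct sockaddr *)&odr, sizeof(odr));
}
```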
As you run client code again and again during the development stage, the temporary files created by the calls to tmpnam/mkstemp start to proliferate, since these files are not automatically removed when the client code terminates. You need to explicitly remove the file created by the client invocation by issuing a call to unlink() or to remove() in your client code just before it exits. See the online man pages on minix ('man unlink', 'man remove') for details.

The client then enters an infinite loop repeating the steps below.
1. The client prompts the user to choose one of vm1, ..., vm10 as a server node.
2. The client msg_sends a 1- or 2-byte message to the server and prints on stdout the message
client at node vm i1 sending request to server at vm i2
(In general, throughout this assignment, "trace" messages such as the one above should give the vm names, not the IP addresses, of the nodes.)
3. The client then blocks in msg_recv awaiting the response. This attempt to read from the domain socket should be backed up by a timeout in case no response ever comes; I leave it up to you whether you 'wrap' the call to msg_recv in a timeout, or implement the timeout inside msg_recv itself. When the client receives a response it prints on stdout the message
client at node vm i1 : received from vm i2 <timestamp>
If, on the other hand, the client times out, it should print the message
client at node vm i1 : timeout on response from vm i2
The client then retransmits the message, setting the flag parameter in msg_send to force a route rediscovery, and prints an appropriate message on stdout. This is done only once, when a timeout for a given message to the server occurs for the first time.
The client repeats steps 1-3.

Server

The server creates a domain datagram socket. The server socket is assumed to have a (node-local) 'well-known' sun_path name which it binds to. This 'well-known' sun_path name is designated by a (network-wide) 'well-known' 'port' value; the time client uses this 'port' value to communicate with the server. The server enters an infinite sequence of calls to msg_recv followed by msg_send, awaiting client requests and responding to them. When it responds to a client request, it prints on stdout the message
server at node vm i1 responding to request from vm i2

ODR

The ODR process runs on each of the ten vm machines. It is invoked with a single command line argument which gives a "staleness" time parameter, in seconds. It uses get_hw_addrs (available to you on minix in ~cse533/Asgn3_code) to obtain the index and the associated (unicast) IP and Ethernet addresses for each of the node's interfaces, except for the eth0 and lo (loopback) interfaces, which should be ignored.

In the subdirectory ~cse533/Asgn3_code (/users/cse533/Asgn3_code) on minix I am providing you with two functions, get_hw_addrs and prhwaddrs. These are analogous to the get_ifi_info_plus and prifinfo_plus of Assignment 2. Like get_ifi_info_plus, get_hw_addrs uses ioctl. get_hw_addrs gets the (primary) IP address, alias IP addresses (if any), HW address, and interface name and index value for each of the node's interfaces (including the loopback interface lo); prhwaddrs prints that information out. You should modify and use these functions as needed. Note that if an interface has no HW address associated with it (this is, typically, the case for the loopback interface lo, for example), then ioctl returns to get_hw_addrs a HW address which is the equivalent of 00:00:00:00:00:00.
get_hw_addrs stores this in the appropriate field of its data structures as it would with any HW address returned by ioctl, but when prhwaddrs comes across such an address, it prints a blank line instead of its usual 'HWaddr = xx:xx:xx:xx:xx:xx'.

The ODR process creates one or more PF_PACKET sockets. You will need to try out PF_PACKET sockets for yourselves and familiarize yourselves with how they behave. If, when you read from the socket and provide a sockaddr_ll structure, the kernel returns to you the index of the interface on which the incoming frame was received, then one socket will be enough. Otherwise, somewhat in the manner of Assignment 2, you shall have to create a PF_PACKET socket for every interface of interest (which are all the interfaces of the node, excluding lo and eth0), and bind a socket to each interface. Furthermore, if the kernel also returns to you the source Ethernet address of the frame in the sockaddr_ll structure, then you can make do with SOCK_DGRAM type PF_PACKET sockets; otherwise you shall have to use SOCK_RAW type sockets (although I would prefer you to use SOCK_RAW type sockets anyway, even if it turns out you can make do with SOCK_DGRAM).

The socket(s) should have a protocol value (no larger than 0xffff so that it fits in two bytes; this value is given as a network-byte-order parameter in the call(s) to the socket function) that identifies your ODR protocol. The <linux/if_ether.h> include file (i.e., the file /usr/include/linux/if_ether.h) contains protocol values defined for the standard protocols typically found on an Ethernet LAN, as well as other values such as ETH_P_ALL. You should set protocol to a value of your choice which is not a <linux/if_ether.h> value, but which is, hopefully, unique to yourself. Remember that you will all be running your code using the same root account on the vm1, ..., vm10 nodes; if two of you happen to choose the same protocol value and happen to be running on the same vm node at the same time, your applications will receive each other's frames. For that reason, try to choose a protocol value that is likely to be unique to yourself (something based on your Stony Brook student ID number, for example). This value effectively becomes the protocol value for your implementation of ODR, as opposed to some other CSE 533 student's implementation. Because your value of protocol is to be carried in the frame type field of the Ethernet frame header, the value chosen should be not less than 1536 (0x600) so that it is not misinterpreted as the length of an Ethernet 802.3 frame.

Note from the man pages for packet(7) that frames are passed to and from the socket without any processing of the frame content by the device driver on the other side of the socket, except for calculating and tagging on the 4-byte CRC trailer for outgoing frames, and stripping that trailer before delivering incoming frames to the socket. Nevertheless, if you write a frame that is less than 60 bytes, the necessary padding is automatically added by the device driver so that the frame actually transmitted is the minimum Ethernet size of 64 bytes. When reading from the socket, however, any such padding that was introduced into a short frame at the sending node to bring it up to the minimum frame size is not stripped off; it is included in what you receive from the socket (thus, the minimum number of bytes you receive should never be less than 60).
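As a concrete illustration of the above, here is a hedged sketch of opening a SOCK_RAW PF_PACKET socket with a self-chosen protocol value and binding it to one interface. ODR_PROTO (0x8944) is an arbitrary illustrative value, to be replaced by something unique to you, and the per-interface bind follows the "one socket per interface of interest" option described above.

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <linux/if_packet.h>

#define ODR_PROTO 0x8944   /* illustrative only; derive your own unique value */

/* Open a PF_PACKET socket for one interface of interest (index taken
 * from get_hw_addrs). SOCK_RAW means we build and parse the 14-byte
 * Ethernet header ourselves. */
static int odr_packet_socket(int ifindex) {
    int fd = socket(PF_PACKET, SOCK_RAW, htons(ODR_PROTO));
    if (fd < 0) { perror("socket(PF_PACKET)"); return -1; }

    struct sockaddr_ll sll;
    memset(&sll, 0, sizeof(sll));
    sll.sll_family   = PF_PACKET;
    sll.sll_protocol = htons(ODR_PROTO);   /* only our ODR frames */
    sll.sll_ifindex  = ifindex;            /* bind to this interface */
    if (bind(fd, (struct sockaddr *)&sll, sizeof(sll)) < 0) {
        perror("bind(PF_PACKET)"); return -1;
    }
    return fd;
}
```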
Also, you will have to build the frame header for outgoing frames yourselves (assuming you use SOCK_RAW type sockets). Bear in mind that the field values in that header have to be in network order.

The ODR process also creates a domain datagram socket for communication with application processes at the node, and binds the socket to a 'well-known' sun_path name for the ODR service.

Because it is dealing with fixed topologies, ODR is, by and large, considerably simpler than AODV. In particular, discovered routes are relatively stable, and there is no need for all the paraphernalia that goes with the possibility of routes changing (such as maintenance of active nodes in the routing tables and timeout mechanisms; timeouts on reverse links; the lifetime field in RREP messages; etc.). Nor will we be implementing source_sequence_#s (in the RREQ messages) or the dest_sequence_# (in RREQ and RREP messages). In reality, we should (though, for the sake of simplicity, we will not) implement some sort of sequence number mechanism, or some alternative mechanism such as split horizon, if we are to avoid possible routing-loop scenarios in a "count to infinity" context (I shall explain this point in class). However, we want ODR to discover shortest-hop paths, and we want it to do so in a reasonably efficient manner. This necessitates having one or two aspects of its operation work in a different, possibly slightly more complicated, way than AODV does.

ODR has several basic responsibilities:
- Build and maintain a routing table. For each destination in the table, the routing table structure should include, at a minimum, the next-hop node (in the form of the Ethernet address for that node) and outgoing interface index, the number of hops to the destination, and a timestamp of when the routing table entry was made or last "reconfirmed"/updated (a sketch of one possible entry appears after the staleness discussion below). Note that a destination node in the table is to be identified only by its 'canonical' IP address, and not by any other IP addresses the node has.
- Generate a RREQ in response to a time client calling msg_send for a destination for which ODR has no route (or for which a route exists, but msg_send has the flag parameter set or the route has gone 'stale'; see below), and 'flood' the RREQ out on all the node's interfaces (except for the interface it came in on and, of course, the interfaces eth0 and lo). Flooding is done using an Ethernet broadcast destination address (0xff:ff:ff:ff:ff:ff) in the outgoing frame header. Note that a copy of the broadcast packet is supposed to / might be looped back to the node that sends it (see p. 535 in the Stevens textbook); ODR will have to take care not to treat these copies as new incoming RREQs. Also note that ODR at the client node increments the broadcast_id every time it issues a new RREQ for any destination node.
- When a RREQ is received, ODR has to generate a RREP if it is at the destination node, or if it is at an intermediate node that happens to have a (non-'stale'; see below) route to the destination. Otherwise, it must propagate the RREQ by flooding it out on all the node's interfaces (except the interface the RREQ arrived on). Note that as it processes received RREQs, ODR should enter the 'reverse' route back to the source node into its routing table, or update an existing entry back to the source node if the received RREQ shows a shorter-hop route, or a route with the same number of hops but going through a different neighbour.
The timestamp associated with the table entry should be updated whenever an existing route is either "reconfirmed" or updated. Obviously, if the node is going to generate a RREP, updating an existing entry back to the source node with a more efficient route, or a same-hops route using a different neighbour, should be done before the RREP is generated.
Unlike AODV, when an intermediate node receives a RREQ for which it generates a RREP, it should nevertheless continue to flood the RREQ it received if the RREQ pertains to a source node whose existence it has heretofore been unaware of, or the RREQ gives it a more efficient route than it knew of back to the source node (the reason for continuing to flood the RREQ is so that other nodes in the intranet also become aware of the existence of the source node, or of the potentially more optimal reverse route to it, and update their tables accordingly). However, since an RREP for this RREQ is being sent by our node, we do not want other nodes who receive the RREQ propagated by our node, and who might be in a position to do so, to also send RREPs. So we need to introduce a field in the RREQ message, not present in the AODV specifications, which acts like a "RREP already sent" field. Our node sets this field before further propagating the RREQ, and nodes receiving an RREQ with this field set do not send RREPs in response, even if they are in a position to do so.
ODR may, of course, receive multiple, distinct instances of the same RREQ (the combination of source_addr and broadcast_id uniquely identifies the RREQ). Such RREQs should not be flooded out unless they have a lower hop count than instances of that RREQ that had previously been received. By the same token, if ODR is in a position to send out a RREP, and has already done so for this, now repeating, RREQ, it should not send out another RREP unless the RREQ shows a more efficient, previously unknown, reverse route back to the source node. In other words, ODR should not generate essentially duplicative RREPs, nor generate RREPs to instances of RREQs that reflect reverse routes to the source that are not more efficient than what we already have.
- Relay RREPs received back to the source node (this is done using the 'reverse' route entered into the routing table when the corresponding RREQ was processed). At the same time, a 'forward' path to the destination is entered into the routing table. ODR could receive multiple, distinct RREPs for the same RREQ. The 'forward' route entered in the routing table should be updated to reflect the shortest-hop route to the destination, and RREPs reflecting suboptimal routes should not be relayed back to the source. In general, maintaining a route and its associated timestamp in the table in response to RREPs received is done in the same manner described above for RREQs.
- Forward time client/server messages along the next hop. (The following is important: you will lose points if you do not implement it.) Note that such application payload messages (especially if they are the initial request from the client to the server, rather than the server response back to the client) can be like "free" RREPs, enabling nodes along the path from the source (client) node to the destination (server) node to build a reverse path back to the client node whose existence they were heretofore unaware of (or, possibly, to update an existing route with a more optimal one).
Before it forwards an application payload message along the next hop, ODR at an intermediate node (and also at the final destination node) should use the message to update its routing table in this way. Thus, calls to msg_send by time servers should never cause ODR at the server node to initiate RREQs, since the receipt of a time client request implies that a route back to the client node should now exist in the routing table. The only exception to this is if the server node has a staleness parameter of zero (see below).

A routing table entry has associated with it a timestamp that gives the time the entry was made into the table. When a client at a node calls msg_send, and an entry for the destination node already exists in the routing table, ODR first checks that the routing information is not 'stale'. A stale routing table entry is one that is older than the value defined by the staleness parameter given as a command line argument to the ODR process when it is executed. ODR deletes stale entries (as well as non-stale entries when the flag parameter in msg_send is set) and initiates a route rediscovery by issuing a RREQ for the destination node. This will force periodic updating of the routing tables to take care of failed nodes along the current path, Ethernet addresses that might have changed, and so on. Similarly, as RREQs propagate through the intranet, existing stale table entries at intermediate nodes are deleted and new route discoveries propagated. As noted above when discussing the processing of RREQs and RREPs, the associated timestamp for an existing table entry is updated in response to having the route either "reconfirmed" or updated (this applies both to reverse routes, by virtue of RREQs received, and to forward routes, by virtue of RREPs).

Finally, note that a staleness parameter of 0 essentially indicates that the discovered route will be used only once, when first discovered, and then discarded. Effectively, an ODR with staleness parameter 0 maintains no real routing table at all; instead, it forces route discoveries at every step of its operation. As a practical matter, ODR should be run with staleness parameter values considerably larger than the longest RTT on the intranet, otherwise performance will degrade considerably (and collapse entirely as the parameter values approach 0). Nevertheless, for robustness, we need to implement a mechanism by which an intermediate node that receives a RREP or application payload message for forwarding, and finds that its relevant routing table entry has since gone stale, can initiate a RREQ to rediscover the route it needs.
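Pulling the routing-table requirements and the staleness rule together, a purely illustrative entry layout and staleness test might look like the following; the rt_entry and route_usable identifiers, field names, and sizes are assumptions, not prescribed by the assignment.

```c
#include <time.h>
#include <netinet/in.h>        /* INET_ADDRSTRLEN */
#include <linux/if_ether.h>    /* ETH_ALEN */

/* One possible routing-table entry: next hop as an Ethernet address,
 * outgoing interface index, hop count, and the timestamp used for the
 * staleness test. Destinations are identified by 'canonical' IP only. */
struct rt_entry {
    char          dst_ip[INET_ADDRSTRLEN]; /* 'canonical' IP, presentation form */
    unsigned char next_hop[ETH_ALEN];      /* Ethernet address of next-hop node */
    int           if_index;                /* outgoing interface index */
    int           hops;                    /* hop count to destination */
    time_t        stamp;                   /* made / last reconfirmed or updated */
    int           in_use;
};

/* A route is usable only if present, not force-invalidated, and younger
 * than the staleness parameter (ODR's command-line argument, seconds).
 * With staleness 0, every lookup fails and a rediscovery is forced. */
static int route_usable(const struct rt_entry *e, long staleness, int force) {
    if (e == NULL || !e->in_use || force)
        return 0;                          /* missing entry or forced rediscovery */
    return (time(NULL) - e->stamp) < staleness;
}
```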
Intermediate nodes that are not the destination node, but which do have a route to the destination node, should not respond with RREPs to an RREQ which has the forced discovery field set. Instead, they should continue to flood the RREQ so that it eventually reaches the destination node, which will then respond with an RREP. The intermediate nodes relaying such an RREQ must update their 'reverse' route back to the source node accordingly, even if the new route is less efficient (i.e., has more hops) than the one they currently have in their routing table. The destination node responds to the RREQ with an RREP in which this field is also set. Intermediate nodes that receive such a forced discovery RREP must update their 'forward' route to the destination node accordingly, even if the new route is less efficient (i.e., has more hops) than the one they currently have in their routing table. This behaviour will cause a forced discovery RREQ to be responded to only by the destination node itself and not by any other node, and will cause intermediate nodes to update their routing tables to both source and destination nodes in accordance with the latest routing information received, to cover the possibility that older routes are no longer valid because nodes and/or links along their paths have gone down.

A type 2, application payload, message needs to contain the following type of information:
- type = 2
- 'canonical' IP address of source node
- 'port' number of source application process (this, of course, is not a real port number in the TCP/UDP sense, but simply a value that ODR at the source node uses to designate the sun_path name for the source application's domain socket)
- 'canonical' IP address of destination node
- 'port' number of destination application process (this is passed to ODR by the application process at the source node when it calls msg_send; it designates the sun_path name for an application's domain socket at the destination node)
- hop count (this starts at 0 and is incremented by 1 at each hop so that ODR can make use of the message to update its routing table, as discussed above)
- number of bytes in application message

The fields above essentially constitute a 'header' for the ODR message. Note that fields which you choose to have carry numeric values (rather than ascii characters, for example) must be in network byte order. ODR-defined numeric-valued fields in type 0, RREQ, and type 1, RREP, messages must, of course, also be in network byte order. Also note that only the 'canonical' IP addresses are used for the source and destination nodes in the ODR header. The same has to be true in the headers for type 0, RREQ, and type 1, RREP, messages; the general rule is that ODR messages only carry 'canonical' IP node addresses. The last field in the type 2 ODR message is essentially the data payload of the message:
- application message given in the call to msg_send

An ODR protocol message is encapsulated as the data payload of an Ethernet frame, whose header it fills in as follows:
- source address = Ethernet address of the outgoing interface of the current node where ODR is processing the message
- destination address = Ethernet broadcast address for type 0 messages; Ethernet address of the next-hop node for type 1 & 2 messages
- protocol field = protocol value for the ODR PF_PACKET socket(s)
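A hedged sketch of one possible ODR message layout, and of filling in the Ethernet frame header for a SOCK_RAW send, follows. The struct odr_msg fields, the fixed-size payload, and ODR_PROTO are illustrative choices only; a real implementation would marshal the numeric fields with htonl/ntohl as noted above.

```c
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>    /* ETH_ALEN, ETH_FRAME_LEN */
#include <net/ethernet.h>      /* struct ether_header */

#define ODR_PROTO       0x8944 /* same illustrative protocol value as above */
#define ODR_MAX_PAYLOAD 512    /* assumed application-message size limit */

/* One possible ODR message layout (names and sizes illustrative).
 * Multi-byte numeric fields travel in network byte order (htonl). */
struct odr_msg {
    uint8_t  type;                 /* 0 = RREQ, 1 = RREP, 2 = app payload */
    uint8_t  rrep_already_sent;    /* RREQ only: suppresses further RREPs */
    uint8_t  forced_discovery;     /* set in the RREQ, echoed in the RREP */
    char     src_ip[16];           /* 'canonical' IPs only, presentation form */
    char     dst_ip[16];
    uint32_t src_port;             /* application 'port' numbers (type 2) */
    uint32_t dst_port;
    uint32_t hop_count;            /* starts at 0, incremented at each hop */
    uint32_t broadcast_id;         /* with src_ip, uniquely identifies a RREQ */
    uint32_t payload_len;          /* bytes of application message */
    char     payload[ODR_MAX_PAYLOAD];
};

/* Encapsulate an ODR message as the data payload of an Ethernet frame,
 * building the 14-byte header by hand (SOCK_RAW), and send it out on
 * the given interface. dst is ff:ff:ff:ff:ff:ff for type 0 messages. */
static int odr_send_frame(int fd, int ifindex,
                          const unsigned char src[ETH_ALEN],
                          const unsigned char dst[ETH_ALEN],
                          const struct odr_msg *m, size_t mlen) {
    unsigned char frame[ETH_FRAME_LEN];
    struct ether_header *eh = (struct ether_header *)frame;
    if (mlen > sizeof(frame) - sizeof(*eh))
        return -1;                          /* message too big for one frame */
    memcpy(eh->ether_dhost, dst, ETH_ALEN);
    memcpy(eh->ether_shost, src, ETH_ALEN);
    eh->ether_type = htons(ODR_PROTO);      /* frame type field, network order */
    memcpy(frame + sizeof(*eh), m, mlen);

    struct sockaddr_ll to;
    memset(&to, 0, sizeof(to));
    to.sll_family  = PF_PACKET;
    to.sll_ifindex = ifindex;               /* outgoing interface */
    to.sll_halen   = ETH_ALEN;
    memcpy(to.sll_addr, dst, ETH_ALEN);
    return (int)sendto(fd, frame, sizeof(*eh) + mlen, 0,
                       (struct sockaddr *)&to, sizeof(to));
}
```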
Last but not least, whenever ODR writes an Ethernet frame out through its socket, it prints on stdout the message
ODR at node vm i1 : sending frame hdr src vm i1 dest addr ODR msg type n src vm i2 dest vm i3
where addr is in presentation format (i.e., hexadecimal xx:xx:xx:xx:xx:xx) and gives the destination Ethernet address in the outgoing frame header. Other nodes in the message should be identified by their vm name. A message should be printed out for each packet sent out on a distinct interface.

ODR and the API

When the ODR process first starts, it must construct a table in which it enters all well-known 'port' numbers and their corresponding sun_path names; these constitute permanent entries in the table. Thereafter, whenever it reads a message off its domain socket, it must obtain the sun_path name for the peer process socket and check whether that name is entered in the table. If not, it must select an 'ephemeral' 'port' value by which to designate the peer sun_path name and enter the pair <port value, sun_path name> into the table. Such entries cannot be permanent, otherwise the table would grow unboundedly in time, with entries surviving forever, beyond the peer processes' demise. We must associate a time_to_live field with a non-permanent table entry, and purge the entry if nothing is heard from the peer for that amount of time. Every time a peer process for which a non-permanent table entry exists communicates with ODR, its time_to_live value should be reinitialized. Note that when ODR writes to a peer, it is possible for the write to fail because the peer does not exist: it could be a 'well-known' service that is not running, or we could be in the interval between a process with a non-permanent table entry terminating and the expiration of its time_to_live value. (A sketch of such a table appears at the end of this entry.)

Notes

A proper implementation of ODR would probably require that RREQ and RREP messages be backed up by some kind of timeout and retransmission mechanism, since the network transmission environment is not reliable. This would considerably complicate the implementation (because at any given moment a node could have multiple RREQs that it has flooded out but for which it has still not received RREPs; the situation is further complicated by the fact that not all intermediate nodes receiving and relaying RREQs necessarily lie on a path to the destination, and therefore should expect to receive RREPs), and, learning-wise, would not add much to the experience you should have gained from Assignment 2.
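Closing out this entry, here is an illustrative sketch of the <'port', sun_path> table described under "ODR and the API" above, with permanent entries for well-known services and a time_to_live purge for ephemeral ones; the table size, TTL value, and all identifiers are assumptions.

```c
#include <time.h>

#define ODR_TABLE_SIZE 64      /* assumed capacity */
#define PORT_TTL_SECS  60      /* assumed time_to_live for ephemeral entries */

/* One <'port', sun_path> mapping; well-known services are entered as
 * permanent at ODR startup, ephemeral peers are added on first contact. */
struct port_entry {
    int    port;               /* well-known or 'ephemeral' 'port' value */
    char   sun_path[108];      /* peer's domain-socket pathname (sun_path size) */
    int    permanent;          /* 1 for well-known services, never purged */
    time_t last_heard;         /* reset every time the peer talks to ODR */
    int    in_use;
};

static struct port_entry port_table[ODR_TABLE_SIZE];

/* Purge non-permanent entries not heard from within the TTL, so the
 * table cannot grow unboundedly past the peer processes' demise. */
static void purge_stale_ports(void) {
    time_t now = time(NULL);
    for (int i = 0; i < ODR_TABLE_SIZE; i++)
        if (port_table[i].in_use && !port_table[i].permanent &&
            now - port_table[i].last_heard > PORT_TTL_SECS)
            port_table[i].in_use = 0;
}
```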
kcl-tscm / Mff: A Python package for building nonparametric force fields from machine learning
JiaxuanLiu-Arsko / DPmoire: A tool for constructing accurate machine learning force fields in moiré systems
orlandoacevedo / GAML: Genetic Algorithm Machine Learning (GAML) software package for automated force field parameterization.