Post A0UltV6LNDVM3Hmcc4 by gassahara@mstdn.io
 (DIR) Post #A0QrOtoQTFJk7x1VNg by amiloradovsky@functional.cafe
       2020-10-22T21:19:45Z
       
       1 likes, 0 repeats
       
        Some ISAs (AMD64, for example) provide hardware carry-in and carry-out flags, making addition of numbers longer than the width of their registers easier. Others (e.g. RISC-V) don't, because they are meant to be minimal, and you can compute, say, the carry-out of x + y as just x > ~y (compared as unsigned). And BTW that's how the carry bits in a carry-lookahead adder are calculated anyway, giving logarithmic, instead of linear, depth/speed. This seems to also be one of the reasons why #C doesn't have standard constructs for these checks. OTOH, C's fucked-up type system makes things like this very fragile and quirky, even if technically feasible. Sure, there are #GCC extensions (builtins) for a lot of tasks like this, but they're just that: extensions, non-standard. #arithmetic
       
 (DIR) Post #A0SZa2ZcpL2LcuBPvc by gassahara@mstdn.io
       2020-10-22T21:40:05Z
       
       0 likes, 0 repeats
       
        @amiloradovsky This is important to note (not all arithmetic is created equal :) ). Would you like to share some context? And which operations does the C type system make difficult?
       
 (DIR) Post #A0SZa6AFTd7AlMLCvg by amiloradovsky@functional.cafe
       2020-10-22T21:47:13Z
       
       0 likes, 0 repeats
       
        @gassahara I hate guessing the width of int and long int on different ISAs. Its representation of strings and characters, the lack of booleans, automatic and unpredictable type casting. Well, overflow checking. The global errno and the lack of exceptions. The preprocessor, macros as templates and constants, files as modules. The list goes on and on, just off the top of my head.
       
 (DIR) Post #A0SZa82sU0PIb4uLia by gassahara@mstdn.io
       2020-10-22T21:58:14Z
       
       0 likes, 0 repeats
       
        @amiloradovsky Oh OK :) I meant that the post said "things like this", and I didn't know what you meant :). You're totally right: it just *seems* natural to do boolean operations on ints, but it is certainly *not* natural. Would you agree that the hacky solution of using only char, to make pointers to byte-sized elements (and processing them as one does with arbitrary precision), is a way of achieving this in a standard way? (This is about porting C programs to other architectures, right?)
       
 (DIR) Post #A0SZa8Z8Y4DAD80704 by amiloradovsky@functional.cafe
       2020-10-22T22:23:54Z
       
       0 likes, 0 repeats
       
        @gassahara You can embed booleans into integers and operate on them there with integer operations, but booleans are too important to be neglected like that; they deserve their own data type. If you're going to implement unbounded (a.k.a. big/infinite) numbers, you had better use the longest width supported by the hardware (register size, memory bus size) as the digits. To make the overflow check work with different widths, you also should use the longest supported type to compare the numbers in (and that's tricky). To communicate with the peripherals you don't use individual octets either; it's at least 32-bit-wide registers these days (read/written atomically). In any case, there is always an alignment to consider, wider than just one octet. Strings also deserve their own data type.
       
 (DIR) Post #A0SZa9unX18GOb8vOy by gassahara@mstdn.io
       2020-10-22T22:52:32Z
       
       0 likes, 0 repeats
       
        @amiloradovsky Seeing your profile, I'm very intrigued and interested. Sorry, I still don't get where x > ~y would be fragile in C though. What are the quirks that present themselves when implementing it? It would seem straightforward, no? About embedded booleans I would need more context before forming an opinion, since boolean operators, tests, and size are somewhat controversial (Cantrill, Thompson, etc. :) ). About strings I do strongly disagree :)
       
 (DIR) Post #A0SZaAjUUXgKvorPPs by amiloradovsky@functional.cafe
       2020-10-22T23:09:47Z
       
       0 likes, 0 repeats
       
        @gassahara WRT overflow, you need to compare the numbers as unsigned, and make sure all the widths match, or it won't work. The safest way is probably to make the type a parameter of the macro, just like with, say, min_t and max_t, but you can't use typeof and only change the signedness. Cumbersome. This is the best I can think of: #define carry(width, x, y) ((uint ## width ## _t)(x) > (uint ## width ## _t)~(y))
       
 (DIR) Post #A0SZaBCClmeOMsILAm by gassahara@mstdn.io
       2020-10-22T23:31:19Z
       
       0 likes, 0 repeats
       
        @amiloradovsky And thank you, this was a lengthy discussion in a mathematical modelling project we developed (implemented in C without external libraries). Your objections prove me wrong though; I was certainly mistaken about the universality of these methods...
       
 (DIR) Post #A0SZaCeFN0fws8QFWK by amiloradovsky@functional.cafe
       2020-10-22T23:55:54Z
       
       0 likes, 0 repeats
       
       @gassahara Either way, it's not an obvious thing to consider, even though it's just about summation of integers.
       
 (DIR) Post #A0SZaFvNBn9p27ST8C by namark@qoto.org
       2020-10-23T17:13:14Z
       
       0 likes, 0 repeats
       
        @amiloradovsky @gassahara Multiplication has a similar problem, which C just completely ignored, and there are even fewer excuses one can make there. Be it hardware or software, if you implement multiplication for x you need to give me the result in 2x; otherwise don't implement multiplication, and give me efficient primitives to do it myself. I think it makes sense to generalize this to addition as well, the high number being a boolean (separate type or not). I don't like the approach of having a biggest type, because then what do I do when multiplying two of those? I don't have a generic interface to work with. Numbers that are strictly allowed or disallowed to overflow are useful as well, but at the foundation we should have a more expressive interface.
       
 (DIR) Post #A0SmqHXUFh9k4wG2kq by amiloradovsky@functional.cafe
       2020-10-23T19:41:56Z
       
       0 likes, 0 repeats
       
        @namark @gassahara Well, for multiplication you usually need to multiply an index by a small constant (the size of an element of an array), and the result must not exceed the maximum address supported by the CPU. Generally speaking, all the arithmetic the CPU is doing is on addresses; otherwise it's a job for a dedicated mathematical co-processor (FPU, GPU, DSP, etc.), and/or there should be dedicated instructions for the computations (SIMD, MIMD). To make sure the result is contained within the range of the numeric type, you should first convert (cast) the value of the small type into a(t least twice) larger one. BTW I can hardly imagine why a machine may need a 128-bit address space, but hardware support for such large numbers is useful for making infinite-range arithmetic faster (cryptography, number theory studies, something else?).
       
 (DIR) Post #A0SoH0vSEDvSKGmMLI by gassahara@mstdn.io
       2020-10-23T19:49:22Z
       
       0 likes, 0 repeats
       
        @amiloradovsky @namark There is a reason why https://gmplib.org is a thing =). Such operations are really difficult to get right in something even as high-level as C (still thinking we're talking about portability). Erlang is relevant: its solution was to make arithmetic functions (messages) and numbers actors, so that you could always ask for more. This has two caveats: a large runtime (in comparison with C) and fixed-width atoms, that is, a 1000-bit number is a lot of, say, 32-bit atoms.
       
 (DIR) Post #A0SoH1JCnuvNVvtKMa by amiloradovsky@functional.cafe
       2020-10-23T19:56:02Z
       
       1 likes, 0 repeats
       
        @gassahara @namark I wouldn't call C a high-level language: this is as low-level as you can get without resorting to an assembler (not actually required, because the compilers provide hardware-dependent libraries to access the specific features). GMP, yep, is what I'd use for unbounded arithmetic.
       
 (DIR) Post #A0SoKav6rHYgpUKDcu by newt@stereophonic.space
       2020-10-23T19:58:39.451741Z
       
       3 likes, 1 repeats
       
        @amiloradovsky @gassahara @namark C is not a low-level language. Obligatory link to my favorite article about C: https://queue.acm.org/detail.cfm?id=3212479
       
 (DIR) Post #A0Sps9ZL9hfznkhzn6 by amiloradovsky@functional.cafe
       2020-10-23T20:13:24Z
       
       1 likes, 0 repeats
       
        @newt @gassahara @namark C is low-level in the sense that it operates with not very abstract primitives. The way the program is written is pretty much what it will look like compiled and loaded into memory. That a lot of the high-end hardware is pretending to be dumber than it is does not mean that C is high-level. Sure, it would be nice if the hardware exposed more of its internal kitchen for direct manipulation by the program, and the compiler could take advantage of it. And sure, it would be nice if more people did low-level programming (interrupt handling, reading and writing of hardware registers, implementation of highly-optimized data structures) in a language with a saner type system.
       
 (DIR) Post #A0Sq9nUBzxauWaiKeW by newt@stereophonic.space
       2020-10-23T20:19:06.919817Z
       
       0 likes, 0 repeats
       
        @amiloradovsky @gassahara @namark C kind of assumes lots of things that aren't true of modern hardware. Flat memory, for one. Basically, the only "low-level" feature of C is relying on pointers for memory management, yet pointers aren't even low-level per se, for many reasons. I wouldn't call C a low-level language. It's just a crappy language that the whole industry has Stockholm syndrome with.
       
 (DIR) Post #A0StgltnDgh51N6TNQ by namark@qoto.org
       2020-10-23T20:58:40Z
       
       0 likes, 0 repeats
       
        @amiloradovsky @gassahara I'm not talking about bignum algebra, I'm talking about arithmetic. Multiplication of two digits should return two digits; it is simply natural for multiplication. The two digits returned do not magically multiply to 4; that's out of scope. If you want to discard the higher digit for your specific use case, or raise an error when the higher digit is set, you are free to do so. Someone else might want to discard the lower, or implement further multiplication, or specific error handling based on the amount of overflow. The problem with casting to a (virtual) wider type is that it implies the existence of a wider type on which you can do further arithmetic, and for which a wider type does not exist. It's an edge case, and not a generic interface for arithmetic.
       
 (DIR) Post #A0Svs7LE2NQGcLTMbA by amiloradovsky@functional.cafe
       2020-10-23T21:23:07Z
       
       0 likes, 0 repeats
       
        @namark @gassahara The question is how and why you implement it in the hardware: you don't need the upper part of the result when doing address arithmetic, and multiplication returning only the lower part is simpler to implement. BTW, even if the hardware doesn't compute the upper part of the result for you, you can still split the factors into parts and multiply them separately: (H0 * 10^n + L0 * 10^0) * (H1 * 10^n + L1 * 10^0) = H0 * H1 * 10^(2n) + (H0 * L1 + L0 * H1) * 10^n + L0 * L1 * 10^0. Here 2n is the width of the registers. All three components may be multiplied without overflowing; then just split the middle term into high and low parts again and add them to the products, left-shifted by -n and +n respectively.
       
 (DIR) Post #A0SyCIuOwJxQC1gAyG by gassahara@mstdn.io
       2020-10-23T21:48:54Z
       
       1 likes, 0 repeats
       
        @newt @amiloradovsky @namark Totally, any old language would work for embedded, in my opinion; C is just what we're used to =)
       
 (DIR) Post #A0SyMv0s7i6bGdldPU by amiloradovsky@functional.cafe
       2020-10-23T20:33:28Z
       
       0 likes, 0 repeats
       
        @newt @gassahara @namark You use section attributes to place different sorts of code and data into different sections. Then you use the linker script to place different sections into different memories/memory regions (RAM, ROM, ICCM, DCCM, etc.). Pointers are how you address the memory, and making I/O ports memory-mapped is better than dedicated I/O instructions. Sure, you have to mark the regions with attributes, to make sure only true memory accesses are cacheable, and the ones accessed with side effects (I/O) are not. In general you still have a single address space with different types of memory mapped into specific regions, and I see no problem with that. What is your first contender to replace C in embedded and OS kernels?
       
 (DIR) Post #A0SyMvF3GzRpycOxVo by newt@stereophonic.space
       2020-10-23T21:51:05.398241Z
       
       0 likes, 0 repeats
       
        @amiloradovsky @gassahara @namark My god! Literally anything that is capable of producing code that runs without a runtime. There are variants of C# and Python capable of running on microcontrollers. Hell, you could even make a subset of Haskell capable of this.
       
 (DIR) Post #A0T0u02idxKIKcX7tw by amiloradovsky@functional.cafe
       2020-10-23T22:04:37Z
       
       0 likes, 0 repeats
       
        @newt @gassahara @namark Running without a runtime rules out most of the languages that have ever existed, because they use automatic memory management. Python is dynamically typed, not the best choice for any critical parts, and I'd have trouble figuring out how its data structures correspond to the memory regions. I seriously doubt Haskell can come anywhere close to running in only a few tens of K's of memory.
       
 (DIR) Post #A0T0u99QmN2ia2k772 by newt@stereophonic.space
       2020-10-23T22:19:28.604588Z
       
       0 likes, 0 repeats
       
       @amiloradovsky @gassahara @namark you are missing the point. Haskell as you know it, with all the libraries and stuff, surely won't fit. But a subset just might. I mean, it worked for Java and C#.
       
 (DIR) Post #A0T15VnqtrgG1H4KrA by amiloradovsky@functional.cafe
       2020-10-23T22:20:49Z
       
       0 likes, 0 repeats
       
       @newt @gassahara @namark That would be just C but with slightly different syntax.
       
 (DIR) Post #A0T15WIh3CLnYvUxvc by newt@stereophonic.space
       2020-10-23T22:21:35.138514Z
       
       0 likes, 0 repeats
       
       @amiloradovsky @gassahara @namark no, it would not. Does J2ME look like C to you?
       
 (DIR) Post #A0T1fW6pEv4ZOT70JE by gassahara@mstdn.io
       2020-10-23T22:00:43Z
       
       1 likes, 0 repeats
       
        @newt I think I saw an example of embedded Haskell, can't remember where; Python I don't know. Did you see David Patterson's ACM acceptance speech? (20x slower than C.) Readability is important, however. I would recommend something like uLisp (http://www.ulisp.com/show?1AA0) if forced to suggest alternatives; my first option, however, is still C, because it is the language of the system (in Unixes) and it is convenient to have a slim multi-platform project =) [for whatever that's worth now]
       
 (DIR) Post #A0T1jDTmRkVClyc4Nk by amiloradovsky@functional.cafe
       2020-10-23T22:24:14Z
       
       0 likes, 0 repeats
       
       @newt @gassahara @namark I don't know Java, but, yes, it pretty much does.
       
 (DIR) Post #A0T1jLXGUg2jl6IxvM by newt@stereophonic.space
       2020-10-23T22:28:43.715842Z
       
       0 likes, 0 repeats
       
       @amiloradovsky @gassahara @namark erhm... Are you sure? Because the two have completely different semantics.
       
 (DIR) Post #A0T1tktZ3u81J2hNjM by newt@stereophonic.space
       2020-10-23T22:30:39.118891Z
       
       0 likes, 0 repeats
       
       @gassahara there are eDSLs in Haskell targeting embedded platforms. Which is another way of approaching this problem.
       
 (DIR) Post #A0T3WyScexfOcuMU40 by amiloradovsky@functional.cafe
       2020-10-23T22:39:14Z
       
       0 likes, 0 repeats
       
        @newt @gassahara @namark I'm not sure; as I said, I don't know Java. What are the differences, then?
       
 (DIR) Post #A0T3X6vz9fuKuIA2wS by newt@stereophonic.space
       2020-10-23T22:48:55.163938Z
       
       0 likes, 0 repeats
       
        @amiloradovsky @gassahara @namark Erhm... Everything? I mean, these are different languages. Syntax is similar, some basic constructs like loops also are, but Java treats objects differently and doesn't even have pointers per se.
       
 (DIR) Post #A0T5WGRWW4ic14dXbU by amiloradovsky@functional.cafe
       2020-10-23T23:01:05Z
       
       1 likes, 0 repeats
       
        @newt @gassahara @namark One might treat every value as a pointer, but even then one still has to have a means to "unbox" it. You need to have something to read and write bit sequences of fixed width at fixed addresses, and be sure those are atomic. Also, wrapping everything into a "box" is not necessarily adequate when memory is scarce. Java seems good in that it doesn't use "files as modules" but classes as module templates (generics). And AFAIK one can use recursion instead of loops. Does it distinguish booleans and integers? Still, I think it is way behind ML. It would be nice if there was a "Systems ML" (Rust isn't quite that).
       
 (DIR) Post #A0T6yBzekjaaHMU1JI by namark@qoto.org
       2020-10-23T23:27:28Z
       
       0 likes, 0 repeats
       
        @amiloradovsky @gassahara I don't know much about hardware; if the word halving is more efficient than, say, a separate "now give me the higher word" instruction, or if some other hardware subsystem has to do it, then sure, I'll let the compiler handle it. I was just yelling at C, and every other language that inherited (or originated) that arithmetic interface. I want to write generic code that works on all arithmetic types, not all but the largest, and even works on weird user-defined types if they implement a basic arithmetic interface, not requiring an elusive wider type that at worst needs arithmetic of its own, and at best a weird bit-shift interface. I think there is a whole world of arithmetic between modular and infinite, not even necessarily on conventional numbers. If I go with the word halving in generic code, in addition to making the compiler's job way harder, what do I do when the user-defined type is a single indivisible digit? I cry, because it's a backwards hack, not a proper interface for arithmetic. So, at the end of the day, I have to handle so many edge cases that I never got around to doing it. The only language I know that has something close to what I want is Swift, and I hate that so very much...
       
 (DIR) Post #A0T7vttHBi9JVbyCKe by amiloradovsky@functional.cafe
       2020-10-23T23:38:16Z
       
       0 likes, 0 repeats
       
        @namark @gassahara It is definitely possible to make an instruction which would use four registers: multiply two of them and place the result in the remaining two. That's how the multiplication in IA-32 was implemented, although you didn't get to choose in which registers to store the result. Not sure about AMD64 (does it use RAX and RBX for the result as well?). But when was the last time you needed to multiply two numbers whose product wouldn't fit in 64 bits?
       
 (DIR) Post #A0T8x3Co8kzsduDZoW by namark@qoto.org
       2020-10-23T23:49:40Z
       
       0 likes, 0 repeats
       
        @amiloradovsky @gassahara When I was writing generic code, and it wasn't any number of bits, it was digit A and digit B, and I needed to add them (not advanced enough to multiply yet, but some day), wishing it to be compiled, optimized, and run on all imaginable and unimaginable architectures/platforms (including those that are used to design hardware) for all possible and impossible purposes. This is my fetish, yes.
       
 (DIR) Post #A0TAFbG0NMtzQ07e8e by amiloradovsky@functional.cafe
       2020-10-24T00:04:14Z
       
       0 likes, 0 repeats
       
        @namark @gassahara You could use hardware-dependent pieces to make it more efficient on that particular platform, and use the algorithm described above by default.
       
 (DIR) Post #A0U3j9DrT2OxJjYKEi by namark@qoto.org
       2020-10-24T10:25:49Z
       
       0 likes, 0 repeats
       
        @amiloradovsky @gassahara But I don't want hardware-dependent pieces in my code ToT I want them in the compiler. The language must be expressive enough to make the job of the compiler writers easier, and also fit similar use cases for types that have nothing to do with the hardware. If you want it to work with the native ALU word, or the smallest SIMD/GPU size that can fit the input for best performance, or a special type representing one ternary bit to be compiled into a circuit diagram, or something that's not even a number in the conventional sense, you customize that through the type system, while the algorithm and the interface remain generic, because it's arithmetic; I didn't make it up, ancient mathematicians did, and it survived to this day. As it stands, the C-style arithmetic types are edge cases that I have to work around, while their essence can be easily captured by an interface that returns two digits/numbers. A type that only needs the lower part and just doesn't care if it overflows can always set the higher part to zero (or a special zero type, if you have a type system), and the compiler can see that and optimize. An infinite-precision, or more precisely an "assumed to not overflow" (I'm looking at you, int!), type can do the same.
       
 (DIR) Post #A0URBA646m7j5CEWP2 by amiloradovsky@functional.cafe
       2020-10-24T14:48:37Z
       
       0 likes, 0 repeats
       
        @namark @gassahara You have to have hardware-dependent pieces if you want efficiency: generic code is usually not the most optimal. The compiler's standard library is also supposed to contain only widely used pieces; the niche ones are better placed into separate libraries. The "full" multiplication is a niche application: again, what you multiply is usually an index by a size, to get an address shift. You perform problem-space computations with dedicated instructions/processors. Expressivity of a language tends to make the job of the compiler writer harder, because it costs complexity. Generality is good and all, but it's not always obvious how to turn a specification into something executable or synthesizable, especially the latter: the synthesis tools (for hardware) are far behind the compilers (for software). This behavior is not specific to C, but to most languages which use fixed-width integers by default: because it's niche. There are overall better options than C, even for low-level programming, and reasons why the replacement is superior, but the multiplication leaving only the lower part is not one of those.
       
 (DIR) Post #A0UVWt22dSQ2LPxwMi by gassahara@mstdn.io
       2020-10-24T14:54:50Z
       
       0 likes, 0 repeats
       
        @amiloradovsky @namark Completely agree. What a great exposition of expressivity vs. granularity (an open problem); glad to see the thread mutated into this, since it is not a problem of /missing abstractions/ in C so much as a lack of abstractions that are compatible with universal constraints (my Haskell code *won't* run on embedded, and it will be a pain to simulate similar behaviour with compiler-close-to-hardware constructs).
       
 (DIR) Post #A0UVX1qfr5fpgliUOe by gassahara@mstdn.io
       2020-10-24T14:58:47Z
       
       0 likes, 0 repeats
       
        @amiloradovsky @namark I think Curry-Howard is relevant here, since developments in either science can have repercussions in the other. id's fast inverse square root trick comes to mind, as do JVM developments on embedded (although the last one turned out quite ugly).
       
 (DIR) Post #A0UVXCGv6Mbs144ah6 by namark@qoto.org
       2020-10-24T15:37:21Z
       
       0 likes, 0 repeats
       
        @gassahara @amiloradovsky The sqrt is another example of a horrible interface. There is no such thing as a square root: there is no known arithmetic that can compute it, there is no known algebra that works with it. There is the Babylonian method, for example, which naturally requires an initial guess and a terminating condition (a precision check or a number of iterations). This interface cleanly accommodates the sqrt trick, and many other so-called optimizations of square root. I don't want you to not be able to tweak the parameters; I want your tweaking to be legible and self-documenting. I don't think integer overflow is a niche problem; it comes up quite often in various contexts, in my experience. I went on about the multiplication interface because it applies to addition as well, and I think it is a good basis for a generic arithmetic interface (or the only acceptable basis). Generic code is and will be more efficient than specific code: as systems get more complex and the line between hardware and software blurs (especially if we finally have free markets in the production of these), most optimization will be done by compilers, as a simple matter of separation of concerns. It is also our only hope of avoiding degenerate hardware/software that is optimized to run old hardware-specific code. Generic code must be as expressive as possible, rejecting abstractions that do not properly capture the essence of the algorithms; otherwise it's not generic, it's a mistake. I wish I could write this niche library of arithmetic that supports fundamental types, but I can't without breaking the C/C++ standard.
       
 (DIR) Post #A0UVz9nnsRdEStmp3A by gassahara@mstdn.io
       2020-10-24T15:42:29Z
       
       0 likes, 0 repeats
       
        @namark @amiloradovsky Gerald Sussman's lecture at Dan Friedman's 60th birthday, "Legacy of Computers", is in my library and comes to mind (I don't think you're right =) ). I would share the link, but it is only on YouTube and we should not be promoting access to that site.
       
 (DIR) Post #A0UX0Z4jUvuB5kwLhI by namark@qoto.org
       2020-10-24T15:53:48Z
       
       0 likes, 0 repeats
       
        @gassahara @amiloradovsky Are we listing YouTube lectures now? Efficient Programming with Components, by Alexander Stepanov; Programming Conversations, by Alexander Stepanov; everything else you can find (and let me know) by Alexander Stepanov.
       
 (DIR) Post #A0UX6LvxGrCA4UauMC by namark@qoto.org
       2020-10-24T15:54:58Z
       
       0 likes, 0 repeats
       
       @gassahara @amiloradovsky not the nail polish guy!
       
 (DIR) Post #A0UZ7fkxFxC5kRCpxQ by amiloradovsky@functional.cafe
       2020-10-24T16:17:38Z
       
       0 likes, 0 repeats
       
        @namark @gassahara WRT square roots, those are irrational (although algebraic) numbers, and you can't represent them precisely in a computer, except nominally. Their approximate values are computed as a series and rounded, so the distinction between transcendental and algebraic numbers doesn't matter here anyway. If you're interested in the nominal representation, try Pari and the field extensions. Meanwhile, we compute the multiplicative inverse also as a series, and it's also only an approximation, because floating-point numbers can't represent most rationals exactly. It's niche because most programs only need multiplication to compute addresses. Generic code tends to take up more space and time to execute, because it's defined with lower-level primitives guaranteed to be present everywhere, while particular platforms may offer higher-level instructions and optimize it further in hardware. Anyway, if you aim for the most generality, use Coq and extract the programs from it. There you can define whatever model you like and operate within it. The extraction won't be trivial in most cases, though. WRT the library, have you looked at mpn from GMP? It's a fast and low-level interface for computations with a fixed number of "digits".
       
 (DIR) Post #A0Uajvo4897YU8T5ns by gassahara@mstdn.io
       2020-10-24T16:30:19Z
       
       0 likes, 0 repeats
       
        @namark Thank you for the reference about generics. I think you and I have different goals for computational methods. Your view is more about representations (let me write a guarded Haskell expression and let other people worry about the implementation); there are many tools for that (https://functional.cafe/@amiloradovsky/105090662946044943, +2 for Coq =), GHC is the most advanced tool today). Mine is about the smallest, most efficient way that any computer could do a specific operation or function; portability is extremely important.
       
 (DIR) Post #A0Uajw7CyyQvRVQNdo by namark@qoto.org
       2020-10-24T16:35:38Z
       
       0 likes, 0 repeats
       
        @gassahara You completely misinterpret my stance, it seems. Minimal efficient abstraction is my goal as well, throughout mathematics, hardware, and software; I just don't draw any arbitrary lines. Behind the hardware design that you try to optimize for today is a programmer like you. Tomorrow you should both be engineers, working together and not playing blind catch-up with each other.
       
 (DIR) Post #A0Ufb9L4wmt1cUXeV6 by namark@qoto.org
       2020-10-24T17:30:11Z
       
       0 likes, 0 repeats
       
        @amiloradovsky @gassahara "The mpn functions are designed to be as fast as possible, not to provide a coherent calling interface." Doesn't sound very promising, though I wouldn't be surprised if GMP has very similar abstractions to what I'm talking about here. I'm saying a better version of those lowest-level abstractions should be standard. I would even strive to make the operators work with that. Coq I'm not familiar with, but it didn't seem very generic to me from your posts; it sounded like a very specialized environment, where you can allow yourself to (or even have to) ignore a whole lot of reality. I don't want to make my own perfect world, I want to code to a standard. Rather than reinventing all the wheels, my time would be better spent writing one of those completely ignored proposals for the C++ standard, if I ever get around to it.
       
 (DIR) Post #A0UgUXCHwcHHXeq7Ye by amiloradovsky@functional.cafe
       2020-10-24T17:40:12Z
       
       0 likes, 0 repeats
       
        @namark @gassahara There is no reason to care too much about standards: if you don't really care how to express something, use a standard way; if you do, and all the standard ways aren't acceptable, there is no need to comply with any of those. Coq is where you can encode the data structures and algorithms in most generality and prove properties about them. Then you can sometimes/oftentimes (depending on what kind of theory you're trying to encode) turn them into programs in other (less obscure) languages. If that's too abstract and far-fetched, and you want practicality, then it's inevitable that the readily available tools are far from ideal. (That's why I don't think anymore that ranting about industrial standards is worth it: they're valued not for their elegance, after all.)
       
 (DIR) Post #A0UlDv9k70QAhHXvzk by gassahara@mstdn.io
       2020-10-24T17:46:41Z
       
       0 likes, 0 repeats
       
       @amiloradovsky @namark I don't think the intention of the standards is human-to-human communication but human-to-compiler; standards are the way you make sure your code will compile on any machine (with that version =) ). Stroustrup is convinced that STL generics are the way to go on embedded, so there is that (it's generic and it's portable as of C++20-something?)
       
 (DIR) Post #A0UlE5UJhMoujzFVS4 by amiloradovsky@functional.cafe
       2020-10-24T17:54:00Z
       
       0 likes, 0 repeats
       
       @gassahara @namark I don't see why templates/generics/parametric modules couldn't be used in embedded applications; it's just a presentation for the programmer, so as not to maintain the same data structures for every set of parameters. CPP macros are inconvenient, too generic, and error-prone. Portability nowadays is portability between ISAs; compatibility with proprietary build tools is not, strictly speaking, a priority.
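       To illustrate the point about not maintaining the same data structure per parameter set, here is a hedged sketch (`FixedStack` is a made-up example, not a standard class): one template definition, instantiated at compile time per element type and capacity, with no heap, no RTTI, and nothing a CPP macro couldn't also emit, except that it is type-checked once rather than at every expansion site:

```cpp
#include <cstddef>

// One definition replaces a family of per-type, per-size macro
// expansions; the compiler stamps out each instantiation it needs.
template <typename T, std::size_t N>
class FixedStack {
    T data_[N];
    std::size_t size_ = 0;
public:
    bool push(const T& v) {                 // false when full: no exceptions,
        if (size_ == N) return false;       // embedded-friendly error handling
        data_[size_++] = v;
        return true;
    }
    bool pop(T& out) {
        if (size_ == 0) return false;
        out = data_[--size_];
        return true;
    }
    std::size_t size() const { return size_; }
};
```

       Usage is e.g. `FixedStack<int, 8> s;` — same source, a different concrete type per parameter set.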
       
 (DIR) Post #A0UlEF4o34M8TlOH4q by namark@qoto.org
       2020-10-24T18:33:12Z
       
       0 likes, 0 repeats
       
       @gassahara Compiler writers are also human! The intention of not only the standard but of programming languages (and mathematics) is primarily human-to-human communication. You need to be able to read what I wrote, otherwise it's useless. Yes, it's not teaching material, it's a reference, but the agreement on it is where the portability fundamentally comes from. It doesn't seem human only because it's a very big relationship. Stroustrup's friend Stepanov also convinced me that C++-style generics is the way to go with programming in general (hehe, "general"). @amiloradovsky Yes, reality is imperfect, and we must yell at it, so that others hear us and are convinced. That is all that matters. Coq still seems way beyond my comprehension: too many steps whose purpose is unclear to me. When it comes to formal systems, I think mathematicians have a lot more work to do there first, maybe using Coq.
       
 (DIR) Post #A0UltV6LNDVM3Hmcc4 by gassahara@mstdn.io
       2020-10-24T18:40:44Z
       
       0 likes, 0 repeats
       
       @namark @amiloradovsky "Compiler writers are also human!" [looks at my GCC altar: Heretics!] XD "Stroustrup's friend Stepanov..." I still don't know how that would help with performance; abstractions are nice to the human, but specific implementations are something to be aware of [Spectre haunts us still :), and my "Heartbleed" :) ] "When it comes to formal systems" That's the thing, we need to be aware of the whole of Curry-Howard; CS owes Knuth a great deal in that regard! =)
       
 (DIR) Post #A0V3QNg5h1gHo0lfqy by namark@qoto.org
       2020-10-24T21:57:00Z
       
       0 likes, 0 repeats
       
       @gassahara @amiloradovsky Look at 'em trying to dodge bugs! https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87605 Good thing there's always the friendly neighborhood standards expert to back you up.
       
       And again you disregard us poor humans, while I'm trying to convince you there is nothing else. There is only us. Abstractions are not just nice for us, they are our only choice to do anything, and we are doing everything that there is in this industry. That's why good tools for writing good abstractions are important. For example, I often see indirection or runtime type information used in C due to the lack of a proper type system or generics. Do you want userdata without indirection? Do you want callbacks without indirection? The C++ type system and generics have got you covered. And there is much more in the same vein. With C there is a general tendency to try to break away from the limitations of the language by pushing abstraction out of it, either to the runtime or to external tools (like code generators). You shouldn't be generating code; compilers generate code. Become a compiler developer instead, add your language extension to gcc (or llvm, if you are a pussy), then try to standardize it and get roasted. Do good for humanity (likely not approved by your current employer), not for your job security by writing code no one else can read. Don't do it out of altruism but out of professional pride, which would be repeatedly beaten out of you by the standards committee.
       
       Sure, be aware of your system: provide a proper initial guess and number of iterations for the Babylonian method that make sense for your system or use case, but do not implement the basic algorithm over and over again, getting it wrong half the time. I would even say write a compile-time function to compute the initial guess, instead of doing it on paper and leaving a mysterious magic number behind.
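       That Babylonian-method remark could be sketched like this (a hypothetical `isqrt`, not any standard library's): the initial guess is derived by a `constexpr` function instead of being left behind as a magic number:

```cpp
#include <cstdint>

// Compile-time initial guess: the smallest power of two whose square
// reaches x (capped at 2^16) bounds sqrt(x) from above.
constexpr std::uint32_t initial_guess(std::uint32_t x) {
    std::uint32_t g = 1;
    while (g * g < x && g < (1u << 16)) g <<= 1;
    return g;
}

// Integer square root by the Babylonian (Newton) iteration; starting
// above the root, the sequence decreases monotonically to floor(sqrt(x)).
constexpr std::uint32_t isqrt(std::uint32_t x) {
    if (x < 2) return x;
    std::uint32_t g = initial_guess(x);
    std::uint32_t next = (g + x / g) / 2;
    while (next < g) {
        g = next;
        next = (g + x / g) / 2;
    }
    return g;
}
```

       Both the guess and the whole root can then be evaluated at compile time, e.g. `static_assert(isqrt(16) == 4);`.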
       If the compiler does not optimize your obvious expressive abstraction, become a compiler developer again, or nag them until they fix it. For every barely exploitable Spectre, there are probably a dozen embarrassingly exploitable software vulnerabilities, not to mention that it is our culture of writing hardware-specific code that drove the hardware engineers to the overcomplicated designs that cause these problems, forcing them to optimize old instruction sets instead of developing new ones. It's not our fault for doing that initially; it is our fault for sticking with it till today, and planning to stick with it in the future. Yet even in our current climate, the fact that compilers (written by humans) can beat people at optimizing certain common abstractions is a testament to the effectiveness of the approach. They are even adding GPU backends for them. I for one am waiting for a new wave of Verilog backends.
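       One concrete instance of the "callbacks without indirection" point earlier in the thread, as a hedged sketch (`sum_if` is a hypothetical name): the C idiom takes a function pointer, which the compiler usually cannot inline through, while the C++ template takes the callable's concrete type as a parameter, so a lambda is inlined — an abstraction with no indirection left at runtime:

```cpp
#include <cstddef>

// C style: callback through a function pointer (runtime indirection).
int sum_if_c(const int* v, std::size_t n, bool (*pred)(int)) {
    int s = 0;
    for (std::size_t i = 0; i < n; ++i)
        if (pred(v[i])) s += v[i];
    return s;
}

// C++ style: the predicate's concrete type is a template parameter,
// so each call site gets its own specialized, inlinable instantiation.
template <typename Pred>
int sum_if(const int* v, std::size_t n, Pred pred) {
    int s = 0;
    for (std::size_t i = 0; i < n; ++i)
        if (pred(v[i])) s += v[i];
    return s;
}
```

       A caller writes e.g. `sum_if(v, n, [](int x) { return x % 2 == 0; })`, and the generated loop typically contains no call at all.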