eliben / code-for-blog
Code samples from my blog
License: The Unlicense
Just joking... but https://crates.io/crates/rstest is designed exactly for this kind of job :)
P.S. Disclaimer: I'm the rstest maintainer :)
Here, at line 11 of https://github.com/eliben/code-for-blog/blob/master/2017/async-socket-server/uv-server.c, you have #include "uv.h".
But where is the actual file?
I tried your recursive descent parser code and it worked, but when I tried:
print(p.calc("1>1!=2"))
I got this error:
Traceback (most recent call last):
File "main.py", line 452, in <module>
print(p.calc("1>1!=2"))
File "main.py", line 126, in calc
self.cur_token.val, self.cur_token.pos))
File "main.py", line 169, in _error
raise ParseError(msg)
__main__.ParseError: Unexpected token != (at #3)
Why does this happen?
int main(int argc, const char** argv) {
unsigned num_cpus = std::thread::hardware_concurrency();
std::cout << "Launching " << num_cpus << " threads\n";
// A mutex ensures orderly access to std::cout from multiple threads.
std::mutex iomutex;
std::vector<std::thread> threads(num_cpus);
for (unsigned i = 0; i < num_cpus; ++i) {
threads[i] = std::thread([&iomutex, i] {
{
// Use a lexical scope and lock_guard to safely lock the mutex only for
// the duration of std::cout usage.
std::lock_guard<std::mutex> iolock(iomutex);
std::cout << "Thread #" << i << " is running\n";
}
// Simulate important work done by the thread by sleeping for a bit...
std::this_thread::sleep_for(std::chrono::milliseconds(200));
});
}
for (auto& t : threads) {
t.join();
}
return 0;
}
This is from your blog. I was wondering about the way you create the threads based on the number of CPU cores: isn't that one thread too many?
The main function is already running on a thread, right?
Also, in threads[i] = std::thread([&iomutex, i]
what is this syntax? I couldn't find an explanation of it.
Hi,
Thanks for the code. Shouldn't size_t be std::size_t throughout tuple.h? For example, shouldn't
template <size_t, class> struct elem_type_holder;
be
template <std::size_t, class> struct elem_type_holder;
Hello,
I am trying to run your code for distributed Python.
When I run it "manually" everything goes fine (the server starts and I can then connect the clients to it). However, the server never starts when I try to launch the whole thing via SBATCH / srun. I can see the Python script starting, and it reaches the manager.start() line (a print added on the previous line is printed), but it never goes on to the following line (I have no error message, but the print on the following line is never displayed and the clients cannot connect to the server).
I have also tried to log into the nodes and then run the script "manually" from there: in that case everything is OK, so it is really linked to how srun launches the Python script... do you have any idea of what can go wrong here?
Thanks in advance.
I'm following your excellent tutorial on writing a Go REST server. Part 3 uses the gin framework.
In the gin version of createTaskHandler, only c.String(http.StatusBadRequest, err.Error()) is called if err isn't nil. Should there be a return statement at the end of the error handling block? In other words:
func (ts *taskServer) createTaskHandler(c *gin.Context) {
type RequestTask struct {
Text string `json:"text"`
Tags []string `json:"tags"`
Due time.Time `json:"due"`
}
var rt RequestTask
if err := c.ShouldBindJSON(&rt); err != nil {
c.String(http.StatusBadRequest, err.Error())
//
// Should there be a `return` statement here?
//
}
id := ts.store.CreateTask(rt.Text, rt.Tags, rt.Due)
c.JSON(http.StatusOK, gin.H{"Id": id})
}
Your rss feeds only go back 10 entries, possible to get a new feed for all of them which go way back?
i.e. new feeds (or modify old feeds, not sure how this effects the current subscribers).
eg. https://eli.thegreenplace.net/feeds/go.atom.xml and now https://eli.thegreenplace.net/feeds/go.full.atom.xml
https://eli.thegreenplace.net/feeds/all.atom.xml and https://eli.thegreenplace.net/feeds/all.full.atom.xml
Thanks for this - all sorts of great stuff in here. Obviously this isn't an issue, per se, but it felt like a decent way to ask the question!
I've been looking at implementing graceful shutdown for a project we're working on, but struggled to get your demo to run in the way I expected. I've created:
shutdown.go
:
// This version expects all clients to close their connections before it
// successfully returns from Stop().
//
// Eli Bendersky [https://eli.thegreenplace.net]
// This code is in the public domain.
package main
import (
"io"
"log"
"net"
"sync"
)
type Server struct {
listener net.Listener
quit chan interface{}
wg sync.WaitGroup
}
func NewServer(addr string) *Server {
s := &Server{
quit: make(chan interface{}),
}
l, err := net.Listen("tcp", addr)
if err != nil {
log.Fatal(err)
}
s.listener = l
s.wg.Add(1)
go s.serve()
return s
}
func (s *Server) Stop() {
close(s.quit)
s.listener.Close()
s.wg.Wait()
}
func (s *Server) serve() {
defer s.wg.Done()
for {
conn, err := s.listener.Accept()
if err != nil {
select {
case <-s.quit:
return
default:
log.Println("accept error", err)
}
} else {
s.wg.Add(1)
go func() {
s.handleConnection(conn)
s.wg.Done()
}()
}
}
}
func (s *Server) handleConnection(conn net.Conn) {
defer conn.Close()
buf := make([]byte, 2048)
for {
n, err := conn.Read(buf)
if err != nil && err != io.EOF {
log.Println("read error", err)
return
}
if n == 0 {
return
}
log.Printf("received from %v: %s", conn.RemoteAddr(), string(buf[:n]))
}
}
func init() {
log.SetFlags(log.Ltime | log.Lmicroseconds)
}
func main() {
log.Println("Starting")
s := NewServer("0.0.0.0:1234")
// do whatever here...
log.Println("Whatever")
log.Println("Stopping")
s.Stop()
log.Println("Stopped")
}
It's a direct copy/paste from this repo, with a main() function invoking it (and a couple of extra bits of logging).
>> go run shutdown.go
What I expected - I expected a blocking server, listening on :1234
What happens - it completes immediately:
➜ go run shutdown.go
19:34:44.492644 Starting
19:34:44.493001 Whatever
19:34:44.493002 Stopping
19:34:44.493036 Stopped
➜
What am I missing? What should the // do whatever here... actually contain if I just want a server listening until I SIGKILL the process? I presume I need something to block, but based on your description I thought that's what the code was doing?
I was trying the code from regex_fsm and it fails to match if my pattern is "12ab*c" and the input string is "12abbc1".
To check whether we have reached the end of the pattern, I modified the DFA::simulate function as follows:
int DFA::simulate(std::string input)
{
uint32_t currState = start;
int count = 0;
for (auto i : input)
{
count++; // count number of chars matched
transition t = std::make_pair(currState, i);
if (transTable.find(t) == transTable.end())
{
// added this if block
if (final.find(currState) != final.end())
{
return count - 1; // found
}
return -1; // failure
}
currState = transTable[t];
}
if (final.find(currState) != final.end())
{
return count; // found
}
else
{
return -1; // failure
}
}
but it still fails, as the transition for the last character in the input string, i.e. "1", is present in DFA::trans_table.
Any suggestions?
Firstly, thanks for your awesome blog articles, I really enjoyed reading them.
Recently I tried to run your bfjit code. I found that simpleasmjit.cpp compiles successfully on my Mac, but when I ran it, I got a segmentation fault, caused by calling the asmjit-emitted code.
I am not familiar with assembly code. I have read the code (simpleasmjit), but I couldn't find any system-specific feature in that file. I am not sure whether simpleasmjit is using a Linux-specific feature or there is a "bug" in asmjit. Could you give me some advice on how to dive into it?
I installed pygments, yet I am unable to import lexer.
I tried import pygments.lexer as lexer, yet an error was thrown:
File "unify.py", line 210, in <module>
subst = unify(parse_term(s1), parse_term(s2), {})
File "unify.py", line 57, in parse_term
parser = TermParser(s)
File "unify.py", line 75, in __init__
self.lexer = lexer.Lexer(lexrules, skip_whitespace=True)
TypeError: __init__() takes 1 positional argument but 2 were given
Kindly help me out with running the code properly.
In the function on_peer_ready_send from the file select-server.c:
fd_status_t on_peer_ready_send(int sockfd) {
assert(sockfd < MAXFDs);
peer_state_t* peerstate = &global_state[sockfd];
if (peerstate->sendptr >= peerstate->sendbuf_end) {
// Nothing to send.
return fd_status_RW;
}
int sendlen = peerstate->sendbuf_end - peerstate->sendptr;
int nsent = send(sockfd, peerstate->sendbuf, sendlen, 0);
// ...
if (nsent < sendlen) {
peerstate->sendptr += nsent;
return fd_status_W;
} else {
// Everything was sent successfully; reset the send queue.
peerstate->sendptr = 0;
peerstate->sendbuf_end = 0;
// ...
}
}
I think this line:
int nsent = send(sockfd, peerstate->sendbuf, sendlen, 0);
should be:
int nsent = send(sockfd, peerstate->sendbuf+peerstate->sendptr, sendlen, 0);
I am using the proxy to intercept my S3 POST request before sending it to its destination.
For this, I need to read the request body.
If I try to read the body, it just hangs and later gives "Unexpected EOF", which happens when the body has already been read somewhere upstream. When I look at the proxy, I see that it reads the request here, and my suspicion is that request.Body is not getting closed.
How can I read the request body here?
Thanks
I want to test the performance of adding bufio.NewReader to the Go standard library XML streamer, but I don't have the XML you used. Can you post it somewhere?
I think there is an error in the following line of code when testing for an error in the returned value.
I think it should be:
if (ptrace(PTRACE_SINGLESTEP, pid, 0, 0) < 0) {
Umm... in 2019/xml-stream/c-libxmlsax-count/c-libxmlsax-count.c, line 42, don't you mean len + 1 instead of len - 1?
(Also, you might as well check inLocation before going to the trouble of copying the string.)
Hello, eliben. I read your excellent blog post about how to benchmark Linux OS thread switching, and carefully read and ran the relevant code under the threadoverhead dir tree.
I have a question about why the iteration number doesn't need to be multiplied by 2 in thread-pipe-msgpersec.c and channel-msgpersec.go. I think each thread/goroutine is scheduled NUM_ITERATIONS/niters times, so when calculating each scheduling delay, shouldn't the total number of scheduled times be the sum of the number of times the two threads have been scheduled?
Hi,
I changed the code in qt_mpl_bars.py to be compatible with PyQt5.
Would you be able to merge it if I send you a pull request?
Thanks,
Gerald
Hi eliben, I'm learning from your code at https://github.com/eliben/code-for-blog/tree/master/2017/async-socket-server/uv-server.c. uv-server.c has #include "uv.h", but I can't find the uv.h file.
Can you show me where the uv.h file is?
Hi,
The post was a nice read about thread context switch overheads.
However, I have one question. Even though you measure the pipe overhead, you don't subtract it from the overall context switch overhead to get just the switching cost. Shouldn't it be (elapsed / (float)nswitches) - overhead here, where the value of overhead is double overhead = measure_self_pipe(NUM_ITERATIONS);? lmbench also subtracts the overhead from the overall context switch cost before printing (link).
It would be really helpful if you could explain why you are not subtracting the overhead from the overall context switch cost.