Aug 13 00:46:55.831092 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:42:48 -00 2025
Aug 13 00:46:55.831135 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 00:46:55.831148 kernel: BIOS-provided physical RAM map:
Aug 13 00:46:55.831157 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Aug 13 00:46:55.831165 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Aug 13 00:46:55.831174 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 00:46:55.831184 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Aug 13 00:46:55.831211 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Aug 13 00:46:55.831222 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 00:46:55.831231 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Aug 13 00:46:55.831240 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 00:46:55.831249 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 00:46:55.831257 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 13 00:46:55.831267 kernel: NX (Execute Disable) protection: active
Aug 13 00:46:55.831282 kernel: APIC: Static calls initialized
Aug 13 00:46:55.831292 kernel: SMBIOS 2.8 present.
Aug 13 00:46:55.831308 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Aug 13 00:46:55.831318 kernel: DMI: Memory slots populated: 1/1
Aug 13 00:46:55.831328 kernel: Hypervisor detected: KVM
Aug 13 00:46:55.831338 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 00:46:55.831348 kernel: kvm-clock: using sched offset of 4632109514 cycles
Aug 13 00:46:55.831359 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 00:46:55.831370 kernel: tsc: Detected 2794.750 MHz processor
Aug 13 00:46:55.831384 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 00:46:55.831395 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 00:46:55.831405 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Aug 13 00:46:55.831415 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 13 00:46:55.831425 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 00:46:55.831435 kernel: Using GB pages for direct mapping
Aug 13 00:46:55.831445 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:46:55.831456 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Aug 13 00:46:55.831466 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:55.831479 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:55.831490 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:55.831500 kernel: ACPI: FACS 0x000000009CFE0000 000040
Aug 13 00:46:55.831510 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:55.831520 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:55.831530 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:55.831540 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:55.831551 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Aug 13 00:46:55.831568 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Aug 13 00:46:55.831578 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Aug 13 00:46:55.831589 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Aug 13 00:46:55.831599 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Aug 13 00:46:55.831610 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Aug 13 00:46:55.831620 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Aug 13 00:46:55.831633 kernel: No NUMA configuration found
Aug 13 00:46:55.831655 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Aug 13 00:46:55.831669 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Aug 13 00:46:55.831696 kernel: Zone ranges:
Aug 13 00:46:55.831724 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 00:46:55.831747 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Aug 13 00:46:55.831757 kernel: Normal empty
Aug 13 00:46:55.831768 kernel: Device empty
Aug 13 00:46:55.831779 kernel: Movable zone start for each node
Aug 13 00:46:55.831793 kernel: Early memory node ranges
Aug 13 00:46:55.831832 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 00:46:55.831842 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Aug 13 00:46:55.831853 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Aug 13 00:46:55.831863 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 00:46:55.831873 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 00:46:55.831887 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Aug 13 00:46:55.831897 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 00:46:55.831909 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 00:46:55.831920 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 00:46:55.831934 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 00:46:55.831944 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 00:46:55.831954 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 00:46:55.831964 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 00:46:55.831974 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 00:46:55.831984 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 00:46:55.831994 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 00:46:55.832005 kernel: TSC deadline timer available
Aug 13 00:46:55.832015 kernel: CPU topo: Max. logical packages: 1
Aug 13 00:46:55.832028 kernel: CPU topo: Max. logical dies: 1
Aug 13 00:46:55.832038 kernel: CPU topo: Max. dies per package: 1
Aug 13 00:46:55.832048 kernel: CPU topo: Max. threads per core: 1
Aug 13 00:46:55.832058 kernel: CPU topo: Num. cores per package: 4
Aug 13 00:46:55.832068 kernel: CPU topo: Num. threads per package: 4
Aug 13 00:46:55.832078 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Aug 13 00:46:55.832089 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 00:46:55.832099 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 00:46:55.832109 kernel: kvm-guest: setup PV sched yield
Aug 13 00:46:55.832122 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Aug 13 00:46:55.832132 kernel: Booting paravirtualized kernel on KVM
Aug 13 00:46:55.832143 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 00:46:55.832154 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Aug 13 00:46:55.832164 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Aug 13 00:46:55.832174 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Aug 13 00:46:55.832183 kernel: pcpu-alloc: [0] 0 1 2 3
Aug 13 00:46:55.832193 kernel: kvm-guest: PV spinlocks enabled
Aug 13 00:46:55.832311 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 00:46:55.832329 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 00:46:55.832340 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:46:55.832350 kernel: random: crng init done
Aug 13 00:46:55.832361 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:46:55.832371 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:46:55.832382 kernel: Fallback order for Node 0: 0
Aug 13 00:46:55.832392 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Aug 13 00:46:55.832403 kernel: Policy zone: DMA32
Aug 13 00:46:55.832417 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:46:55.832428 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 13 00:46:55.832438 kernel: ftrace: allocating 40098 entries in 157 pages
Aug 13 00:46:55.832449 kernel: ftrace: allocated 157 pages with 5 groups
Aug 13 00:46:55.832460 kernel: Dynamic Preempt: voluntary
Aug 13 00:46:55.832470 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:46:55.832482 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:46:55.832492 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 13 00:46:55.832503 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:46:55.832522 kernel: Rude variant of Tasks RCU enabled.
Aug 13 00:46:55.832533 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:46:55.832543 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:46:55.832554 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 13 00:46:55.832565 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 00:46:55.832575 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 00:46:55.832586 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 00:46:55.832597 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Aug 13 00:46:55.832608 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 00:46:55.832630 kernel: Console: colour VGA+ 80x25
Aug 13 00:46:55.832641 kernel: printk: legacy console [ttyS0] enabled
Aug 13 00:46:55.832652 kernel: ACPI: Core revision 20240827
Aug 13 00:46:55.832666 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 00:46:55.832676 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 00:46:55.832687 kernel: x2apic enabled
Aug 13 00:46:55.832697 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 00:46:55.832708 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 00:46:55.832719 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 00:46:55.832743 kernel: kvm-guest: setup PV IPIs
Aug 13 00:46:55.832754 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 00:46:55.832765 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Aug 13 00:46:55.832775 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Aug 13 00:46:55.832786 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 00:46:55.832796 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 00:46:55.832806 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 00:46:55.832817 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 00:46:55.832830 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 00:46:55.832840 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 00:46:55.832849 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Aug 13 00:46:55.832857 kernel: RETBleed: Mitigation: untrained return thunk
Aug 13 00:46:55.832865 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 00:46:55.832876 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 00:46:55.832887 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 00:46:55.832898 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 00:46:55.832912 kernel: x86/bugs: return thunk changed
Aug 13 00:46:55.832923 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 00:46:55.832934 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 00:46:55.832945 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 00:46:55.832956 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 00:46:55.832971 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 00:46:55.832999 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Aug 13 00:46:55.833020 kernel: Freeing SMP alternatives memory: 32K
Aug 13 00:46:55.833032 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:46:55.833047 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 13 00:46:55.833057 kernel: landlock: Up and running.
Aug 13 00:46:55.833068 kernel: SELinux: Initializing.
Aug 13 00:46:55.833079 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:46:55.833094 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:46:55.833105 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Aug 13 00:46:55.833115 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 00:46:55.833125 kernel: ... version: 0
Aug 13 00:46:55.833136 kernel: ... bit width: 48
Aug 13 00:46:55.833150 kernel: ... generic registers: 6
Aug 13 00:46:55.833161 kernel: ... value mask: 0000ffffffffffff
Aug 13 00:46:55.833172 kernel: ... max period: 00007fffffffffff
Aug 13 00:46:55.833183 kernel: ... fixed-purpose events: 0
Aug 13 00:46:55.833194 kernel: ... event mask: 000000000000003f
Aug 13 00:46:55.833220 kernel: signal: max sigframe size: 1776
Aug 13 00:46:55.833231 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:46:55.833242 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 00:46:55.833253 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 13 00:46:55.833268 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:46:55.833278 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 00:46:55.833288 kernel: .... node #0, CPUs: #1 #2 #3
Aug 13 00:46:55.833299 kernel: smp: Brought up 1 node, 4 CPUs
Aug 13 00:46:55.833309 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Aug 13 00:46:55.833323 kernel: Memory: 2428912K/2571752K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54444K init, 2524K bss, 136904K reserved, 0K cma-reserved)
Aug 13 00:46:55.833336 kernel: devtmpfs: initialized
Aug 13 00:46:55.833348 kernel: x86/mm: Memory block size: 128MB
Aug 13 00:46:55.833361 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:46:55.833378 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 13 00:46:55.833406 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:46:55.833431 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:46:55.833444 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:46:55.833458 kernel: audit: type=2000 audit(1755046012.639:1): state=initialized audit_enabled=0 res=1
Aug 13 00:46:55.833470 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:46:55.833483 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 00:46:55.833496 kernel: cpuidle: using governor menu
Aug 13 00:46:55.833509 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:46:55.833526 kernel: dca service started, version 1.12.1
Aug 13 00:46:55.833537 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Aug 13 00:46:55.833547 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 00:46:55.833558 kernel: PCI: Using configuration type 1 for base access
Aug 13 00:46:55.833568 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 00:46:55.833579 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:46:55.833589 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 00:46:55.833598 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:46:55.833605 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 00:46:55.833616 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:46:55.833624 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:46:55.833632 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:46:55.833639 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:46:55.833647 kernel: ACPI: Interpreter enabled
Aug 13 00:46:55.833655 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 00:46:55.833663 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 00:46:55.833671 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 00:46:55.833679 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 00:46:55.833689 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 00:46:55.833696 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 00:46:55.833936 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:46:55.834132 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 00:46:55.834305 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 00:46:55.834324 kernel: PCI host bridge to bus 0000:00
Aug 13 00:46:55.834478 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 00:46:55.834608 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 00:46:55.834724 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 00:46:55.834849 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Aug 13 00:46:55.835006 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 00:46:55.835191 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Aug 13 00:46:55.835358 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 00:46:55.835532 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Aug 13 00:46:55.835680 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Aug 13 00:46:55.835815 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Aug 13 00:46:55.835936 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Aug 13 00:46:55.836057 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Aug 13 00:46:55.836281 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 00:46:55.836470 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Aug 13 00:46:55.836631 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Aug 13 00:46:55.836801 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Aug 13 00:46:55.836957 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Aug 13 00:46:55.837132 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Aug 13 00:46:55.837311 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Aug 13 00:46:55.837507 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Aug 13 00:46:55.837667 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Aug 13 00:46:55.837864 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Aug 13 00:46:55.838024 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Aug 13 00:46:55.838181 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Aug 13 00:46:55.838421 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Aug 13 00:46:55.838580 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Aug 13 00:46:55.838769 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Aug 13 00:46:55.838932 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 00:46:55.839133 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Aug 13 00:46:55.839315 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Aug 13 00:46:55.839499 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Aug 13 00:46:55.839685 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Aug 13 00:46:55.839860 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Aug 13 00:46:55.839876 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 00:46:55.839894 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 00:46:55.839905 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 00:46:55.839917 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 00:46:55.839928 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 00:46:55.839940 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 00:46:55.839951 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 00:46:55.839962 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 00:46:55.839973 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 00:46:55.839985 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 00:46:55.839999 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 00:46:55.840010 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 00:46:55.840021 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 00:46:55.840033 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 00:46:55.840044 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 00:46:55.840055 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 00:46:55.840066 kernel: iommu: Default domain type: Translated
Aug 13 00:46:55.840077 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 00:46:55.840089 kernel: PCI: Using ACPI for IRQ routing
Aug 13 00:46:55.840103 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 00:46:55.840114 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Aug 13 00:46:55.840125 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Aug 13 00:46:55.840322 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 00:46:55.840491 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 00:46:55.840644 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 00:46:55.840659 kernel: vgaarb: loaded
Aug 13 00:46:55.840671 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 00:46:55.840683 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 00:46:55.840699 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 00:46:55.840710 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:46:55.840722 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:46:55.840742 kernel: pnp: PnP ACPI init
Aug 13 00:46:55.840936 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 00:46:55.840954 kernel: pnp: PnP ACPI: found 6 devices
Aug 13 00:46:55.840966 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 00:46:55.840977 kernel: NET: Registered PF_INET protocol family
Aug 13 00:46:55.840992 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:46:55.841003 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:46:55.841014 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:46:55.841025 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:46:55.841037 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 00:46:55.841048 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:46:55.841059 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:46:55.841071 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:46:55.841085 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:46:55.841096 kernel: NET: Registered PF_XDP protocol family
Aug 13 00:46:55.841267 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 00:46:55.841443 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 00:46:55.841604 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 00:46:55.841772 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Aug 13 00:46:55.841916 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 00:46:55.842056 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Aug 13 00:46:55.842068 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:46:55.842082 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Aug 13 00:46:55.842090 kernel: Initialise system trusted keyrings
Aug 13 00:46:55.842099 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:46:55.842107 kernel: Key type asymmetric registered
Aug 13 00:46:55.842115 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:46:55.842123 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 00:46:55.842131 kernel: io scheduler mq-deadline registered
Aug 13 00:46:55.842139 kernel: io scheduler kyber registered
Aug 13 00:46:55.842147 kernel: io scheduler bfq registered
Aug 13 00:46:55.842157 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 00:46:55.842166 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 00:46:55.842174 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 00:46:55.842182 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Aug 13 00:46:55.842191 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:46:55.842218 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 00:46:55.842230 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 00:46:55.842240 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 00:46:55.842253 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 00:46:55.842431 kernel: rtc_cmos 00:04: RTC can wake from S4
Aug 13 00:46:55.842444 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 00:46:55.842559 kernel: rtc_cmos 00:04: registered as rtc0
Aug 13 00:46:55.842674 kernel: rtc_cmos 00:04: setting system clock to 2025-08-13T00:46:55 UTC (1755046015)
Aug 13 00:46:55.842798 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 00:46:55.842810 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 00:46:55.842818 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:46:55.842826 kernel: Segment Routing with IPv6
Aug 13 00:46:55.842837 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:46:55.842846 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:46:55.842854 kernel: Key type dns_resolver registered
Aug 13 00:46:55.842862 kernel: IPI shorthand broadcast: enabled
Aug 13 00:46:55.842870 kernel: sched_clock: Marking stable (3419004161, 118789396)->(3584419886, -46626329)
Aug 13 00:46:55.842878 kernel: registered taskstats version 1
Aug 13 00:46:55.842886 kernel: Loading compiled-in X.509 certificates
Aug 13 00:46:55.842895 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: dee0b464d3f7f8d09744a2392f69dde258bc95c0'
Aug 13 00:46:55.842903 kernel: Demotion targets for Node 0: null
Aug 13 00:46:55.842913 kernel: Key type .fscrypt registered
Aug 13 00:46:55.842921 kernel: Key type fscrypt-provisioning registered
Aug 13 00:46:55.842929 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:46:55.842937 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:46:55.842945 kernel: ima: No architecture policies found
Aug 13 00:46:55.842953 kernel: clk: Disabling unused clocks
Aug 13 00:46:55.842961 kernel: Warning: unable to open an initial console.
Aug 13 00:46:55.842969 kernel: Freeing unused kernel image (initmem) memory: 54444K
Aug 13 00:46:55.842979 kernel: Write protecting the kernel read-only data: 24576k
Aug 13 00:46:55.842988 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Aug 13 00:46:55.842996 kernel: Run /init as init process
Aug 13 00:46:55.843003 kernel: with arguments:
Aug 13 00:46:55.843011 kernel: /init
Aug 13 00:46:55.843019 kernel: with environment:
Aug 13 00:46:55.843027 kernel: HOME=/
Aug 13 00:46:55.843035 kernel: TERM=linux
Aug 13 00:46:55.843043 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:46:55.843052 systemd[1]: Successfully made /usr/ read-only.
Aug 13 00:46:55.843066 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 00:46:55.843088 systemd[1]: Detected virtualization kvm.
Aug 13 00:46:55.843096 systemd[1]: Detected architecture x86-64.
Aug 13 00:46:55.843104 systemd[1]: Running in initrd.
Aug 13 00:46:55.843113 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:46:55.843124 systemd[1]: Hostname set to <localhost>.
Aug 13 00:46:55.843133 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:46:55.843141 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:46:55.843150 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:46:55.843159 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:46:55.843168 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 00:46:55.843177 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:46:55.843186 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 00:46:55.843212 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 00:46:55.843226 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 00:46:55.843239 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 00:46:55.843251 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:46:55.843263 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:46:55.843273 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:46:55.843285 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:46:55.843293 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:46:55.843302 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:46:55.843311 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:46:55.843321 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:46:55.843332 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 00:46:55.843343 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 13 00:46:55.843353 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:46:55.843365 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:46:55.843378 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:46:55.843389 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:46:55.843400 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 00:46:55.843410 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:46:55.843421 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 00:46:55.843433 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Aug 13 00:46:55.843447 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:46:55.843458 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:46:55.843469 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:46:55.843480 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:46:55.843494 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 00:46:55.843515 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:46:55.843530 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:46:55.843545 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:46:55.843584 systemd-journald[220]: Collecting audit messages is disabled.
Aug 13 00:46:55.843616 systemd-journald[220]: Journal started
Aug 13 00:46:55.843648 systemd-journald[220]: Runtime Journal (/run/log/journal/64b2c7f4b6264b4280caad93b424c279) is 6M, max 48.6M, 42.5M free.
Aug 13 00:46:55.833512 systemd-modules-load[221]: Inserted module 'overlay'
Aug 13 00:46:55.890091 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:46:55.890128 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:46:55.890145 kernel: Bridge firewalling registered
Aug 13 00:46:55.864627 systemd-modules-load[221]: Inserted module 'br_netfilter'
Aug 13 00:46:55.881469 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:46:55.882381 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:46:55.882923 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:46:55.884423 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:46:55.885478 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:46:55.888323 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:46:55.903055 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:46:55.915009 systemd-tmpfiles[238]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Aug 13 00:46:55.918421 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:46:55.921171 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:46:55.922959 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:46:55.925621 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:46:55.928281 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 00:46:55.931393 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 00:46:55.969606 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 00:46:55.991520 systemd-resolved[262]: Positive Trust Anchors:
Aug 13 00:46:55.991550 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:46:55.991592 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:46:55.994510 systemd-resolved[262]: Defaulting to hostname 'linux'.
Aug 13 00:46:55.995807 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:46:56.003523 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:46:56.087244 kernel: SCSI subsystem initialized
Aug 13 00:46:56.096244 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:46:56.108231 kernel: iscsi: registered transport (tcp)
Aug 13 00:46:56.134256 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:46:56.134290 kernel: QLogic iSCSI HBA Driver
Aug 13 00:46:56.155592 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 00:46:56.177836 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 00:46:56.179334 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 00:46:56.245252 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:46:56.247370 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 00:46:56.353240 kernel: raid6: avx2x4 gen() 29499 MB/s
Aug 13 00:46:56.370227 kernel: raid6: avx2x2 gen() 30130 MB/s
Aug 13 00:46:56.387322 kernel: raid6: avx2x1 gen() 25546 MB/s
Aug 13 00:46:56.387345 kernel: raid6: using algorithm avx2x2 gen() 30130 MB/s
Aug 13 00:46:56.405244 kernel: raid6: .... xor() 19631 MB/s, rmw enabled
Aug 13 00:46:56.405305 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 00:46:56.426225 kernel: xor: automatically using best checksumming function avx
Aug 13 00:46:56.598244 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 00:46:56.606503 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:46:56.609442 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:46:56.639597 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Aug 13 00:46:56.645143 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:46:56.705894 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 00:46:56.741487 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation
Aug 13 00:46:56.769701 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:46:56.771051 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:46:56.853799 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:46:56.878148 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 00:46:56.901231 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 00:46:56.906217 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Aug 13 00:46:56.914246 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Aug 13 00:46:56.916493 kernel: AES CTR mode by8 optimization enabled
Aug 13 00:46:56.920566 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 13 00:46:56.937330 kernel: libata version 3.00 loaded.
Aug 13 00:46:56.940522 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:46:56.946904 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 00:46:56.946929 kernel: GPT:9289727 != 19775487
Aug 13 00:46:56.946952 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 00:46:56.946966 kernel: GPT:9289727 != 19775487
Aug 13 00:46:56.946979 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 00:46:56.946992 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:46:56.940694 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:46:56.948308 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:46:56.951605 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:46:56.954629 kernel: ahci 0000:00:1f.2: version 3.0
Aug 13 00:46:56.954834 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 13 00:46:56.954846 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Aug 13 00:46:56.956940 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Aug 13 00:46:56.957145 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 13 00:46:56.958191 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 13 00:46:56.963250 kernel: scsi host0: ahci
Aug 13 00:46:56.975944 kernel: scsi host1: ahci
Aug 13 00:46:56.976467 kernel: scsi host2: ahci
Aug 13 00:46:56.976686 kernel: scsi host3: ahci
Aug 13 00:46:56.981217 kernel: scsi host4: ahci
Aug 13 00:46:56.989126 kernel: scsi host5: ahci
Aug 13 00:46:56.989323 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0
Aug 13 00:46:56.989336 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0
Aug 13 00:46:56.989347 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0
Aug 13 00:46:56.989368 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0
Aug 13 00:46:56.989378 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0
Aug 13 00:46:56.989388 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0
Aug 13 00:46:57.005920 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 13 00:46:57.049327 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 13 00:46:57.049673 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:46:57.066721 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 13 00:46:57.066831 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 13 00:46:57.078968 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 00:46:57.081588 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 00:46:57.296399 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 00:46:57.296441 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 13 00:46:57.297255 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 00:46:57.297339 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 00:46:57.298230 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Aug 13 00:46:57.299225 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 13 00:46:57.300244 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Aug 13 00:46:57.300265 kernel: ata3.00: applying bridge limits
Aug 13 00:46:57.301257 kernel: ata3.00: configured for UDMA/100
Aug 13 00:46:57.303226 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Aug 13 00:46:57.350241 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Aug 13 00:46:57.350539 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 13 00:46:57.376413 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Aug 13 00:46:57.706188 disk-uuid[634]: Primary Header is updated.
Aug 13 00:46:57.706188 disk-uuid[634]: Secondary Entries is updated.
Aug 13 00:46:57.706188 disk-uuid[634]: Secondary Header is updated.
Aug 13 00:46:57.709618 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:46:57.788800 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:46:57.808137 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:46:57.809424 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:46:57.811609 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:46:57.814442 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 00:46:57.844485 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:46:58.747271 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:46:58.748391 disk-uuid[639]: The operation has completed successfully.
Aug 13 00:46:58.783480 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 00:46:58.783602 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 00:46:58.828277 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 00:46:58.855400 sh[663]: Success
Aug 13 00:46:58.877264 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:46:58.877347 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:46:58.877363 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Aug 13 00:46:58.917267 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Aug 13 00:46:58.953476 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 00:46:58.957663 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 00:46:58.979477 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 00:46:58.987688 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Aug 13 00:46:58.987734 kernel: BTRFS: device fsid 0c0338fb-9434-41c1-99a2-737cbe2351c4 devid 1 transid 44 /dev/mapper/usr (253:0) scanned by mount (675)
Aug 13 00:46:58.989267 kernel: BTRFS info (device dm-0): first mount of filesystem 0c0338fb-9434-41c1-99a2-737cbe2351c4
Aug 13 00:46:58.989296 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:46:58.991236 kernel: BTRFS info (device dm-0): using free-space-tree
Aug 13 00:46:58.995546 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 00:46:58.997826 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Aug 13 00:46:59.000088 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 00:46:59.002764 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 00:46:59.005766 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 00:46:59.062476 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (710)
Aug 13 00:46:59.076480 kernel: BTRFS info (device vda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 00:46:59.076563 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:46:59.078739 kernel: BTRFS info (device vda6): using free-space-tree
Aug 13 00:46:59.107723 kernel: BTRFS info (device vda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 00:46:59.109484 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 00:46:59.112689 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 00:46:59.350175 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:46:59.367713 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 00:46:59.460406 ignition[775]: Ignition 2.21.0
Aug 13 00:46:59.460429 ignition[775]: Stage: fetch-offline
Aug 13 00:46:59.460471 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:59.460484 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:46:59.460597 ignition[775]: parsed url from cmdline: ""
Aug 13 00:46:59.460602 ignition[775]: no config URL provided
Aug 13 00:46:59.460608 ignition[775]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:46:59.460620 ignition[775]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:46:59.460662 ignition[775]: op(1): [started] loading QEMU firmware config module
Aug 13 00:46:59.460668 ignition[775]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 13 00:46:59.482411 ignition[775]: op(1): [finished] loading QEMU firmware config module
Aug 13 00:46:59.482469 ignition[775]: QEMU firmware config was not found. Ignoring...
Aug 13 00:46:59.541356 ignition[775]: parsing config with SHA512: 8dc8ba348038f01f275d47e7428989733e2c40a6620c2a5ea283fd3b7e75d657f69b8b846c55086dad1ec2baddd81937e789ad9ec63c500255e37eb46cfb5ed3
Aug 13 00:46:59.549489 unknown[775]: fetched base config from "system"
Aug 13 00:46:59.550587 unknown[775]: fetched user config from "qemu"
Aug 13 00:46:59.551814 ignition[775]: fetch-offline: fetch-offline passed
Aug 13 00:46:59.552785 ignition[775]: Ignition finished successfully
Aug 13 00:46:59.671594 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:46:59.733240 systemd-networkd[848]: lo: Link UP
Aug 13 00:46:59.733257 systemd-networkd[848]: lo: Gained carrier
Aug 13 00:46:59.735246 systemd-networkd[848]: Enumeration completed
Aug 13 00:46:59.735887 systemd-networkd[848]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:46:59.735893 systemd-networkd[848]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:46:59.748446 systemd-networkd[848]: eth0: Link UP
Aug 13 00:46:59.750232 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:46:59.750764 systemd-networkd[848]: eth0: Gained carrier
Aug 13 00:46:59.750785 systemd-networkd[848]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:46:59.777903 systemd[1]: Reached target network.target - Network.
Aug 13 00:46:59.795656 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 13 00:46:59.804777 systemd-networkd[848]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 13 00:46:59.806528 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 00:46:59.897673 ignition[857]: Ignition 2.21.0
Aug 13 00:46:59.897696 ignition[857]: Stage: kargs
Aug 13 00:46:59.897858 ignition[857]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:59.897871 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:46:59.904012 ignition[857]: kargs: kargs passed
Aug 13 00:46:59.904090 ignition[857]: Ignition finished successfully
Aug 13 00:46:59.913427 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 00:46:59.916003 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 00:47:00.004435 ignition[866]: Ignition 2.21.0
Aug 13 00:47:00.004456 ignition[866]: Stage: disks
Aug 13 00:47:00.005994 ignition[866]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:47:00.006013 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:47:00.007427 ignition[866]: disks: disks passed
Aug 13 00:47:00.007488 ignition[866]: Ignition finished successfully
Aug 13 00:47:00.016427 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 00:47:00.020173 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 00:47:00.021895 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 00:47:00.023343 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:47:00.023444 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:47:00.026826 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:47:00.031596 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 00:47:00.084468 systemd-fsck[875]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Aug 13 00:47:00.100992 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 00:47:00.103815 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 00:47:00.381907 kernel: EXT4-fs (vda9): mounted filesystem 069caac6-7833-4acd-8940-01a7ff7d1281 r/w with ordered data mode. Quota mode: none.
Aug 13 00:47:00.385716 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 00:47:00.390013 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:47:00.402615 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:47:00.407626 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 00:47:00.412114 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 00:47:00.412178 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 00:47:00.412236 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:47:00.427153 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 00:47:00.431013 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 00:47:00.438633 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (883)
Aug 13 00:47:00.441753 kernel: BTRFS info (device vda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 00:47:00.441834 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:47:00.441849 kernel: BTRFS info (device vda6): using free-space-tree
Aug 13 00:47:00.449433 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:47:00.525099 initrd-setup-root[907]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 00:47:00.548440 initrd-setup-root[914]: cut: /sysroot/etc/group: No such file or directory
Aug 13 00:47:00.562674 initrd-setup-root[921]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 00:47:00.573782 initrd-setup-root[928]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 00:47:00.881686 systemd-networkd[848]: eth0: Gained IPv6LL
Aug 13 00:47:00.916724 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 00:47:00.928498 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 00:47:00.940691 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 00:47:01.017663 kernel: BTRFS info (device vda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 00:47:01.018209 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 00:47:01.087511 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 00:47:01.173006 ignition[996]: INFO : Ignition 2.21.0
Aug 13 00:47:01.173006 ignition[996]: INFO : Stage: mount
Aug 13 00:47:01.180078 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:47:01.180078 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:47:01.180078 ignition[996]: INFO : mount: mount passed
Aug 13 00:47:01.180078 ignition[996]: INFO : Ignition finished successfully
Aug 13 00:47:01.183619 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 00:47:01.200767 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 00:47:01.381496 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:47:01.406248 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1010)
Aug 13 00:47:01.408595 kernel: BTRFS info (device vda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 00:47:01.408624 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:47:01.408639 kernel: BTRFS info (device vda6): using free-space-tree
Aug 13 00:47:01.413009 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:47:01.452707 ignition[1027]: INFO : Ignition 2.21.0
Aug 13 00:47:01.452707 ignition[1027]: INFO : Stage: files
Aug 13 00:47:01.455222 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:47:01.455222 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:47:01.455222 ignition[1027]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 00:47:01.459588 ignition[1027]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 00:47:01.459588 ignition[1027]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 00:47:01.459588 ignition[1027]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 00:47:01.459588 ignition[1027]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 00:47:01.466301 ignition[1027]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 00:47:01.466301 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Aug 13 00:47:01.466301 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Aug 13 00:47:01.460044 unknown[1027]: wrote ssh authorized keys file for user: core
Aug 13 00:47:01.509941 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 00:47:02.054139 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Aug 13 00:47:02.054139 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 00:47:02.083129 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 00:47:02.083129 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:47:02.083129 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:47:02.083129 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:47:02.083129 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:47:02.083129 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:47:02.083129 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:47:02.192823 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:47:02.245398 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:47:02.245398 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 00:47:02.318121 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 00:47:02.318121 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 00:47:02.324040 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Aug 13 00:47:02.792111 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 00:47:03.314438 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 00:47:03.314438 ignition[1027]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 00:47:03.319436 ignition[1027]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:47:03.694586 ignition[1027]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:47:03.694586 ignition[1027]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 00:47:03.694586 ignition[1027]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Aug 13 00:47:03.738663 ignition[1027]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 00:47:03.738663 ignition[1027]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 00:47:03.738663 ignition[1027]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Aug 13 00:47:03.738663 ignition[1027]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Aug 13 00:47:03.756605 ignition[1027]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 00:47:03.763505 ignition[1027]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 00:47:03.765402 ignition[1027]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 13 00:47:03.765402 ignition[1027]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 00:47:03.765402 ignition[1027]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 00:47:03.765402 ignition[1027]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:47:03.765402 ignition[1027]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:47:03.765402 ignition[1027]: INFO : files: files passed
Aug 13 00:47:03.765402 ignition[1027]: INFO : Ignition finished successfully
Aug 13 00:47:03.767146 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 00:47:03.778237 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 00:47:03.779585 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 00:47:03.798814 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 00:47:03.798948 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 00:47:03.802752 initrd-setup-root-after-ignition[1055]: grep: /sysroot/oem/oem-release: No such file or directory
Aug 13 00:47:03.806430 initrd-setup-root-after-ignition[1058]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:47:03.808484 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:47:03.811962 initrd-setup-root-after-ignition[1058]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:47:03.809360 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:47:03.811897 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 00:47:03.814038 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 00:47:03.885594 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 00:47:03.886635 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 00:47:03.889631 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 00:47:03.889770 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 00:47:03.893420 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 00:47:03.896316 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 00:47:03.933969 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:47:03.935487 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 00:47:03.969318 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:47:03.969501 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:47:03.984557 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 00:47:03.985671 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 00:47:03.985833 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
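
The files stage above executes a user-provided Ignition config: it creates the "core" user with SSH keys, downloads the Helm tarball and the Kubernetes sysext image, writes several files under /home/core, and installs and enables prepare-helm.service. As a hedged reconstruction only (the real config is not shown in this log; the Butane structure, placeholder key, and unit body are assumptions), a config producing similar operations might look like this:

    # Sketch: a Butane config transpiled to the Ignition JSON this boot
    # parsed. Hypothetical reconstruction; only the paths and URLs that
    # appear in the log above are real.
    cat > config.bu <<'EOF'
    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA...  # placeholder key
    storage:
      files:
        - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
        - path: /opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw
          contents:
            source: https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
          contents: |
            [Unit]
            Description=Unpack helm to /opt/bin
            [Service]
            Type=oneshot
            ExecStart=/usr/bin/tar -C /opt/bin --strip-components=1 -xzf /opt/helm-v3.17.3-linux-amd64.tar.gz linux-amd64/helm
            [Install]
            WantedBy=multi-user.target
    EOF
    butane --strict config.bu > config.ign
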
Aug 13 00:47:03.990316 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 00:47:03.991458 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 00:47:03.993453 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 00:47:03.994461 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:47:03.996903 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 00:47:03.997257 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Aug 13 00:47:03.997743 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 00:47:03.998067 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:47:03.998603 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 00:47:03.998953 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 00:47:04.009898 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 00:47:04.010860 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 00:47:04.011023 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:47:04.015252 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:47:04.016541 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:47:04.017631 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 00:47:04.019753 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:47:04.020042 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 00:47:04.020214 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:47:04.025058 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 00:47:04.025249 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:47:04.026333 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 00:47:04.028474 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 00:47:04.031084 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:47:04.032132 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 00:47:04.032601 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 00:47:04.038439 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 00:47:04.038576 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:47:04.039546 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 00:47:04.039640 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:47:04.041460 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 00:47:04.041623 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:47:04.043776 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 00:47:04.043932 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 00:47:04.052232 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 00:47:04.052333 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 00:47:04.052484 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:47:04.057523 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 00:47:04.058724 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 00:47:04.058898 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:47:04.061214 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 00:47:04.061370 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:47:04.069825 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 00:47:04.075492 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 00:47:04.107049 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 00:47:04.150682 ignition[1082]: INFO : Ignition 2.21.0
Aug 13 00:47:04.150682 ignition[1082]: INFO : Stage: umount
Aug 13 00:47:04.150682 ignition[1082]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:47:04.150682 ignition[1082]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:47:04.155317 ignition[1082]: INFO : umount: umount passed
Aug 13 00:47:04.155317 ignition[1082]: INFO : Ignition finished successfully
Aug 13 00:47:04.157189 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 00:47:04.157398 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 00:47:04.159586 systemd[1]: Stopped target network.target - Network.
Aug 13 00:47:04.160367 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 00:47:04.160441 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 00:47:04.160772 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 00:47:04.160833 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 00:47:04.161122 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 00:47:04.161184 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 00:47:04.161685 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 00:47:04.161746 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 00:47:04.162164 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 00:47:04.169702 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 00:47:04.179244 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 00:47:04.179429 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 00:47:04.184778 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Aug 13 00:47:04.185138 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 00:47:04.185215 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:47:04.189744 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Aug 13 00:47:04.191137 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 00:47:04.191305 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 00:47:04.195314 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Aug 13 00:47:04.195522 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Aug 13 00:47:04.197805 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 00:47:04.197860 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:47:04.201044 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 00:47:04.203119 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 00:47:04.203256 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:47:04.205345 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 00:47:04.205416 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:47:04.208623 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 00:47:04.208694 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:47:04.209906 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:47:04.215598 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 13 00:47:04.230414 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 00:47:04.230680 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:47:04.232024 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 00:47:04.232083 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:47:04.235461 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 00:47:04.235527 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:47:04.237626 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 00:47:04.237686 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:47:04.239188 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 00:47:04.239263 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:47:04.240095 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:47:04.240145 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:47:04.248736 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 00:47:04.252080 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Aug 13 00:47:04.252181 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 00:47:04.256385 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 00:47:04.257604 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:47:04.260279 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:47:04.260344 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:47:04.264337 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 00:47:04.273374 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 00:47:04.284993 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 00:47:04.285149 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 00:47:04.607541 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 00:47:04.607695 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 00:47:04.610687 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 00:47:04.612807 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 00:47:04.612875 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 00:47:04.617121 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 00:47:04.639089 systemd[1]: Switching root.
Aug 13 00:47:04.685638 systemd-journald[220]: Journal stopped
Aug 13 00:47:07.431274 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Aug 13 00:47:07.431348 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 00:47:07.431366 kernel: SELinux: policy capability open_perms=1
Aug 13 00:47:07.431377 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 00:47:07.431389 kernel: SELinux: policy capability always_check_network=0
Aug 13 00:47:07.431400 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 00:47:07.431420 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 00:47:07.431431 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 00:47:07.431444 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 00:47:07.431455 kernel: SELinux: policy capability userspace_initial_context=0
Aug 13 00:47:07.431471 kernel: audit: type=1403 audit(1755046026.197:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 00:47:07.431486 systemd[1]: Successfully loaded SELinux policy in 94.510ms.
Aug 13 00:47:07.431514 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.444ms.
Aug 13 00:47:07.431527 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 00:47:07.431542 systemd[1]: Detected virtualization kvm.
Aug 13 00:47:07.431559 systemd[1]: Detected architecture x86-64.
Aug 13 00:47:07.431571 systemd[1]: Detected first boot.
Aug 13 00:47:07.431583 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:47:07.431595 zram_generator::config[1128]: No configuration found.
Aug 13 00:47:07.431611 kernel: Guest personality initialized and is inactive
Aug 13 00:47:07.431623 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Aug 13 00:47:07.431634 kernel: Initialized host personality
Aug 13 00:47:07.431650 kernel: NET: Registered PF_VSOCK protocol family
Aug 13 00:47:07.431662 systemd[1]: Populated /etc with preset unit settings.
Aug 13 00:47:07.431675 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Aug 13 00:47:07.431687 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 00:47:07.431699 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 00:47:07.431711 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:47:07.431726 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 00:47:07.431742 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 00:47:07.431757 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 00:47:07.431773 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 00:47:07.431787 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 00:47:07.431800 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 00:47:07.431812 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 00:47:07.431824 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 00:47:07.431836 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:47:07.431852 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:47:07.431865 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 00:47:07.431877 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 00:47:07.431890 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 00:47:07.431902 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:47:07.431915 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 00:47:07.431927 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:47:07.431942 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:47:07.431954 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 00:47:07.431966 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 00:47:07.431978 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:47:07.431990 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 00:47:07.432002 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:47:07.432015 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:47:07.432027 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:47:07.432040 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:47:07.432052 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 00:47:07.432067 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 00:47:07.432079 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Aug 13 00:47:07.432091 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:47:07.432103 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:47:07.432115 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:47:07.432127 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 00:47:07.432139 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 00:47:07.432151 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 00:47:07.432163 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 00:47:07.432182 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:47:07.432194 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 00:47:07.432307 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 00:47:07.432324 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 00:47:07.432340 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 00:47:07.432355 systemd[1]: Reached target machines.target - Containers.
Aug 13 00:47:07.432370 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 00:47:07.432385 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 00:47:07.432413 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:47:07.432429 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 00:47:07.432444 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 00:47:07.432459 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 00:47:07.432474 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 00:47:07.432489 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 00:47:07.432504 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 00:47:07.432520 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 00:47:07.432538 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 00:47:07.432553 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 00:47:07.432568 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 00:47:07.432583 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 00:47:07.432598 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 00:47:07.432616 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:47:07.432630 kernel: loop: module loaded
Aug 13 00:47:07.432645 kernel: fuse: init (API version 7.41)
Aug 13 00:47:07.432659 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:47:07.432677 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 00:47:07.432692 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 00:47:07.432708 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Aug 13 00:47:07.432723 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:47:07.432739 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 00:47:07.432756 kernel: ACPI: bus type drm_connector registered
Aug 13 00:47:07.432770 systemd[1]: Stopped verity-setup.service.
Aug 13 00:47:07.432786 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:47:07.432802 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 00:47:07.432817 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 00:47:07.432832 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 00:47:07.432847 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 00:47:07.432862 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 00:47:07.432877 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 00:47:07.432895 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:47:07.432910 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 00:47:07.432925 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 00:47:07.432966 systemd-journald[1192]: Collecting audit messages is disabled.
Aug 13 00:47:07.432999 systemd-journald[1192]: Journal started
Aug 13 00:47:07.433027 systemd-journald[1192]: Runtime Journal (/run/log/journal/64b2c7f4b6264b4280caad93b424c279) is 6M, max 48.6M, 42.5M free.
Aug 13 00:47:06.960949 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 00:47:06.973546 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 13 00:47:06.974035 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 00:47:07.435582 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:47:07.436769 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:47:07.437003 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 00:47:07.438587 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 00:47:07.438808 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 00:47:07.440321 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:47:07.440592 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 00:47:07.442176 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 00:47:07.442460 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 00:47:07.443873 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:47:07.444089 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 00:47:07.445572 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:47:07.447047 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 00:47:07.448875 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 00:47:07.450602 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Aug 13 00:47:07.469486 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 00:47:07.510304 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 00:47:07.512808 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 00:47:07.514053 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 00:47:07.514090 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:47:07.516551 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Aug 13 00:47:07.532271 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 00:47:07.533660 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 00:47:07.535162 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 00:47:07.537749 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
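
The modprobe@*.service entries above are instances of systemd's templated module-loading unit: each instance name is handed to modprobe, the unit runs once, and then deactivates, which is why every instance logs "Deactivated successfully" immediately after finishing. A hedged sketch of the equivalence, using the "fuse" instance from this boot:

    # Sketch: the templated unit and the command it roughly wraps.
    systemctl start modprobe@fuse.service   # unit-managed module load
    modprobe fuse                           # approximately what the unit runs
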
Aug 13 00:47:07.539101 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:47:07.540442 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 00:47:07.541826 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 00:47:07.545330 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:47:07.547942 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 00:47:07.551812 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:47:07.553470 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 00:47:07.554989 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 00:47:07.594086 systemd-journald[1192]: Time spent on flushing to /var/log/journal/64b2c7f4b6264b4280caad93b424c279 is 21.307ms for 975 entries.
Aug 13 00:47:07.594086 systemd-journald[1192]: System Journal (/var/log/journal/64b2c7f4b6264b4280caad93b424c279) is 8M, max 195.6M, 187.6M free.
Aug 13 00:47:07.704679 systemd-journald[1192]: Received client request to flush runtime journal.
Aug 13 00:47:07.704740 kernel: loop0: detected capacity change from 0 to 113872
Aug 13 00:47:07.704774 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 00:47:07.704800 kernel: loop1: detected capacity change from 0 to 229808
Aug 13 00:47:07.611563 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 00:47:07.614515 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:47:07.617757 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 00:47:07.625607 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Aug 13 00:47:07.652292 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 00:47:07.700410 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 00:47:07.707923 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 00:47:07.723436 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Aug 13 00:47:07.735545 kernel: loop2: detected capacity change from 0 to 146240
Aug 13 00:47:07.761901 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 00:47:07.765174 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:47:07.770232 kernel: loop3: detected capacity change from 0 to 113872
Aug 13 00:47:07.811250 kernel: loop4: detected capacity change from 0 to 229808
Aug 13 00:47:07.833225 kernel: loop5: detected capacity change from 0 to 146240
Aug 13 00:47:07.844592 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Aug 13 00:47:07.844612 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Aug 13 00:47:07.848035 (sd-merge)[1267]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Aug 13 00:47:07.849238 (sd-merge)[1267]: Merged extensions into '/usr'.
Aug 13 00:47:07.854143 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:47:07.856511 systemd[1]: Reload requested from client PID 1232 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 00:47:07.856669 systemd[1]: Reloading...
Aug 13 00:47:07.947236 zram_generator::config[1295]: No configuration found.
Aug 13 00:47:08.101801 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:47:08.151743 ldconfig[1227]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 00:47:08.192612 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 00:47:08.192771 systemd[1]: Reloading finished in 335 ms.
Aug 13 00:47:08.228053 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 00:47:08.229805 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 00:47:08.251456 systemd[1]: Starting ensure-sysext.service...
Aug 13 00:47:08.254901 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:47:08.293493 systemd[1]: Reload requested from client PID 1332 ('systemctl') (unit ensure-sysext.service)...
Aug 13 00:47:08.293510 systemd[1]: Reloading...
Aug 13 00:47:08.324430 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Aug 13 00:47:08.324481 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Aug 13 00:47:08.324906 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 00:47:08.325283 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 00:47:08.326407 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 00:47:08.326760 systemd-tmpfiles[1333]: ACLs are not supported, ignoring.
Aug 13 00:47:08.326843 systemd-tmpfiles[1333]: ACLs are not supported, ignoring.
Aug 13 00:47:08.333818 systemd-tmpfiles[1333]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 00:47:08.334694 systemd-tmpfiles[1333]: Skipping /boot
Aug 13 00:47:08.362629 systemd-tmpfiles[1333]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 00:47:08.362838 systemd-tmpfiles[1333]: Skipping /boot
Aug 13 00:47:08.364227 zram_generator::config[1366]: No configuration found.
Aug 13 00:47:08.481180 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:47:08.566514 systemd[1]: Reloading finished in 272 ms.
Aug 13 00:47:08.587363 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 00:47:08.627021 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:47:08.637836 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 13 00:47:08.640914 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 00:47:08.643996 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 00:47:08.668973 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 00:47:08.673303 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
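
The (sd-merge) lines in the preceding entries and the systemd-sysext run completed above overlay the listed extension images (containerd-flatcar, docker-flatcar, kubernetes) onto /usr and /opt; the daemon reload that follows is systemd picking up the units those images contribute. A hedged sketch of inspecting and re-applying that state from a shell:

    # Sketch: inspect the system extensions merged during this boot.
    systemd-sysext list      # the images found, e.g. under /etc/extensions
    systemd-sysext status    # which hierarchies (/usr, /opt) are overlaid
    systemd-sysext refresh   # unmerge and re-merge after adding/removing images
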
Aug 13 00:47:08.676298 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 00:47:08.683455 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:47:08.683794 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 00:47:08.686643 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 00:47:08.691563 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 00:47:08.694624 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 00:47:08.696019 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 00:47:08.697742 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 00:47:08.703221 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 00:47:08.704584 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:47:08.707282 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 00:47:08.709558 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:47:08.709819 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 00:47:08.712315 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:47:08.714542 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 00:47:08.717033 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:47:08.717324 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 00:47:08.729747 systemd-udevd[1403]: Using default interface naming scheme 'v255'.
Aug 13 00:47:08.733740 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:47:08.734053 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 00:47:08.737469 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 00:47:08.742910 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 00:47:08.746635 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 00:47:08.749539 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 00:47:08.749774 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 00:47:08.759841 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 00:47:08.763625 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:47:08.766422 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 00:47:08.775863 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:47:08.776631 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 00:47:08.779414 augenrules[1441]: No rules
Aug 13 00:47:08.779864 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 00:47:08.782616 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 00:47:08.784038 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 13 00:47:08.785997 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:47:08.788645 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:47:08.790650 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 00:47:08.792966 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:47:08.793245 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 00:47:08.795350 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 00:47:08.804448 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 00:47:08.832921 systemd[1]: Finished ensure-sysext.service.
Aug 13 00:47:08.836684 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:47:08.842663 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 13 00:47:08.844045 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 00:47:08.849354 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 00:47:08.857011 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 00:47:08.859216 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 00:47:08.862421 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 00:47:08.863606 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 00:47:08.863653 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 00:47:08.865605 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 00:47:08.870579 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 13 00:47:08.871914 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 00:47:08.871959 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:47:08.872716 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:47:08.872994 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 00:47:08.943376 augenrules[1480]: /sbin/augenrules: No change
Aug 13 00:47:08.895412 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 00:47:08.943652 augenrules[1507]: No rules
Aug 13 00:47:08.896182 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 00:47:08.903076 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:47:08.903499 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 00:47:08.905148 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 00:47:08.930189 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 00:47:08.933295 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 13 00:47:08.948045 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 13 00:47:08.948942 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:47:08.949179 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 00:47:08.951511 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:47:08.967645 systemd-resolved[1402]: Positive Trust Anchors:
Aug 13 00:47:08.968051 systemd-resolved[1402]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:47:08.968132 systemd-resolved[1402]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:47:08.973673 systemd-resolved[1402]: Defaulting to hostname 'linux'.
Aug 13 00:47:08.976109 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:47:08.977661 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:47:09.062649 systemd-networkd[1485]: lo: Link UP
Aug 13 00:47:09.063023 systemd-networkd[1485]: lo: Gained carrier
Aug 13 00:47:09.064775 systemd-networkd[1485]: Enumeration completed
Aug 13 00:47:09.065135 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:47:09.066568 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 13 00:47:09.067869 systemd[1]: Reached target network.target - Network.
Aug 13 00:47:09.068931 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:47:09.070110 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 00:47:09.071479 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 00:47:09.073448 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 00:47:09.073634 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Aug 13 00:47:09.074791 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 00:47:09.076088 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 00:47:09.076110 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:47:09.077051 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 00:47:09.078329 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 00:47:09.079513 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 00:47:09.079752 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:47:09.079759 systemd-networkd[1485]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:47:09.080587 systemd-networkd[1485]: eth0: Link UP
Aug 13 00:47:09.080867 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:47:09.081661 systemd-networkd[1485]: eth0: Gained carrier
Aug 13 00:47:09.081678 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:47:09.083304 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 00:47:09.087038 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 00:47:09.091716 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Aug 13 00:47:09.093422 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Aug 13 00:47:09.095295 systemd-networkd[1485]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 13 00:47:09.096265 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Aug 13 00:47:09.096334 systemd-timesyncd[1487]: Network configuration changed, trying to establish connection.
Aug 13 00:47:09.671538 systemd-resolved[1402]: Clock change detected. Flushing caches.
Aug 13 00:47:09.671732 systemd-timesyncd[1487]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Aug 13 00:47:09.671785 systemd-timesyncd[1487]: Initial clock synchronization to Wed 2025-08-13 00:47:09.671454 UTC.
Aug 13 00:47:09.685392 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Aug 13 00:47:09.689787 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 00:47:09.691901 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Aug 13 00:47:09.695525 kernel: ACPI: button: Power Button [PWRF]
Aug 13 00:47:09.696187 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Aug 13 00:47:09.699638 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 00:47:09.701678 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 00:47:09.724258 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 00:47:09.726828 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:47:09.727843 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:47:09.728943 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 00:47:09.728968 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 00:47:09.730541 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 00:47:09.733028 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
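
At this point the log shows eth0 holding a DHCPv4 lease (10.0.0.115/16 via gateway 10.0.0.1) and systemd-timesyncd synchronized against 10.0.0.1; the clock step at synchronization is why systemd-resolved flushes its caches and why the timestamps jump forward here. A hedged sketch of verifying that state interactively (all three tools ship with systemd; output varies per machine):

    # Sketch: confirm the network/DNS/time state reported above.
    networkctl status eth0        # lease, gateway, and matching .network file
    resolvectl status             # DNS servers and per-link configuration
    timedatectl timesync-status   # NTP server (10.0.0.1 in this log) and sync state
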
Aug 13 00:47:09.736012 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Aug 13 00:47:09.736303 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Aug 13 00:47:09.737626 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 00:47:09.788260 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 00:47:09.798464 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 00:47:09.799608 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 00:47:09.800933 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Aug 13 00:47:09.805579 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 00:47:09.806645 jq[1545]: false
Aug 13 00:47:09.807959 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 00:47:09.813356 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 00:47:09.816177 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 00:47:09.820460 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 00:47:09.827561 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 00:47:09.829638 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 00:47:09.835689 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 00:47:09.836818 google_oslogin_nss_cache[1550]: oslogin_cache_refresh[1550]: Refreshing passwd entry cache
Aug 13 00:47:09.836842 oslogin_cache_refresh[1550]: Refreshing passwd entry cache
Aug 13 00:47:09.840466 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 00:47:09.845780 extend-filesystems[1549]: Found /dev/vda6
Aug 13 00:47:09.870530 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 00:47:09.871590 extend-filesystems[1549]: Found /dev/vda9
Aug 13 00:47:09.874603 extend-filesystems[1549]: Checking size of /dev/vda9
Aug 13 00:47:09.876454 google_oslogin_nss_cache[1550]: oslogin_cache_refresh[1550]: Failure getting users, quitting
Aug 13 00:47:09.876454 google_oslogin_nss_cache[1550]: oslogin_cache_refresh[1550]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Aug 13 00:47:09.876454 google_oslogin_nss_cache[1550]: oslogin_cache_refresh[1550]: Refreshing group entry cache
Aug 13 00:47:09.875944 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Aug 13 00:47:09.875860 oslogin_cache_refresh[1550]: Failure getting users, quitting
Aug 13 00:47:09.875885 oslogin_cache_refresh[1550]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Aug 13 00:47:09.875950 oslogin_cache_refresh[1550]: Refreshing group entry cache
Aug 13 00:47:09.879738 update_engine[1565]: I20250813 00:47:09.879657 1565 main.cc:92] Flatcar Update Engine starting
Aug 13 00:47:09.880608 jq[1568]: true
Aug 13 00:47:09.880914 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 00:47:09.882674 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:47:09.884745 oslogin_cache_refresh[1550]: Failure getting groups, quitting Aug 13 00:47:09.886067 google_oslogin_nss_cache[1550]: oslogin_cache_refresh[1550]: Failure getting groups, quitting Aug 13 00:47:09.886067 google_oslogin_nss_cache[1550]: oslogin_cache_refresh[1550]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 00:47:09.882916 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 00:47:09.884762 oslogin_cache_refresh[1550]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 00:47:09.883230 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:47:09.883482 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 00:47:09.886993 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 13 00:47:09.887295 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Aug 13 00:47:09.935375 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:47:09.938576 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 00:47:09.940803 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 00:47:09.978272 tar[1576]: linux-amd64/LICENSE Aug 13 00:47:09.980536 tar[1576]: linux-amd64/helm Aug 13 00:47:09.984671 (ntainerd)[1579]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 00:47:09.984769 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:47:10.009827 jq[1578]: true Aug 13 00:47:10.065112 dbus-daemon[1543]: [system] SELinux support is enabled Aug 13 00:47:10.065307 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 00:47:10.069483 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:47:10.069514 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 00:47:10.069989 extend-filesystems[1549]: Resized partition /dev/vda9 Aug 13 00:47:10.072683 update_engine[1565]: I20250813 00:47:10.071579 1565 update_check_scheduler.cc:74] Next update check in 11m26s Aug 13 00:47:10.071785 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:47:10.071803 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 00:47:10.074646 extend-filesystems[1598]: resize2fs 1.47.2 (1-Jan-2025) Aug 13 00:47:10.076335 systemd[1]: Started update-engine.service - Update Engine. Aug 13 00:47:10.080763 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 00:47:10.190296 systemd-logind[1558]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 00:47:10.190346 systemd-logind[1558]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 00:47:10.201347 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 13 00:47:10.214504 systemd-logind[1558]: New seat seat0. 
Aug 13 00:47:10.217336 kernel: kvm_amd: TSC scaling supported Aug 13 00:47:10.217365 kernel: kvm_amd: Nested Virtualization enabled Aug 13 00:47:10.307505 kernel: kvm_amd: Nested Paging enabled Aug 13 00:47:10.307545 kernel: kvm_amd: LBR virtualization supported Aug 13 00:47:10.307559 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Aug 13 00:47:10.307572 kernel: kvm_amd: Virtual GIF supported Aug 13 00:47:10.221632 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 00:47:10.310341 sshd_keygen[1575]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:47:10.359556 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 00:47:10.361902 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 00:47:10.401040 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:47:10.401342 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 00:47:10.403734 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 00:47:10.414093 locksmithd[1599]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:47:10.425352 kernel: EDAC MC: Ver: 3.0.0 Aug 13 00:47:10.449389 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 00:47:10.451431 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 00:47:10.456739 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 00:47:10.457109 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 00:47:10.573360 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 13 00:47:10.691990 extend-filesystems[1598]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 00:47:10.691990 extend-filesystems[1598]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 00:47:10.691990 extend-filesystems[1598]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 13 00:47:10.701155 extend-filesystems[1549]: Resized filesystem in /dev/vda9 Aug 13 00:47:10.695847 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:47:10.697115 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 00:47:10.715895 bash[1613]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:47:10.721848 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 00:47:10.725534 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:47:10.730697 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
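
The extend-filesystems lines above grow the root filesystem on /dev/vda9 on-line, from 553472 to 1864699 4k blocks, while it is mounted at /. The equivalent manual step is a single resize2fs invocation, since ext4 supports on-line enlargement of a mounted filesystem (this assumes the underlying partition has already been enlarged):

    resize2fs /dev/vda9

With no explicit size argument, resize2fs grows the filesystem to fill the device.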
Aug 13 00:47:10.778035 containerd[1579]: time="2025-08-13T00:47:10Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 13 00:47:10.784337 containerd[1579]: time="2025-08-13T00:47:10.782182814Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Aug 13 00:47:10.793375 containerd[1579]: time="2025-08-13T00:47:10.793073915Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.79µs" Aug 13 00:47:10.794360 containerd[1579]: time="2025-08-13T00:47:10.794305343Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 13 00:47:10.794467 containerd[1579]: time="2025-08-13T00:47:10.794439565Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 13 00:47:10.794750 containerd[1579]: time="2025-08-13T00:47:10.794727485Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 13 00:47:10.794815 containerd[1579]: time="2025-08-13T00:47:10.794801734Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 13 00:47:10.794877 containerd[1579]: time="2025-08-13T00:47:10.794865233Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 00:47:10.795003 containerd[1579]: time="2025-08-13T00:47:10.794985328Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 00:47:10.795068 containerd[1579]: time="2025-08-13T00:47:10.795054438Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 00:47:10.795412 containerd[1579]: time="2025-08-13T00:47:10.795388805Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 00:47:10.795480 containerd[1579]: time="2025-08-13T00:47:10.795466120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 00:47:10.795530 containerd[1579]: time="2025-08-13T00:47:10.795518017Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 00:47:10.795576 containerd[1579]: time="2025-08-13T00:47:10.795563983Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 13 00:47:10.795765 containerd[1579]: time="2025-08-13T00:47:10.795746335Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 13 00:47:10.796108 containerd[1579]: time="2025-08-13T00:47:10.796087926Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 00:47:10.796185 containerd[1579]: time="2025-08-13T00:47:10.796169529Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Aug 13 00:47:10.796231 containerd[1579]: time="2025-08-13T00:47:10.796219513Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 13 00:47:10.796306 containerd[1579]: time="2025-08-13T00:47:10.796292529Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 13 00:47:10.796725 containerd[1579]: time="2025-08-13T00:47:10.796702438Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 13 00:47:10.796852 containerd[1579]: time="2025-08-13T00:47:10.796835337Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:47:10.802224 containerd[1579]: time="2025-08-13T00:47:10.802198944Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 13 00:47:10.802308 containerd[1579]: time="2025-08-13T00:47:10.802294403Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 13 00:47:10.802428 containerd[1579]: time="2025-08-13T00:47:10.802413216Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 13 00:47:10.802508 containerd[1579]: time="2025-08-13T00:47:10.802493657Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 13 00:47:10.802589 containerd[1579]: time="2025-08-13T00:47:10.802573466Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 13 00:47:10.802642 containerd[1579]: time="2025-08-13T00:47:10.802630493Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 13 00:47:10.802695 containerd[1579]: time="2025-08-13T00:47:10.802682871Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 13 00:47:10.802764 containerd[1579]: time="2025-08-13T00:47:10.802749226Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 13 00:47:10.802813 containerd[1579]: time="2025-08-13T00:47:10.802801544Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 13 00:47:10.802861 containerd[1579]: time="2025-08-13T00:47:10.802850065Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 13 00:47:10.802907 containerd[1579]: time="2025-08-13T00:47:10.802896211Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 13 00:47:10.802957 containerd[1579]: time="2025-08-13T00:47:10.802944772Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 13 00:47:10.803142 containerd[1579]: time="2025-08-13T00:47:10.803115082Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 13 00:47:10.803250 containerd[1579]: time="2025-08-13T00:47:10.803227272Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 13 00:47:10.803345 containerd[1579]: time="2025-08-13T00:47:10.803310418Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 13 00:47:10.803402 containerd[1579]: time="2025-08-13T00:47:10.803389576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Aug 13 00:47:10.803482 containerd[1579]: time="2025-08-13T00:47:10.803461832Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 13 00:47:10.803551 containerd[1579]: time="2025-08-13T00:47:10.803534298Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 13 00:47:10.803644 containerd[1579]: time="2025-08-13T00:47:10.803622583Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 13 00:47:10.803728 containerd[1579]: time="2025-08-13T00:47:10.803708875Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 13 00:47:10.803795 containerd[1579]: time="2025-08-13T00:47:10.803780740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 13 00:47:10.803858 containerd[1579]: time="2025-08-13T00:47:10.803840051Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 13 00:47:10.803926 containerd[1579]: time="2025-08-13T00:47:10.803911044Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 13 00:47:10.804064 containerd[1579]: time="2025-08-13T00:47:10.804042160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 13 00:47:10.804148 containerd[1579]: time="2025-08-13T00:47:10.804133471Z" level=info msg="Start snapshots syncer" Aug 13 00:47:10.804227 containerd[1579]: time="2025-08-13T00:47:10.804213271Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 13 00:47:10.804680 containerd[1579]: time="2025-08-13T00:47:10.804633178Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 13 00:47:10.804955 containerd[1579]: time="2025-08-13T00:47:10.804932139Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 13 00:47:10.806636 containerd[1579]: time="2025-08-13T00:47:10.806598363Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 13 00:47:10.807053 containerd[1579]: time="2025-08-13T00:47:10.807033970Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 13 00:47:10.807119 containerd[1579]: time="2025-08-13T00:47:10.807106095Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 13 00:47:10.807168 containerd[1579]: time="2025-08-13T00:47:10.807156239Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 13 00:47:10.807217 containerd[1579]: time="2025-08-13T00:47:10.807205571Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 13 00:47:10.807268 containerd[1579]: time="2025-08-13T00:47:10.807255956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 13 00:47:10.807339 containerd[1579]: time="2025-08-13T00:47:10.807305118Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 13 00:47:10.807394 containerd[1579]: time="2025-08-13T00:47:10.807379768Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 13 00:47:10.807493 containerd[1579]: time="2025-08-13T00:47:10.807477642Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 13 00:47:10.807546 containerd[1579]: 
time="2025-08-13T00:47:10.807534328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 13 00:47:10.807638 containerd[1579]: time="2025-08-13T00:47:10.807601874Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 13 00:47:10.807744 containerd[1579]: time="2025-08-13T00:47:10.807726528Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 00:47:10.807866 containerd[1579]: time="2025-08-13T00:47:10.807848266Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 00:47:10.807915 containerd[1579]: time="2025-08-13T00:47:10.807903189Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 00:47:10.807964 containerd[1579]: time="2025-08-13T00:47:10.807951340Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 00:47:10.808008 containerd[1579]: time="2025-08-13T00:47:10.807996625Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 13 00:47:10.808055 containerd[1579]: time="2025-08-13T00:47:10.808043332Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 13 00:47:10.808103 containerd[1579]: time="2025-08-13T00:47:10.808091513Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 13 00:47:10.808170 containerd[1579]: time="2025-08-13T00:47:10.808157216Z" level=info msg="runtime interface created" Aug 13 00:47:10.808213 containerd[1579]: time="2025-08-13T00:47:10.808202280Z" level=info msg="created NRI interface" Aug 13 00:47:10.808259 containerd[1579]: time="2025-08-13T00:47:10.808247936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 13 00:47:10.808306 containerd[1579]: time="2025-08-13T00:47:10.808295565Z" level=info msg="Connect containerd service" Aug 13 00:47:10.808393 containerd[1579]: time="2025-08-13T00:47:10.808379963Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 00:47:10.809518 containerd[1579]: time="2025-08-13T00:47:10.809495275Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:47:10.888544 tar[1576]: linux-amd64/README.md Aug 13 00:47:10.919943 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 00:47:11.092650 containerd[1579]: time="2025-08-13T00:47:11.092569789Z" level=info msg="Start subscribing containerd event" Aug 13 00:47:11.092798 containerd[1579]: time="2025-08-13T00:47:11.092678132Z" level=info msg="Start recovering state" Aug 13 00:47:11.092894 containerd[1579]: time="2025-08-13T00:47:11.092875232Z" level=info msg="Start event monitor" Aug 13 00:47:11.092931 containerd[1579]: time="2025-08-13T00:47:11.092883417Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Aug 13 00:47:11.092931 containerd[1579]: time="2025-08-13T00:47:11.092909466Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:47:11.092999 containerd[1579]: time="2025-08-13T00:47:11.092940154Z" level=info msg="Start streaming server" Aug 13 00:47:11.092999 containerd[1579]: time="2025-08-13T00:47:11.092959740Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 13 00:47:11.092999 containerd[1579]: time="2025-08-13T00:47:11.092970330Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:47:11.093063 containerd[1579]: time="2025-08-13T00:47:11.092969649Z" level=info msg="runtime interface starting up..." Aug 13 00:47:11.093063 containerd[1579]: time="2025-08-13T00:47:11.093057594Z" level=info msg="starting plugins..." Aug 13 00:47:11.093099 containerd[1579]: time="2025-08-13T00:47:11.093082420Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 13 00:47:11.093256 containerd[1579]: time="2025-08-13T00:47:11.093230678Z" level=info msg="containerd successfully booted in 0.315822s" Aug 13 00:47:11.093412 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 00:47:11.148242 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 00:47:11.151217 systemd[1]: Started sshd@0-10.0.0.115:22-10.0.0.1:53850.service - OpenSSH per-connection server daemon (10.0.0.1:53850). Aug 13 00:47:11.240040 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 53850 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g Aug 13 00:47:11.243382 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:47:11.250456 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 00:47:11.252794 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 00:47:11.261148 systemd-logind[1558]: New session 1 of user core. Aug 13 00:47:11.389694 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 00:47:11.394491 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 00:47:11.421261 (systemd)[1673]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:47:11.425270 systemd-logind[1558]: New session c1 of user core. Aug 13 00:47:11.632199 systemd[1673]: Queued start job for default target default.target. Aug 13 00:47:11.651523 systemd[1673]: Created slice app.slice - User Application Slice. Aug 13 00:47:11.651560 systemd[1673]: Reached target paths.target - Paths. Aug 13 00:47:11.651614 systemd[1673]: Reached target timers.target - Timers. Aug 13 00:47:11.653681 systemd[1673]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 00:47:11.667594 systemd[1673]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 00:47:11.667730 systemd[1673]: Reached target sockets.target - Sockets. Aug 13 00:47:11.667768 systemd[1673]: Reached target basic.target - Basic System. Aug 13 00:47:11.667808 systemd[1673]: Reached target default.target - Main User Target. Aug 13 00:47:11.667841 systemd[1673]: Startup finished in 233ms. Aug 13 00:47:11.668468 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 00:47:11.672206 systemd[1]: Started session-1.scope - Session 1 of User core. 
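
The "failed to load cni during init" error above is expected on an unconfigured node: the CRI plugin looks for a network config in /etc/cni/net.d (with binaries in /opt/cni/bin, per the config dump earlier) and nothing has installed one yet; a CNI provider normally drops a file in later. A minimal bridge conflist of the kind that would satisfy the loader, patterned on containerd's documented example (the name and the 10.88.0.0/16 subnet are illustrative):

    /etc/cni/net.d/10-containerd-net.conflist:
    {
      "cniVersion": "1.0.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }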
Aug 13 00:47:11.695804 systemd-networkd[1485]: eth0: Gained IPv6LL Aug 13 00:47:11.699365 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 00:47:11.701374 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 00:47:11.704363 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 13 00:47:11.707090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:47:11.710225 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 00:47:11.747201 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 00:47:11.756912 systemd[1]: Started sshd@1-10.0.0.115:22-10.0.0.1:53858.service - OpenSSH per-connection server daemon (10.0.0.1:53858). Aug 13 00:47:11.769037 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 13 00:47:11.769567 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 13 00:47:11.771883 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 00:47:11.809954 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 53858 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g Aug 13 00:47:11.811557 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:47:11.816649 systemd-logind[1558]: New session 2 of user core. Aug 13 00:47:11.827495 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 00:47:11.891277 sshd[1704]: Connection closed by 10.0.0.1 port 53858 Aug 13 00:47:11.891912 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:11.925103 systemd[1]: sshd@1-10.0.0.115:22-10.0.0.1:53858.service: Deactivated successfully. Aug 13 00:47:11.927161 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:47:11.927908 systemd-logind[1558]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:47:11.930674 systemd[1]: Started sshd@2-10.0.0.115:22-10.0.0.1:53866.service - OpenSSH per-connection server daemon (10.0.0.1:53866). Aug 13 00:47:12.012440 systemd-logind[1558]: Removed session 2. Aug 13 00:47:12.065627 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 53866 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g Aug 13 00:47:12.067068 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:47:12.071298 systemd-logind[1558]: New session 3 of user core. Aug 13 00:47:12.082488 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 00:47:12.138857 sshd[1712]: Connection closed by 10.0.0.1 port 53866 Aug 13 00:47:12.139235 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:12.144251 systemd[1]: sshd@2-10.0.0.115:22-10.0.0.1:53866.service: Deactivated successfully. Aug 13 00:47:12.146406 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:47:12.147105 systemd-logind[1558]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:47:12.148721 systemd-logind[1558]: Removed session 3. Aug 13 00:47:13.933505 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:47:13.955564 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 00:47:13.957677 systemd[1]: Startup finished in 3.491s (kernel) + 10.519s (initrd) + 7.279s (userspace) = 21.290s. 
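
coreos-metadata and kubelet above are started only after systemd-networkd-wait-online finishes and network-online.target is reached. Units that need working networking opt into that ordering with the standard systemd idiom (the documented pattern, not a file read from this host):

    [Unit]
    Wants=network-online.target
    After=network-online.target

After= alone would only order the unit if the target were already queued; the Wants= is what pulls it into the transaction.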
Aug 13 00:47:13.986819 (kubelet)[1722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:47:15.375570 kubelet[1722]: E0813 00:47:15.375488 1722 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:47:15.380510 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:47:15.380724 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:47:15.381127 systemd[1]: kubelet.service: Consumed 2.063s CPU time, 268.4M memory peak. Aug 13 00:47:22.157037 systemd[1]: Started sshd@3-10.0.0.115:22-10.0.0.1:55004.service - OpenSSH per-connection server daemon (10.0.0.1:55004). Aug 13 00:47:22.226844 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 55004 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g Aug 13 00:47:22.228688 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:47:22.233593 systemd-logind[1558]: New session 4 of user core. Aug 13 00:47:22.243504 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 00:47:22.298765 sshd[1737]: Connection closed by 10.0.0.1 port 55004 Aug 13 00:47:22.299189 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:22.309396 systemd[1]: sshd@3-10.0.0.115:22-10.0.0.1:55004.service: Deactivated successfully. Aug 13 00:47:22.311486 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:47:22.312334 systemd-logind[1558]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:47:22.315388 systemd[1]: Started sshd@4-10.0.0.115:22-10.0.0.1:55020.service - OpenSSH per-connection server daemon (10.0.0.1:55020). Aug 13 00:47:22.316036 systemd-logind[1558]: Removed session 4. Aug 13 00:47:22.370896 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 55020 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g Aug 13 00:47:22.372578 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:47:22.377425 systemd-logind[1558]: New session 5 of user core. Aug 13 00:47:22.389485 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 00:47:22.440309 sshd[1745]: Connection closed by 10.0.0.1 port 55020 Aug 13 00:47:22.440586 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:22.452955 systemd[1]: sshd@4-10.0.0.115:22-10.0.0.1:55020.service: Deactivated successfully. Aug 13 00:47:22.454998 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:47:22.455780 systemd-logind[1558]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:47:22.458878 systemd[1]: Started sshd@5-10.0.0.115:22-10.0.0.1:55022.service - OpenSSH per-connection server daemon (10.0.0.1:55022). Aug 13 00:47:22.459520 systemd-logind[1558]: Removed session 5. Aug 13 00:47:22.522639 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 55022 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g Aug 13 00:47:22.524170 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:47:22.528764 systemd-logind[1558]: New session 6 of user core. 
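
The kubelet exit above, repeated on every restart later in this log, is the normal failure loop of a node that has not yet joined a cluster: /var/lib/kubelet/config.yaml is written by kubeadm init or kubeadm join, so until one of those runs the service starts, fails, and is rescheduled. For reference, a minimal sketch of the shape that file takes (field values here are assumptions for illustration):

    # /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock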
Aug 13 00:47:22.550479 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 00:47:22.605938 sshd[1753]: Connection closed by 10.0.0.1 port 55022 Aug 13 00:47:22.606404 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:22.619144 systemd[1]: sshd@5-10.0.0.115:22-10.0.0.1:55022.service: Deactivated successfully. Aug 13 00:47:22.621098 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:47:22.621960 systemd-logind[1558]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:47:22.625153 systemd[1]: Started sshd@6-10.0.0.115:22-10.0.0.1:55026.service - OpenSSH per-connection server daemon (10.0.0.1:55026). Aug 13 00:47:22.626045 systemd-logind[1558]: Removed session 6. Aug 13 00:47:22.690685 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 55026 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g Aug 13 00:47:22.692399 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:47:22.697658 systemd-logind[1558]: New session 7 of user core. Aug 13 00:47:22.713466 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 00:47:22.834452 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:47:22.834773 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:47:22.851428 sudo[1762]: pam_unix(sudo:session): session closed for user root Aug 13 00:47:22.853045 sshd[1761]: Connection closed by 10.0.0.1 port 55026 Aug 13 00:47:22.853394 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:22.871791 systemd[1]: sshd@6-10.0.0.115:22-10.0.0.1:55026.service: Deactivated successfully. Aug 13 00:47:22.873798 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:47:22.874612 systemd-logind[1558]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:47:22.877728 systemd[1]: Started sshd@7-10.0.0.115:22-10.0.0.1:55040.service - OpenSSH per-connection server daemon (10.0.0.1:55040). Aug 13 00:47:22.878372 systemd-logind[1558]: Removed session 7. Aug 13 00:47:22.935968 sshd[1768]: Accepted publickey for core from 10.0.0.1 port 55040 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g Aug 13 00:47:22.937773 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:47:22.942817 systemd-logind[1558]: New session 8 of user core. Aug 13 00:47:22.953456 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 00:47:23.008461 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:47:23.008791 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:47:23.266512 sudo[1772]: pam_unix(sudo:session): session closed for user root Aug 13 00:47:23.273727 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 00:47:23.274064 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:47:23.284917 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:47:23.336754 augenrules[1794]: No rules Aug 13 00:47:23.338835 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:47:23.339152 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
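
The sudo and augenrules entries above show the install script removing the stock rule files under /etc/audit/rules.d and restarting audit-rules.service; augenrules then reports "No rules" simply because it assembles the kernel ruleset by concatenating every *.rules file in that directory and none remain. A rules file uses auditctl syntax, e.g. (an illustrative rule, not one from this host):

    # /etc/audit/rules.d/10-example.rules
    # record writes and attribute changes to /etc/passwd under key "identity"
    -w /etc/passwd -p wa -k identity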
Aug 13 00:47:23.340536 sudo[1771]: pam_unix(sudo:session): session closed for user root Aug 13 00:47:23.342178 sshd[1770]: Connection closed by 10.0.0.1 port 55040 Aug 13 00:47:23.342611 sshd-session[1768]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:23.356565 systemd[1]: sshd@7-10.0.0.115:22-10.0.0.1:55040.service: Deactivated successfully. Aug 13 00:47:23.358399 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:47:23.359296 systemd-logind[1558]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:47:23.362307 systemd[1]: Started sshd@8-10.0.0.115:22-10.0.0.1:55046.service - OpenSSH per-connection server daemon (10.0.0.1:55046). Aug 13 00:47:23.363395 systemd-logind[1558]: Removed session 8. Aug 13 00:47:23.421939 sshd[1803]: Accepted publickey for core from 10.0.0.1 port 55046 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g Aug 13 00:47:23.423628 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:47:23.428514 systemd-logind[1558]: New session 9 of user core. Aug 13 00:47:23.438461 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 00:47:23.491861 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:47:23.492181 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:47:24.141655 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 00:47:24.171720 (dockerd)[1826]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:47:24.690270 dockerd[1826]: time="2025-08-13T00:47:24.690160899Z" level=info msg="Starting up" Aug 13 00:47:24.692363 dockerd[1826]: time="2025-08-13T00:47:24.692339163Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 13 00:47:25.073399 dockerd[1826]: time="2025-08-13T00:47:25.073335672Z" level=info msg="Loading containers: start." Aug 13 00:47:25.085344 kernel: Initializing XFRM netlink socket Aug 13 00:47:25.379871 systemd-networkd[1485]: docker0: Link UP Aug 13 00:47:25.386301 dockerd[1826]: time="2025-08-13T00:47:25.386247872Z" level=info msg="Loading containers: done." Aug 13 00:47:25.459064 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:47:25.461792 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:47:25.553331 dockerd[1826]: time="2025-08-13T00:47:25.553263847Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:47:25.553499 dockerd[1826]: time="2025-08-13T00:47:25.553398269Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Aug 13 00:47:25.553552 dockerd[1826]: time="2025-08-13T00:47:25.553531929Z" level=info msg="Initializing buildkit" Aug 13 00:47:25.834690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
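
dockerd above selects the overlay2 storage driver and warns that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; the unset DOCKER_OPT_* variables it lists are the hook points Flatcar's unit offers for daemon options. The same settings are more commonly pinned in /etc/docker/daemon.json, e.g. (illustrative values; "bip" carries what DOCKER_OPT_BIP would):

    {
      "storage-driver": "overlay2",
      "bip": "172.17.0.1/16"
    }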
Aug 13 00:47:25.840341 (kubelet)[2029]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:47:26.731392 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1002424805 wd_nsec: 1002424717 Aug 13 00:47:26.749162 dockerd[1826]: time="2025-08-13T00:47:26.749073189Z" level=info msg="Completed buildkit initialization" Aug 13 00:47:26.756347 dockerd[1826]: time="2025-08-13T00:47:26.755985520Z" level=info msg="Daemon has completed initialization" Aug 13 00:47:26.756347 dockerd[1826]: time="2025-08-13T00:47:26.756083353Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:47:26.757554 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 00:47:26.795281 kubelet[2029]: E0813 00:47:26.795191 2029 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:47:26.807482 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:47:26.807690 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:47:26.808190 systemd[1]: kubelet.service: Consumed 1.183s CPU time, 111.5M memory peak. Aug 13 00:47:28.037534 containerd[1579]: time="2025-08-13T00:47:28.037463210Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\"" Aug 13 00:47:30.479933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1613663521.mount: Deactivated successfully. Aug 13 00:47:32.697872 containerd[1579]: time="2025-08-13T00:47:32.697774352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:32.710047 containerd[1579]: time="2025-08-13T00:47:32.709993723Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.3: active requests=0, bytes read=30078237" Aug 13 00:47:32.732197 containerd[1579]: time="2025-08-13T00:47:32.732131792Z" level=info msg="ImageCreate event name:\"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:32.754104 containerd[1579]: time="2025-08-13T00:47:32.754057393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:32.755358 containerd[1579]: time="2025-08-13T00:47:32.755301345Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.3\" with image id \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\", size \"30075037\" in 4.717765158s" Aug 13 00:47:32.755477 containerd[1579]: time="2025-08-13T00:47:32.755455204Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\"" Aug 13 00:47:32.758433 containerd[1579]: time="2025-08-13T00:47:32.758391639Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\"" Aug 13 
00:47:34.844402 containerd[1579]: time="2025-08-13T00:47:34.844278186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:34.883127 containerd[1579]: time="2025-08-13T00:47:34.882992956Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.3: active requests=0, bytes read=26019361" Aug 13 00:47:34.910389 containerd[1579]: time="2025-08-13T00:47:34.910289237Z" level=info msg="ImageCreate event name:\"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:35.005275 containerd[1579]: time="2025-08-13T00:47:35.005184045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:35.006881 containerd[1579]: time="2025-08-13T00:47:35.006817828Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.3\" with image id \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\", size \"27646922\" in 2.248382396s" Aug 13 00:47:35.006881 containerd[1579]: time="2025-08-13T00:47:35.006854457Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\"" Aug 13 00:47:35.007639 containerd[1579]: time="2025-08-13T00:47:35.007581079Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\"" Aug 13 00:47:36.959030 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 00:47:36.961221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:47:37.214341 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:47:37.233681 (kubelet)[2120]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:47:37.294084 kubelet[2120]: E0813 00:47:37.293981 2120 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:47:37.298517 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:47:37.298734 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:47:37.299118 systemd[1]: kubelet.service: Consumed 282ms CPU time, 110.6M memory peak. 
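
"Scheduled restart job, restart counter is at 2" above is systemd's Restart= handling re-queuing the failed kubelet, and the roughly ten-second gap between each failure and the next start matches the RestartSec= delay a kubelet unit typically carries. The relevant unit fragment looks like this (standard kubeadm-style values, assumed rather than read from this host):

    [Service]
    Restart=always
    RestartSec=10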
Aug 13 00:47:40.258348 containerd[1579]: time="2025-08-13T00:47:40.258275139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:40.259301 containerd[1579]: time="2025-08-13T00:47:40.259248584Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.3: active requests=0, bytes read=20155013" Aug 13 00:47:40.261725 containerd[1579]: time="2025-08-13T00:47:40.261684852Z" level=info msg="ImageCreate event name:\"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:40.264654 containerd[1579]: time="2025-08-13T00:47:40.264569992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:40.265689 containerd[1579]: time="2025-08-13T00:47:40.265637013Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.3\" with image id \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\", size \"21782592\" in 5.258003726s" Aug 13 00:47:40.265689 containerd[1579]: time="2025-08-13T00:47:40.265682528Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\"" Aug 13 00:47:40.266573 containerd[1579]: time="2025-08-13T00:47:40.266306939Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\"" Aug 13 00:47:42.337253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3560453009.mount: Deactivated successfully. 
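
The PullImage/Pulled pairs in this stretch are containerd's CRI image service fetching the Kubernetes control-plane images one at a time, each reported with bytes read and wall-clock duration. The same pull can be driven by hand with crictl against the socket shown in the earlier config dump (assuming crictl is installed; pulls fall back to the runtime endpoint when no image endpoint is set):

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-proxy:v1.33.3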
Aug 13 00:47:43.142572 containerd[1579]: time="2025-08-13T00:47:43.142513613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:43.143291 containerd[1579]: time="2025-08-13T00:47:43.143264165Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.3: active requests=0, bytes read=31892666" Aug 13 00:47:43.144705 containerd[1579]: time="2025-08-13T00:47:43.144613659Z" level=info msg="ImageCreate event name:\"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:43.146497 containerd[1579]: time="2025-08-13T00:47:43.146456161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:43.146949 containerd[1579]: time="2025-08-13T00:47:43.146910955Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.3\" with image id \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\", repo tag \"registry.k8s.io/kube-proxy:v1.33.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\", size \"31891685\" in 2.880552038s" Aug 13 00:47:43.146949 containerd[1579]: time="2025-08-13T00:47:43.146941664Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\"" Aug 13 00:47:43.147438 containerd[1579]: time="2025-08-13T00:47:43.147414933Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 00:47:43.751355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount668323913.mount: Deactivated successfully. 
Aug 13 00:47:45.325221 containerd[1579]: time="2025-08-13T00:47:45.325134067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:45.325878 containerd[1579]: time="2025-08-13T00:47:45.325809953Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Aug 13 00:47:45.327545 containerd[1579]: time="2025-08-13T00:47:45.327505242Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:45.332405 containerd[1579]: time="2025-08-13T00:47:45.331907338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:45.335052 containerd[1579]: time="2025-08-13T00:47:45.334990698Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.187550075s" Aug 13 00:47:45.335052 containerd[1579]: time="2025-08-13T00:47:45.335046665Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Aug 13 00:47:45.335639 containerd[1579]: time="2025-08-13T00:47:45.335595978Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:47:46.540965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1201943632.mount: Deactivated successfully. 
Aug 13 00:47:46.548198 containerd[1579]: time="2025-08-13T00:47:46.548122827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:47:46.548972 containerd[1579]: time="2025-08-13T00:47:46.548931124Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 00:47:46.550400 containerd[1579]: time="2025-08-13T00:47:46.550300695Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:47:46.552535 containerd[1579]: time="2025-08-13T00:47:46.552491949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:47:46.553184 containerd[1579]: time="2025-08-13T00:47:46.553127265Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.217490679s" Aug 13 00:47:46.553184 containerd[1579]: time="2025-08-13T00:47:46.553169816Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:47:46.554097 containerd[1579]: time="2025-08-13T00:47:46.553841392Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Aug 13 00:47:46.984856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2642298928.mount: Deactivated successfully. Aug 13 00:47:47.459047 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 00:47:47.462712 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:47:48.177528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:47:48.192032 (kubelet)[2249]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:47:48.254875 kubelet[2249]: E0813 00:47:48.254794 2249 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:47:48.259686 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:47:48.259903 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:47:48.260284 systemd[1]: kubelet.service: Consumed 253ms CPU time, 109M memory peak. 
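
The CRI config dumped earlier sets SystemdCgroup:true in the runc options, and the kubelet below logs "Using cgroup driver setting received from the CRI runtime" with cgroupDriver="systemd", so runtime and kubelet agree on cgroup management. In containerd 2.x TOML that option lives under the io.containerd.cri.v1.runtime plugin id seen in this log; a sketch (exact table names are best confirmed against `containerd config default`):

    [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc]
      runtime_type = 'io.containerd.runc.v2'
      [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
        SystemdCgroup = true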
Aug 13 00:47:50.883541 containerd[1579]: time="2025-08-13T00:47:50.883423183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:50.884790 containerd[1579]: time="2025-08-13T00:47:50.884735302Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Aug 13 00:47:50.887748 containerd[1579]: time="2025-08-13T00:47:50.887696504Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:50.891005 containerd[1579]: time="2025-08-13T00:47:50.890921167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:50.892144 containerd[1579]: time="2025-08-13T00:47:50.892104642Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.338225848s" Aug 13 00:47:50.892144 containerd[1579]: time="2025-08-13T00:47:50.892138306Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Aug 13 00:47:54.792652 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:47:54.792848 systemd[1]: kubelet.service: Consumed 253ms CPU time, 109M memory peak. Aug 13 00:47:54.795884 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:47:54.824940 systemd[1]: Reload requested from client PID 2301 ('systemctl') (unit session-9.scope)... Aug 13 00:47:54.824955 systemd[1]: Reloading... Aug 13 00:47:54.923394 zram_generator::config[2348]: No configuration found. Aug 13 00:47:55.211378 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:47:55.336715 systemd[1]: Reloading finished in 511 ms. Aug 13 00:47:55.413229 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 00:47:55.413390 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 00:47:55.413792 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:47:55.413851 systemd[1]: kubelet.service: Consumed 158ms CPU time, 98.2M memory peak. Aug 13 00:47:55.415908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:47:55.602729 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:47:55.615868 (kubelet)[2392]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:47:55.650531 kubelet[2392]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:47:55.650531 kubelet[2392]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Aug 13 00:47:55.650531 kubelet[2392]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:47:55.651058 kubelet[2392]: I0813 00:47:55.650611 2392 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:47:55.698172 update_engine[1565]: I20250813 00:47:55.698025 1565 update_attempter.cc:509] Updating boot flags... Aug 13 00:47:55.906896 kubelet[2392]: I0813 00:47:55.906789 2392 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 00:47:55.906896 kubelet[2392]: I0813 00:47:55.906818 2392 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:47:55.907028 kubelet[2392]: I0813 00:47:55.907022 2392 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 00:47:55.933496 kubelet[2392]: I0813 00:47:55.933179 2392 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:47:55.933922 kubelet[2392]: E0813 00:47:55.933887 2392 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 13 00:47:55.939529 kubelet[2392]: I0813 00:47:55.939509 2392 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 00:47:55.945251 kubelet[2392]: I0813 00:47:55.945218 2392 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:47:55.945518 kubelet[2392]: I0813 00:47:55.945485 2392 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:47:55.945669 kubelet[2392]: I0813 00:47:55.945507 2392 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:47:55.945791 kubelet[2392]: I0813 00:47:55.945675 2392 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:47:55.945791 kubelet[2392]: I0813 00:47:55.945683 2392 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 00:47:55.946507 kubelet[2392]: I0813 00:47:55.946482 2392 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:47:55.948755 kubelet[2392]: I0813 00:47:55.948718 2392 kubelet.go:480] "Attempting to sync node with API server" Aug 13 00:47:55.948755 kubelet[2392]: I0813 00:47:55.948745 2392 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:47:55.948842 kubelet[2392]: I0813 00:47:55.948777 2392 kubelet.go:386] "Adding apiserver pod source" Aug 13 00:47:55.950639 kubelet[2392]: I0813 00:47:55.950545 2392 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:47:55.953511 kubelet[2392]: E0813 00:47:55.953478 2392 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 00:47:55.953588 kubelet[2392]: E0813 00:47:55.953514 2392 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 
00:47:55.954784 kubelet[2392]: I0813 00:47:55.954765 2392 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 00:47:55.955223 kubelet[2392]: I0813 00:47:55.955193 2392 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 00:47:55.956397 kubelet[2392]: W0813 00:47:55.956376 2392 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:47:55.959612 kubelet[2392]: I0813 00:47:55.959589 2392 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:47:55.959664 kubelet[2392]: I0813 00:47:55.959641 2392 server.go:1289] "Started kubelet" Aug 13 00:47:55.961428 kubelet[2392]: I0813 00:47:55.960226 2392 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:47:55.962917 kubelet[2392]: I0813 00:47:55.962770 2392 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:47:55.963361 kubelet[2392]: I0813 00:47:55.963307 2392 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:47:55.966335 kubelet[2392]: E0813 00:47:55.964181 2392 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.115:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.115:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2d1b82041214 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 00:47:55.959611924 +0000 UTC m=+0.339269792,LastTimestamp:2025-08-13 00:47:55.959611924 +0000 UTC m=+0.339269792,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 00:47:55.966335 kubelet[2392]: I0813 00:47:55.965826 2392 server.go:317] "Adding debug handlers to kubelet server" Aug 13 00:47:55.967014 kubelet[2392]: E0813 00:47:55.966985 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:47:55.967111 kubelet[2392]: I0813 00:47:55.967093 2392 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:47:55.967230 kubelet[2392]: I0813 00:47:55.967204 2392 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:47:55.967842 kubelet[2392]: I0813 00:47:55.967821 2392 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:47:55.967979 kubelet[2392]: I0813 00:47:55.967967 2392 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:47:55.969068 kubelet[2392]: E0813 00:47:55.969048 2392 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 00:47:55.969141 kubelet[2392]: E0813 00:47:55.969051 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="200ms" Aug 13 00:47:55.969274 kubelet[2392]: I0813 00:47:55.969249 2392 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:47:55.969559 kubelet[2392]: I0813 00:47:55.969514 2392 factory.go:223] Registration of the systemd container factory successfully Aug 13 00:47:55.969715 kubelet[2392]: I0813 00:47:55.969599 2392 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:47:55.971726 kubelet[2392]: E0813 00:47:55.971694 2392 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:47:55.971952 kubelet[2392]: I0813 00:47:55.971929 2392 factory.go:223] Registration of the containerd container factory successfully Aug 13 00:47:55.986348 kubelet[2392]: I0813 00:47:55.985565 2392 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:47:55.986348 kubelet[2392]: I0813 00:47:55.985583 2392 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:47:55.986348 kubelet[2392]: I0813 00:47:55.985600 2392 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:47:55.986537 kubelet[2392]: I0813 00:47:55.986485 2392 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 00:47:55.988041 kubelet[2392]: I0813 00:47:55.988017 2392 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 00:47:55.988041 kubelet[2392]: I0813 00:47:55.988035 2392 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 00:47:55.988126 kubelet[2392]: I0813 00:47:55.988055 2392 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 13 00:47:55.988126 kubelet[2392]: I0813 00:47:55.988069 2392 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 00:47:55.988126 kubelet[2392]: E0813 00:47:55.988103 2392 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:47:55.989470 kubelet[2392]: E0813 00:47:55.989391 2392 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 00:47:56.067348 kubelet[2392]: E0813 00:47:56.067282 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:47:56.088764 kubelet[2392]: E0813 00:47:56.088722 2392 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:47:56.167739 kubelet[2392]: E0813 00:47:56.167603 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:47:56.170205 kubelet[2392]: E0813 00:47:56.170173 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="400ms" Aug 13 00:47:56.268376 kubelet[2392]: E0813 00:47:56.268301 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:47:56.289712 kubelet[2392]: E0813 00:47:56.289645 2392 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:47:56.369043 kubelet[2392]: E0813 00:47:56.368979 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:47:56.470148 kubelet[2392]: E0813 00:47:56.470090 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:47:56.499357 kubelet[2392]: I0813 00:47:56.498682 2392 policy_none.go:49] "None policy: Start" Aug 13 00:47:56.499357 kubelet[2392]: I0813 00:47:56.498728 2392 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:47:56.499357 kubelet[2392]: I0813 00:47:56.498742 2392 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:47:56.531336 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 00:47:56.575418 kubelet[2392]: E0813 00:47:56.571297 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:47:56.575418 kubelet[2392]: E0813 00:47:56.571812 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="800ms" Aug 13 00:47:56.606399 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 00:47:56.628941 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
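Note the lease controller's retry interval doubling across these entries: interval="200ms", then "400ms", then "800ms". A sketch of that exponential backoff under the assumption of plain doubling with a cap (the cap value is illustrative, not taken from kubelet source):

    package main

    import (
        "fmt"
        "time"
    )

    // nextInterval doubles the retry delay up to a limit, matching the
    // 200ms -> 400ms -> 800ms progression in the lease-controller entries.
    func nextInterval(cur, limit time.Duration) time.Duration {
        next := cur * 2
        if next > limit {
            return limit
        }
        return next
    }

    func main() {
        d := 200 * time.Millisecond
        for i := 0; i < 4; i++ {
            fmt.Println(d)                     // 200ms 400ms 800ms 1.6s
            d = nextInterval(d, 7*time.Second) // cap chosen for illustration
        }
    }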
Aug 13 00:47:56.649836 kubelet[2392]: E0813 00:47:56.649625 2392 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 00:47:56.649953 kubelet[2392]: I0813 00:47:56.649932 2392 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:47:56.649993 kubelet[2392]: I0813 00:47:56.649946 2392 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:47:56.650224 kubelet[2392]: I0813 00:47:56.650191 2392 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:47:56.651527 kubelet[2392]: E0813 00:47:56.651474 2392 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:47:56.651916 kubelet[2392]: E0813 00:47:56.651539 2392 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 13 00:47:56.703313 systemd[1]: Created slice kubepods-burstable-podb7bad71874d9388d1942f96175ad9fba.slice - libcontainer container kubepods-burstable-podb7bad71874d9388d1942f96175ad9fba.slice. Aug 13 00:47:56.721487 kubelet[2392]: E0813 00:47:56.721308 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:47:56.724607 systemd[1]: Created slice kubepods-burstable-podee495458985854145bfdfbfdfe0cc6b2.slice - libcontainer container kubepods-burstable-podee495458985854145bfdfbfdfe0cc6b2.slice. Aug 13 00:47:56.745039 kubelet[2392]: E0813 00:47:56.744998 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:47:56.747935 systemd[1]: Created slice kubepods-burstable-pod9f30683e4d57ebf2ca7dbf4704079d65.slice - libcontainer container kubepods-burstable-pod9f30683e4d57ebf2ca7dbf4704079d65.slice. 
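The slice names above encode the pod's QoS class and UID, with dashes in the UID escaped to underscores because systemd uses "-" to express slice hierarchy. A small helper, written here to mirror the observed names rather than copied from kubelet:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice reproduces names like
    // kubepods-burstable-podb7bad71874d9388d1942f96175ad9fba.slice:
    // a dash would nest the slice, so UID dashes become underscores.
    func podSlice(qosClass, podUID string) string {
        escaped := strings.ReplaceAll(podUID, "-", "_")
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
    }

    func main() {
        fmt.Println(podSlice("besteffort", "061a04fe-0eee-4700-a6c5-7f6450e5c8cb"))
        // kubepods-besteffort-pod061a04fe_0eee_4700_a6c5_7f6450e5c8cb.slice,
        // exactly the kube-proxy slice created later in this log
    }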
Aug 13 00:47:56.749836 kubelet[2392]: E0813 00:47:56.749803 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:47:56.751957 kubelet[2392]: I0813 00:47:56.751926 2392 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:47:56.752419 kubelet[2392]: E0813 00:47:56.752378 2392 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Aug 13 00:47:56.772829 kubelet[2392]: I0813 00:47:56.772795 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7bad71874d9388d1942f96175ad9fba-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b7bad71874d9388d1942f96175ad9fba\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:47:56.772899 kubelet[2392]: I0813 00:47:56.772841 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:47:56.772899 kubelet[2392]: I0813 00:47:56.772860 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:47:56.772899 kubelet[2392]: I0813 00:47:56.772876 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:47:56.772990 kubelet[2392]: I0813 00:47:56.772914 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f30683e4d57ebf2ca7dbf4704079d65-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9f30683e4d57ebf2ca7dbf4704079d65\") " pod="kube-system/kube-scheduler-localhost" Aug 13 00:47:56.773013 kubelet[2392]: I0813 00:47:56.772982 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7bad71874d9388d1942f96175ad9fba-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b7bad71874d9388d1942f96175ad9fba\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:47:56.773081 kubelet[2392]: I0813 00:47:56.773018 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:47:56.773081 kubelet[2392]: I0813 00:47:56.773077 2392 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:47:56.773226 kubelet[2392]: I0813 00:47:56.773104 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7bad71874d9388d1942f96175ad9fba-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b7bad71874d9388d1942f96175ad9fba\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:47:56.819766 kubelet[2392]: E0813 00:47:56.819632 2392 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 00:47:56.829858 kubelet[2392]: E0813 00:47:56.829799 2392 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 00:47:56.901563 kubelet[2392]: E0813 00:47:56.901387 2392 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.115:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.115:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2d1b82041214 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 00:47:55.959611924 +0000 UTC m=+0.339269792,LastTimestamp:2025-08-13 00:47:55.959611924 +0000 UTC m=+0.339269792,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 00:47:56.954185 kubelet[2392]: I0813 00:47:56.954139 2392 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:47:56.954529 kubelet[2392]: E0813 00:47:56.954484 2392 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Aug 13 00:47:57.022254 kubelet[2392]: E0813 00:47:57.022099 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:57.023062 containerd[1579]: time="2025-08-13T00:47:57.022961404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b7bad71874d9388d1942f96175ad9fba,Namespace:kube-system,Attempt:0,}" Aug 13 00:47:57.046212 kubelet[2392]: E0813 00:47:57.046170 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:57.046853 containerd[1579]: time="2025-08-13T00:47:57.046804836Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ee495458985854145bfdfbfdfe0cc6b2,Namespace:kube-system,Attempt:0,}" Aug 13 00:47:57.050739 kubelet[2392]: E0813 00:47:57.050467 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:57.051078 containerd[1579]: time="2025-08-13T00:47:57.051039798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9f30683e4d57ebf2ca7dbf4704079d65,Namespace:kube-system,Attempt:0,}" Aug 13 00:47:57.051160 containerd[1579]: time="2025-08-13T00:47:57.051123547Z" level=info msg="connecting to shim 4c421c24b6e24f97ffef398b61897bda580121d23469f99e83e91c7ee180adc2" address="unix:///run/containerd/s/c8a16cc2e3744f24a18809cfde96aed631807b3bcbf5a6e8ef0eee40e85aef11" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:47:57.074479 systemd[1]: Started cri-containerd-4c421c24b6e24f97ffef398b61897bda580121d23469f99e83e91c7ee180adc2.scope - libcontainer container 4c421c24b6e24f97ffef398b61897bda580121d23469f99e83e91c7ee180adc2. Aug 13 00:47:57.087351 containerd[1579]: time="2025-08-13T00:47:57.084754680Z" level=info msg="connecting to shim 35249ae814531d3152f7a620fcf477af1fd9150b08b4bfb871ffd554c7881353" address="unix:///run/containerd/s/0846a4ed1039ea8bb02ae3838e321ff92323493ce76ed9005324a295140faa1e" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:47:57.099467 containerd[1579]: time="2025-08-13T00:47:57.099390392Z" level=info msg="connecting to shim fe4bfac2d8c90f84f40b1b6a6d2a953816c7840274e3531af73f152b19d09f04" address="unix:///run/containerd/s/e0f71b76c8c0e1b5d67320d6f8e45f2a46104443c1e80ce4f88642580e526580" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:47:57.116360 kubelet[2392]: E0813 00:47:57.115992 2392 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 00:47:57.124465 systemd[1]: Started cri-containerd-35249ae814531d3152f7a620fcf477af1fd9150b08b4bfb871ffd554c7881353.scope - libcontainer container 35249ae814531d3152f7a620fcf477af1fd9150b08b4bfb871ffd554c7881353. Aug 13 00:47:57.127039 systemd[1]: Started cri-containerd-fe4bfac2d8c90f84f40b1b6a6d2a953816c7840274e3531af73f152b19d09f04.scope - libcontainer container fe4bfac2d8c90f84f40b1b6a6d2a953816c7840274e3531af73f152b19d09f04. 
Aug 13 00:47:57.147311 containerd[1579]: time="2025-08-13T00:47:57.147238804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b7bad71874d9388d1942f96175ad9fba,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c421c24b6e24f97ffef398b61897bda580121d23469f99e83e91c7ee180adc2\"" Aug 13 00:47:57.149217 kubelet[2392]: E0813 00:47:57.149181 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:57.155398 containerd[1579]: time="2025-08-13T00:47:57.155354316Z" level=info msg="CreateContainer within sandbox \"4c421c24b6e24f97ffef398b61897bda580121d23469f99e83e91c7ee180adc2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:47:57.166776 containerd[1579]: time="2025-08-13T00:47:57.166736645Z" level=info msg="Container 7935ab76a74c778c5d7aa054da6c59466a31cfb057f83ace96ebe16a1f4036cd: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:57.183547 containerd[1579]: time="2025-08-13T00:47:57.183428221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9f30683e4d57ebf2ca7dbf4704079d65,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe4bfac2d8c90f84f40b1b6a6d2a953816c7840274e3531af73f152b19d09f04\"" Aug 13 00:47:57.184308 kubelet[2392]: E0813 00:47:57.184277 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:57.184506 containerd[1579]: time="2025-08-13T00:47:57.184469173Z" level=info msg="CreateContainer within sandbox \"4c421c24b6e24f97ffef398b61897bda580121d23469f99e83e91c7ee180adc2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7935ab76a74c778c5d7aa054da6c59466a31cfb057f83ace96ebe16a1f4036cd\"" Aug 13 00:47:57.186346 containerd[1579]: time="2025-08-13T00:47:57.186304510Z" level=info msg="StartContainer for \"7935ab76a74c778c5d7aa054da6c59466a31cfb057f83ace96ebe16a1f4036cd\"" Aug 13 00:47:57.187775 containerd[1579]: time="2025-08-13T00:47:57.187736653Z" level=info msg="connecting to shim 7935ab76a74c778c5d7aa054da6c59466a31cfb057f83ace96ebe16a1f4036cd" address="unix:///run/containerd/s/c8a16cc2e3744f24a18809cfde96aed631807b3bcbf5a6e8ef0eee40e85aef11" protocol=ttrpc version=3 Aug 13 00:47:57.189173 containerd[1579]: time="2025-08-13T00:47:57.189123029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ee495458985854145bfdfbfdfe0cc6b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"35249ae814531d3152f7a620fcf477af1fd9150b08b4bfb871ffd554c7881353\"" Aug 13 00:47:57.189756 kubelet[2392]: E0813 00:47:57.189733 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:57.191150 containerd[1579]: time="2025-08-13T00:47:57.191117567Z" level=info msg="CreateContainer within sandbox \"fe4bfac2d8c90f84f40b1b6a6d2a953816c7840274e3531af73f152b19d09f04\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:47:57.195602 containerd[1579]: time="2025-08-13T00:47:57.195562656Z" level=info msg="CreateContainer within sandbox \"35249ae814531d3152f7a620fcf477af1fd9150b08b4bfb871ffd554c7881353\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:47:57.202754 
containerd[1579]: time="2025-08-13T00:47:57.202717869Z" level=info msg="Container ee2f03a723a37fd369f6658c0c7f4683f94437941fedc96589426bd5e7e48b4b: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:57.209239 systemd[1]: Started cri-containerd-7935ab76a74c778c5d7aa054da6c59466a31cfb057f83ace96ebe16a1f4036cd.scope - libcontainer container 7935ab76a74c778c5d7aa054da6c59466a31cfb057f83ace96ebe16a1f4036cd. Aug 13 00:47:57.213103 containerd[1579]: time="2025-08-13T00:47:57.211830760Z" level=info msg="CreateContainer within sandbox \"fe4bfac2d8c90f84f40b1b6a6d2a953816c7840274e3531af73f152b19d09f04\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ee2f03a723a37fd369f6658c0c7f4683f94437941fedc96589426bd5e7e48b4b\"" Aug 13 00:47:57.213103 containerd[1579]: time="2025-08-13T00:47:57.212382175Z" level=info msg="StartContainer for \"ee2f03a723a37fd369f6658c0c7f4683f94437941fedc96589426bd5e7e48b4b\"" Aug 13 00:47:57.213311 containerd[1579]: time="2025-08-13T00:47:57.213262943Z" level=info msg="connecting to shim ee2f03a723a37fd369f6658c0c7f4683f94437941fedc96589426bd5e7e48b4b" address="unix:///run/containerd/s/e0f71b76c8c0e1b5d67320d6f8e45f2a46104443c1e80ce4f88642580e526580" protocol=ttrpc version=3 Aug 13 00:47:57.224094 containerd[1579]: time="2025-08-13T00:47:57.224054544Z" level=info msg="Container 58d0cb68285b0d87015c2a8d3dbd6613b64e048575767a5c4ad56c0cb6757352: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:57.234877 containerd[1579]: time="2025-08-13T00:47:57.234823331Z" level=info msg="CreateContainer within sandbox \"35249ae814531d3152f7a620fcf477af1fd9150b08b4bfb871ffd554c7881353\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"58d0cb68285b0d87015c2a8d3dbd6613b64e048575767a5c4ad56c0cb6757352\"" Aug 13 00:47:57.235307 containerd[1579]: time="2025-08-13T00:47:57.235275007Z" level=info msg="StartContainer for \"58d0cb68285b0d87015c2a8d3dbd6613b64e048575767a5c4ad56c0cb6757352\"" Aug 13 00:47:57.240347 containerd[1579]: time="2025-08-13T00:47:57.238980626Z" level=info msg="connecting to shim 58d0cb68285b0d87015c2a8d3dbd6613b64e048575767a5c4ad56c0cb6757352" address="unix:///run/containerd/s/0846a4ed1039ea8bb02ae3838e321ff92323493ce76ed9005324a295140faa1e" protocol=ttrpc version=3 Aug 13 00:47:57.240594 systemd[1]: Started cri-containerd-ee2f03a723a37fd369f6658c0c7f4683f94437941fedc96589426bd5e7e48b4b.scope - libcontainer container ee2f03a723a37fd369f6658c0c7f4683f94437941fedc96589426bd5e7e48b4b. Aug 13 00:47:57.271514 systemd[1]: Started cri-containerd-58d0cb68285b0d87015c2a8d3dbd6613b64e048575767a5c4ad56c0cb6757352.scope - libcontainer container 58d0cb68285b0d87015c2a8d3dbd6613b64e048575767a5c4ad56c0cb6757352. 
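The create/start handshake above (RunPodSandbox returns a sandbox id, CreateContainer returns a container id, StartContainer confirms) is easier to follow once the structured fields are pulled out of containerd's logfmt-style lines. A parsing sketch, with a regex of our own tuned to the escaped quoting seen in this journal:

    package main

    import (
        "fmt"
        "regexp"
    )

    // kvRe matches key="value" pairs in containerd's logfmt-style output,
    // tolerating backslash-escaped quotes inside msg strings.
    var kvRe = regexp.MustCompile(`(\w+)="((?:[^"\\]|\\.)*)"`)

    // parse extracts the quoted fields of one log line into a map; escape
    // sequences inside values are left as-is.
    func parse(line string) map[string]string {
        out := map[string]string{}
        for _, m := range kvRe.FindAllStringSubmatch(line, -1) {
            out[m[1]] = m[2]
        }
        return out
    }

    func main() {
        line := `time="2025-08-13T00:47:57.309286826Z" level=info msg="StartContainer for \"ee2f03a723a37fd369f6658c0c7f4683f94437941fedc96589426bd5e7e48b4b\" returns successfully"`
        fmt.Println(parse(line)["msg"])
    }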
Aug 13 00:47:57.297347 containerd[1579]: time="2025-08-13T00:47:57.295498028Z" level=info msg="StartContainer for \"7935ab76a74c778c5d7aa054da6c59466a31cfb057f83ace96ebe16a1f4036cd\" returns successfully" Aug 13 00:47:57.309364 containerd[1579]: time="2025-08-13T00:47:57.309286826Z" level=info msg="StartContainer for \"ee2f03a723a37fd369f6658c0c7f4683f94437941fedc96589426bd5e7e48b4b\" returns successfully" Aug 13 00:47:57.346086 containerd[1579]: time="2025-08-13T00:47:57.346020745Z" level=info msg="StartContainer for \"58d0cb68285b0d87015c2a8d3dbd6613b64e048575767a5c4ad56c0cb6757352\" returns successfully" Aug 13 00:47:57.356579 kubelet[2392]: I0813 00:47:57.356530 2392 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:47:57.356985 kubelet[2392]: E0813 00:47:57.356952 2392 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Aug 13 00:47:57.996820 kubelet[2392]: E0813 00:47:57.996557 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:47:57.996820 kubelet[2392]: E0813 00:47:57.996730 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:58.002017 kubelet[2392]: E0813 00:47:58.001812 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:47:58.002300 kubelet[2392]: E0813 00:47:58.002161 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:58.004096 kubelet[2392]: E0813 00:47:58.004077 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:47:58.004507 kubelet[2392]: E0813 00:47:58.004471 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:58.159516 kubelet[2392]: I0813 00:47:58.159479 2392 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:47:59.008809 kubelet[2392]: E0813 00:47:59.008773 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:47:59.009828 kubelet[2392]: E0813 00:47:59.009406 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:59.009828 kubelet[2392]: E0813 00:47:59.009578 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:47:59.010035 kubelet[2392]: E0813 00:47:59.009951 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:47:59.147057 kubelet[2392]: E0813 00:47:59.146998 2392 nodelease.go:49] "Failed to get node when trying to set owner ref to the node 
lease" err="nodes \"localhost\" not found" node="localhost" Aug 13 00:47:59.389739 kubelet[2392]: I0813 00:47:59.389277 2392 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 13 00:47:59.389739 kubelet[2392]: E0813 00:47:59.389377 2392 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 13 00:47:59.409531 kubelet[2392]: E0813 00:47:59.409465 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:47:59.510031 kubelet[2392]: E0813 00:47:59.509950 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:47:59.610912 kubelet[2392]: E0813 00:47:59.610830 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:47:59.711883 kubelet[2392]: E0813 00:47:59.711837 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:47:59.812742 kubelet[2392]: E0813 00:47:59.812681 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:47:59.913585 kubelet[2392]: E0813 00:47:59.913474 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:48:00.069521 kubelet[2392]: I0813 00:48:00.069366 2392 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 00:48:00.076456 kubelet[2392]: E0813 00:48:00.076418 2392 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Aug 13 00:48:00.076456 kubelet[2392]: I0813 00:48:00.076453 2392 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:48:00.078440 kubelet[2392]: E0813 00:48:00.078379 2392 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:48:00.078440 kubelet[2392]: I0813 00:48:00.078414 2392 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 00:48:00.080255 kubelet[2392]: E0813 00:48:00.080220 2392 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Aug 13 00:48:00.953894 kubelet[2392]: I0813 00:48:00.953817 2392 apiserver.go:52] "Watching apiserver" Aug 13 00:48:00.968946 kubelet[2392]: I0813 00:48:00.968892 2392 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:48:02.002844 kubelet[2392]: I0813 00:48:02.002803 2392 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 00:48:02.008012 kubelet[2392]: E0813 00:48:02.007985 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:02.011250 kubelet[2392]: E0813 00:48:02.011230 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:02.051455 systemd[1]: Reload requested from client PID 2696 ('systemctl') (unit session-9.scope)... Aug 13 00:48:02.051468 systemd[1]: Reloading... Aug 13 00:48:02.131536 zram_generator::config[2742]: No configuration found. Aug 13 00:48:02.179533 kubelet[2392]: I0813 00:48:02.179506 2392 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:48:02.259561 kubelet[2392]: E0813 00:48:02.259439 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:02.310362 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:48:02.446920 systemd[1]: Reloading finished in 395 ms. Aug 13 00:48:02.474536 kubelet[2392]: I0813 00:48:02.474444 2392 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:48:02.474624 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:48:02.487652 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:48:02.487962 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:48:02.488015 systemd[1]: kubelet.service: Consumed 871ms CPU time, 132.3M memory peak. Aug 13 00:48:02.490147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:48:02.701944 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:48:02.707021 (kubelet)[2784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:48:02.748509 kubelet[2784]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:48:02.749028 kubelet[2784]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:48:02.749028 kubelet[2784]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
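The restarted kubelet (PID 2784, below) prints the same deprecation warnings and the same nodeConfig, including hard eviction thresholds such as memory.available < 100Mi. A toy evaluation of that one signal, with byte counts hardcoded for illustration (kubelet itself uses resource.Quantity and a richer observation pipeline):

    package main

    import "fmt"

    // memoryPressure mirrors the hard eviction rule in the nodeConfig
    // dumps: evict when memory.available drops below 100Mi.
    func memoryPressure(availableBytes uint64) bool {
        const threshold = 100 * 1024 * 1024 // 100Mi
        return availableBytes < threshold
    }

    func main() {
        fmt.Println(memoryPressure(512 * 1024 * 1024)) // false: headroom left
        fmt.Println(memoryPressure(64 * 1024 * 1024))  // true: below 100Mi -> evict
    }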
Aug 13 00:48:02.749130 kubelet[2784]: I0813 00:48:02.749089 2784 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:48:02.755351 kubelet[2784]: I0813 00:48:02.755297 2784 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 00:48:02.755351 kubelet[2784]: I0813 00:48:02.755336 2784 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:48:02.755672 kubelet[2784]: I0813 00:48:02.755639 2784 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 00:48:02.757282 kubelet[2784]: I0813 00:48:02.757250 2784 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Aug 13 00:48:02.760203 kubelet[2784]: I0813 00:48:02.760170 2784 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:48:02.763440 kubelet[2784]: I0813 00:48:02.763414 2784 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 00:48:02.769973 kubelet[2784]: I0813 00:48:02.769941 2784 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:48:02.770248 kubelet[2784]: I0813 00:48:02.770219 2784 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:48:02.770447 kubelet[2784]: I0813 00:48:02.770254 2784 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:48:02.770447 kubelet[2784]: I0813 00:48:02.770444 2784 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:48:02.770549 kubelet[2784]: I0813 00:48:02.770454 2784 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 00:48:02.770549 kubelet[2784]: I0813 00:48:02.770508 2784 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:48:02.770687 kubelet[2784]: I0813 
00:48:02.770672 2784 kubelet.go:480] "Attempting to sync node with API server" Aug 13 00:48:02.770714 kubelet[2784]: I0813 00:48:02.770690 2784 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:48:02.770747 kubelet[2784]: I0813 00:48:02.770714 2784 kubelet.go:386] "Adding apiserver pod source" Aug 13 00:48:02.770747 kubelet[2784]: I0813 00:48:02.770730 2784 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:48:02.772591 kubelet[2784]: I0813 00:48:02.772555 2784 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 00:48:02.777370 kubelet[2784]: I0813 00:48:02.774410 2784 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 00:48:02.784164 kubelet[2784]: I0813 00:48:02.784123 2784 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:48:02.784348 kubelet[2784]: I0813 00:48:02.784301 2784 server.go:1289] "Started kubelet" Aug 13 00:48:02.784496 kubelet[2784]: I0813 00:48:02.784445 2784 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:48:02.788728 kubelet[2784]: I0813 00:48:02.788664 2784 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:48:02.790403 kubelet[2784]: I0813 00:48:02.790377 2784 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:48:02.791001 kubelet[2784]: E0813 00:48:02.790978 2784 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:48:02.791060 kubelet[2784]: I0813 00:48:02.791015 2784 server.go:317] "Adding debug handlers to kubelet server" Aug 13 00:48:02.791876 kubelet[2784]: I0813 00:48:02.791848 2784 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:48:02.791993 kubelet[2784]: I0813 00:48:02.791964 2784 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:48:02.794285 kubelet[2784]: I0813 00:48:02.792493 2784 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:48:02.794285 kubelet[2784]: I0813 00:48:02.792103 2784 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:48:02.794285 kubelet[2784]: I0813 00:48:02.792815 2784 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:48:02.797360 kubelet[2784]: I0813 00:48:02.797313 2784 factory.go:223] Registration of the containerd container factory successfully Aug 13 00:48:02.797360 kubelet[2784]: I0813 00:48:02.797352 2784 factory.go:223] Registration of the systemd container factory successfully Aug 13 00:48:02.797501 kubelet[2784]: I0813 00:48:02.797433 2784 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:48:02.808260 kubelet[2784]: I0813 00:48:02.808230 2784 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 00:48:02.809896 kubelet[2784]: I0813 00:48:02.809871 2784 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:48:02.809896 kubelet[2784]: I0813 00:48:02.809889 2784 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 00:48:02.809972 kubelet[2784]: I0813 00:48:02.809908 2784 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 00:48:02.809972 kubelet[2784]: I0813 00:48:02.809915 2784 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 00:48:02.810026 kubelet[2784]: E0813 00:48:02.809967 2784 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:48:02.836829 kubelet[2784]: I0813 00:48:02.836795 2784 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:48:02.836829 kubelet[2784]: I0813 00:48:02.836814 2784 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:48:02.836829 kubelet[2784]: I0813 00:48:02.836841 2784 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:48:02.837024 kubelet[2784]: I0813 00:48:02.836973 2784 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:48:02.837024 kubelet[2784]: I0813 00:48:02.836987 2784 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:48:02.837024 kubelet[2784]: I0813 00:48:02.837004 2784 policy_none.go:49] "None policy: Start" Aug 13 00:48:02.837024 kubelet[2784]: I0813 00:48:02.837013 2784 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:48:02.837024 kubelet[2784]: I0813 00:48:02.837024 2784 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:48:02.837176 kubelet[2784]: I0813 00:48:02.837102 2784 state_mem.go:75] "Updated machine memory state" Aug 13 00:48:02.841281 kubelet[2784]: E0813 00:48:02.841190 2784 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 00:48:02.841445 kubelet[2784]: I0813 00:48:02.841416 2784 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:48:02.841493 kubelet[2784]: I0813 00:48:02.841436 2784 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:48:02.843497 kubelet[2784]: I0813 00:48:02.843477 2784 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:48:02.843737 kubelet[2784]: E0813 00:48:02.843714 2784 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 00:48:02.911122 kubelet[2784]: I0813 00:48:02.910921 2784 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 00:48:02.911122 kubelet[2784]: I0813 00:48:02.911005 2784 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:48:02.911430 kubelet[2784]: I0813 00:48:02.911363 2784 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 00:48:02.916898 kubelet[2784]: E0813 00:48:02.916864 2784 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 00:48:02.917260 kubelet[2784]: E0813 00:48:02.917228 2784 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:48:02.947656 kubelet[2784]: I0813 00:48:02.947632 2784 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:48:02.952348 kubelet[2784]: I0813 00:48:02.952231 2784 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Aug 13 00:48:02.952348 kubelet[2784]: I0813 00:48:02.952308 2784 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 13 00:48:03.094198 kubelet[2784]: I0813 00:48:03.094157 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f30683e4d57ebf2ca7dbf4704079d65-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9f30683e4d57ebf2ca7dbf4704079d65\") " pod="kube-system/kube-scheduler-localhost" Aug 13 00:48:03.094198 kubelet[2784]: I0813 00:48:03.094194 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7bad71874d9388d1942f96175ad9fba-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b7bad71874d9388d1942f96175ad9fba\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:48:03.094426 kubelet[2784]: I0813 00:48:03.094219 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7bad71874d9388d1942f96175ad9fba-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b7bad71874d9388d1942f96175ad9fba\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:48:03.094426 kubelet[2784]: I0813 00:48:03.094242 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:48:03.094426 kubelet[2784]: I0813 00:48:03.094260 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:48:03.094426 kubelet[2784]: I0813 00:48:03.094276 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7bad71874d9388d1942f96175ad9fba-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b7bad71874d9388d1942f96175ad9fba\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:48:03.094426 kubelet[2784]: I0813 00:48:03.094293 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:48:03.094588 kubelet[2784]: I0813 00:48:03.094345 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:48:03.094588 kubelet[2784]: I0813 00:48:03.094368 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:48:03.216908 kubelet[2784]: E0813 00:48:03.216858 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:03.217233 kubelet[2784]: E0813 00:48:03.217203 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:03.217426 kubelet[2784]: E0813 00:48:03.217397 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:03.773154 kubelet[2784]: I0813 00:48:03.773087 2784 apiserver.go:52] "Watching apiserver" Aug 13 00:48:03.793502 kubelet[2784]: I0813 00:48:03.793432 2784 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:48:03.825034 kubelet[2784]: I0813 00:48:03.824765 2784 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 00:48:03.825034 kubelet[2784]: I0813 00:48:03.824851 2784 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:48:03.825034 kubelet[2784]: I0813 00:48:03.824901 2784 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 00:48:03.833348 kubelet[2784]: E0813 00:48:03.832406 2784 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:48:03.833348 kubelet[2784]: E0813 00:48:03.832591 2784 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 00:48:03.833348 kubelet[2784]: E0813 00:48:03.832765 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:03.833348 kubelet[2784]: E0813 00:48:03.832797 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:03.833348 kubelet[2784]: E0813 00:48:03.832952 2784 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 13 00:48:03.838128 kubelet[2784]: E0813 00:48:03.837268 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:03.846586 kubelet[2784]: I0813 00:48:03.846482 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.846440777 podStartE2EDuration="1.846440777s" podCreationTimestamp="2025-08-13 00:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:48:03.845787054 +0000 UTC m=+1.134039907" watchObservedRunningTime="2025-08-13 00:48:03.846440777 +0000 UTC m=+1.134693630" Aug 13 00:48:03.854352 kubelet[2784]: I0813 00:48:03.854266 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.854245543 podStartE2EDuration="1.854245543s" podCreationTimestamp="2025-08-13 00:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:48:03.854123903 +0000 UTC m=+1.142376756" watchObservedRunningTime="2025-08-13 00:48:03.854245543 +0000 UTC m=+1.142498406" Aug 13 00:48:03.870370 kubelet[2784]: I0813 00:48:03.869944 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.869922802 podStartE2EDuration="1.869922802s" podCreationTimestamp="2025-08-13 00:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:48:03.861402495 +0000 UTC m=+1.149655348" watchObservedRunningTime="2025-08-13 00:48:03.869922802 +0000 UTC m=+1.158175655" Aug 13 00:48:04.826226 kubelet[2784]: E0813 00:48:04.826191 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:04.826659 kubelet[2784]: E0813 00:48:04.826484 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:04.826923 kubelet[2784]: E0813 00:48:04.826889 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:05.827688 kubelet[2784]: E0813 00:48:05.827647 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:06.661913 kubelet[2784]: E0813 00:48:06.661869 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:07.893187 kubelet[2784]: I0813 00:48:07.893149 2784 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:48:07.893722 containerd[1579]: time="2025-08-13T00:48:07.893665072Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:48:07.894046 kubelet[2784]: I0813 00:48:07.893909 2784 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:48:08.690422 systemd[1]: Created slice kubepods-besteffort-pod061a04fe_0eee_4700_a6c5_7f6450e5c8cb.slice - libcontainer container kubepods-besteffort-pod061a04fe_0eee_4700_a6c5_7f6450e5c8cb.slice. Aug 13 00:48:08.724234 kubelet[2784]: I0813 00:48:08.724154 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9jr4\" (UniqueName: \"kubernetes.io/projected/061a04fe-0eee-4700-a6c5-7f6450e5c8cb-kube-api-access-r9jr4\") pod \"kube-proxy-2mz42\" (UID: \"061a04fe-0eee-4700-a6c5-7f6450e5c8cb\") " pod="kube-system/kube-proxy-2mz42" Aug 13 00:48:08.724234 kubelet[2784]: I0813 00:48:08.724214 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/061a04fe-0eee-4700-a6c5-7f6450e5c8cb-kube-proxy\") pod \"kube-proxy-2mz42\" (UID: \"061a04fe-0eee-4700-a6c5-7f6450e5c8cb\") " pod="kube-system/kube-proxy-2mz42" Aug 13 00:48:08.724494 kubelet[2784]: I0813 00:48:08.724288 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/061a04fe-0eee-4700-a6c5-7f6450e5c8cb-xtables-lock\") pod \"kube-proxy-2mz42\" (UID: \"061a04fe-0eee-4700-a6c5-7f6450e5c8cb\") " pod="kube-system/kube-proxy-2mz42" Aug 13 00:48:08.724494 kubelet[2784]: I0813 00:48:08.724369 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/061a04fe-0eee-4700-a6c5-7f6450e5c8cb-lib-modules\") pod \"kube-proxy-2mz42\" (UID: \"061a04fe-0eee-4700-a6c5-7f6450e5c8cb\") " pod="kube-system/kube-proxy-2mz42" Aug 13 00:48:09.007403 kubelet[2784]: E0813 00:48:09.007342 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:09.008264 containerd[1579]: time="2025-08-13T00:48:09.008175141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2mz42,Uid:061a04fe-0eee-4700-a6c5-7f6450e5c8cb,Namespace:kube-system,Attempt:0,}" Aug 13 00:48:09.281354 systemd[1]: Created slice kubepods-besteffort-pod99e7f05a_3791_4edb_97a0_be7fcc53d6e7.slice - libcontainer container kubepods-besteffort-pod99e7f05a_3791_4edb_97a0_be7fcc53d6e7.slice. Aug 13 00:48:09.294342 containerd[1579]: time="2025-08-13T00:48:09.293448195Z" level=info msg="connecting to shim 7eb7800675408a41a913c0b4f7e4feb60cddbbd2a98b85d506973b34c4120f66" address="unix:///run/containerd/s/d48bfa0a46131a70501e4443e848149b9cd8d3a5b5ca1d54b7cb62b1a2b2aa27" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:48:09.327491 systemd[1]: Started cri-containerd-7eb7800675408a41a913c0b4f7e4feb60cddbbd2a98b85d506973b34c4120f66.scope - libcontainer container 7eb7800675408a41a913c0b4f7e4feb60cddbbd2a98b85d506973b34c4120f66. 
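
The kuberuntime_manager.go record above is the kubelet pushing the node's pod CIDR down to the container runtime over CRI; containerd answers that no CNI config template is set and waits for a network plugin (Calico, which shows up later in this log) to drop one. A minimal sketch of that CRI call, assuming containerd's default socket path (the endpoint is not shown in the log) and the k8s.io/cri-api client bindings; this is an illustration of the RPC, not the kubelet's actual code path:

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumed socket path: containerd's default CRI endpoint.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        // "Updating runtime config through cri with podcidr", as logged above.
        _, err = client.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
            RuntimeConfig: &runtimeapi.RuntimeConfig{
                NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
            },
        })
        if err != nil {
            log.Fatal(err)
        }
    }
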
Aug 13 00:48:09.329287 kubelet[2784]: I0813 00:48:09.329264 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/99e7f05a-3791-4edb-97a0-be7fcc53d6e7-var-lib-calico\") pod \"tigera-operator-747864d56d-jxv9n\" (UID: \"99e7f05a-3791-4edb-97a0-be7fcc53d6e7\") " pod="tigera-operator/tigera-operator-747864d56d-jxv9n" Aug 13 00:48:09.329485 kubelet[2784]: I0813 00:48:09.329290 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58pdn\" (UniqueName: \"kubernetes.io/projected/99e7f05a-3791-4edb-97a0-be7fcc53d6e7-kube-api-access-58pdn\") pod \"tigera-operator-747864d56d-jxv9n\" (UID: \"99e7f05a-3791-4edb-97a0-be7fcc53d6e7\") " pod="tigera-operator/tigera-operator-747864d56d-jxv9n" Aug 13 00:48:09.589975 containerd[1579]: time="2025-08-13T00:48:09.589856647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-jxv9n,Uid:99e7f05a-3791-4edb-97a0-be7fcc53d6e7,Namespace:tigera-operator,Attempt:0,}" Aug 13 00:48:09.670556 containerd[1579]: time="2025-08-13T00:48:09.670513160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2mz42,Uid:061a04fe-0eee-4700-a6c5-7f6450e5c8cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"7eb7800675408a41a913c0b4f7e4feb60cddbbd2a98b85d506973b34c4120f66\"" Aug 13 00:48:09.672146 kubelet[2784]: E0813 00:48:09.672094 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:10.002399 containerd[1579]: time="2025-08-13T00:48:10.001489149Z" level=info msg="CreateContainer within sandbox \"7eb7800675408a41a913c0b4f7e4feb60cddbbd2a98b85d506973b34c4120f66\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:48:10.031355 kubelet[2784]: E0813 00:48:10.031241 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:10.321711 containerd[1579]: time="2025-08-13T00:48:10.321515887Z" level=info msg="Container 8a825c3ae968d73dcab87b3d217443513881b5c9f528586e2fea88aad9a8d719: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:48:10.324802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount205495416.mount: Deactivated successfully. 
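
The recurring dns.go:153 records are the kubelet warning that the host's resolv.conf lists more nameservers than the resolver limit of three; the extra entries are dropped, and the surviving line (1.1.1.1 1.0.0.1 8.8.8.8) is what gets applied. A stdlib-only sketch of that truncation rule, reading the conventional /etc/resolv.conf path (an assumption, since the file is not shown in the log):

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        var nameservers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                nameservers = append(nameservers, fields[1])
            }
        }

        const limit = 3 // classic resolver limit the kubelet enforces
        if len(nameservers) > limit {
            fmt.Printf("Nameserver limits exceeded, omitting: %v\n", nameservers[limit:])
            nameservers = nameservers[:limit]
        }
        fmt.Println("applied nameserver line:", strings.Join(nameservers, " "))
    }

The warning repeats on every DNS resolution setup, which is why the same dns.go:153 line recurs throughout the rest of this log.
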
Aug 13 00:48:10.339100 containerd[1579]: time="2025-08-13T00:48:10.339058234Z" level=info msg="CreateContainer within sandbox \"7eb7800675408a41a913c0b4f7e4feb60cddbbd2a98b85d506973b34c4120f66\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8a825c3ae968d73dcab87b3d217443513881b5c9f528586e2fea88aad9a8d719\"" Aug 13 00:48:10.339948 containerd[1579]: time="2025-08-13T00:48:10.339861968Z" level=info msg="StartContainer for \"8a825c3ae968d73dcab87b3d217443513881b5c9f528586e2fea88aad9a8d719\"" Aug 13 00:48:10.343643 containerd[1579]: time="2025-08-13T00:48:10.343537089Z" level=info msg="connecting to shim 8a825c3ae968d73dcab87b3d217443513881b5c9f528586e2fea88aad9a8d719" address="unix:///run/containerd/s/d48bfa0a46131a70501e4443e848149b9cd8d3a5b5ca1d54b7cb62b1a2b2aa27" protocol=ttrpc version=3 Aug 13 00:48:10.350147 containerd[1579]: time="2025-08-13T00:48:10.350096933Z" level=info msg="connecting to shim b2a428387dacab3b0b94767a955af3d4804fbb96f792a87c2194466125eafb96" address="unix:///run/containerd/s/73e5e0d82fe091e92c9c6ed4f80219e3e6aac2fc54138cc1bdb6077da8a6c221" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:48:10.368493 systemd[1]: Started cri-containerd-8a825c3ae968d73dcab87b3d217443513881b5c9f528586e2fea88aad9a8d719.scope - libcontainer container 8a825c3ae968d73dcab87b3d217443513881b5c9f528586e2fea88aad9a8d719. Aug 13 00:48:10.386958 systemd[1]: Started cri-containerd-b2a428387dacab3b0b94767a955af3d4804fbb96f792a87c2194466125eafb96.scope - libcontainer container b2a428387dacab3b0b94767a955af3d4804fbb96f792a87c2194466125eafb96. Aug 13 00:48:10.422807 containerd[1579]: time="2025-08-13T00:48:10.422687450Z" level=info msg="StartContainer for \"8a825c3ae968d73dcab87b3d217443513881b5c9f528586e2fea88aad9a8d719\" returns successfully" Aug 13 00:48:10.443458 containerd[1579]: time="2025-08-13T00:48:10.443387484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-jxv9n,Uid:99e7f05a-3791-4edb-97a0-be7fcc53d6e7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b2a428387dacab3b0b94767a955af3d4804fbb96f792a87c2194466125eafb96\"" Aug 13 00:48:10.446130 containerd[1579]: time="2025-08-13T00:48:10.446098770Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 00:48:10.837580 kubelet[2784]: E0813 00:48:10.837482 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:10.838720 kubelet[2784]: E0813 00:48:10.838688 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:10.960484 kubelet[2784]: I0813 00:48:10.959287 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2mz42" podStartSLOduration=2.959270072 podStartE2EDuration="2.959270072s" podCreationTimestamp="2025-08-13 00:48:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:48:10.958195318 +0000 UTC m=+8.246448171" watchObservedRunningTime="2025-08-13 00:48:10.959270072 +0000 UTC m=+8.247522925" Aug 13 00:48:11.840570 kubelet[2784]: E0813 00:48:11.840528 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 
00:48:12.075179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2830170163.mount: Deactivated successfully. Aug 13 00:48:12.976980 containerd[1579]: time="2025-08-13T00:48:12.976910854Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:12.977693 containerd[1579]: time="2025-08-13T00:48:12.977645417Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 00:48:12.978859 containerd[1579]: time="2025-08-13T00:48:12.978814587Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:12.981079 containerd[1579]: time="2025-08-13T00:48:12.981000592Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:12.981684 containerd[1579]: time="2025-08-13T00:48:12.981639825Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.535507291s" Aug 13 00:48:12.981684 containerd[1579]: time="2025-08-13T00:48:12.981680461Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 00:48:12.987046 containerd[1579]: time="2025-08-13T00:48:12.987010805Z" level=info msg="CreateContainer within sandbox \"b2a428387dacab3b0b94767a955af3d4804fbb96f792a87c2194466125eafb96\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 00:48:12.993921 containerd[1579]: time="2025-08-13T00:48:12.993862522Z" level=info msg="Container ab53389ee50a43c22521229f4d1aa63713e6cf81c9e5476bf3f11078b635ade2: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:48:13.003524 containerd[1579]: time="2025-08-13T00:48:13.003468571Z" level=info msg="CreateContainer within sandbox \"b2a428387dacab3b0b94767a955af3d4804fbb96f792a87c2194466125eafb96\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ab53389ee50a43c22521229f4d1aa63713e6cf81c9e5476bf3f11078b635ade2\"" Aug 13 00:48:13.004061 containerd[1579]: time="2025-08-13T00:48:13.004023255Z" level=info msg="StartContainer for \"ab53389ee50a43c22521229f4d1aa63713e6cf81c9e5476bf3f11078b635ade2\"" Aug 13 00:48:13.004888 containerd[1579]: time="2025-08-13T00:48:13.004858737Z" level=info msg="connecting to shim ab53389ee50a43c22521229f4d1aa63713e6cf81c9e5476bf3f11078b635ade2" address="unix:///run/containerd/s/73e5e0d82fe091e92c9c6ed4f80219e3e6aac2fc54138cc1bdb6077da8a6c221" protocol=ttrpc version=3 Aug 13 00:48:13.063486 systemd[1]: Started cri-containerd-ab53389ee50a43c22521229f4d1aa63713e6cf81c9e5476bf3f11078b635ade2.scope - libcontainer container ab53389ee50a43c22521229f4d1aa63713e6cf81c9e5476bf3f11078b635ade2. 
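
The pull record above carries three distinct identifiers for the same image: the mutable repo tag (quay.io/tigera/operator:v1.38.3), the immutable repo digest (the @sha256:dbf1bad0... manifest digest), and the image id (sha256:8bde1647..., the config blob), plus a pull wall time of 2.535507291s. A toy sketch that splits a reference string into repository and tag or digest; real clients use the full OCI reference grammar, so this is illustration only:

    package main

    import (
        "fmt"
        "strings"
    )

    // splitRef separates "repo:tag" or "repo@digest" forms. Toy parser:
    // it does not handle every corner of the OCI reference grammar.
    func splitRef(ref string) (repo, version string) {
        if i := strings.Index(ref, "@"); i >= 0 {
            return ref[:i], ref[i+1:] // pinned by manifest digest
        }
        if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
            return ref[:i], ref[i+1:] // mutable tag
        }
        return ref, "latest"
    }

    func main() {
        for _, ref := range []string{
            "quay.io/tigera/operator:v1.38.3",
            "quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121",
        } {
            repo, v := splitRef(ref)
            fmt.Printf("%s -> repo=%s version=%s\n", ref, repo, v)
        }
    }
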
Aug 13 00:48:13.094018 containerd[1579]: time="2025-08-13T00:48:13.093961854Z" level=info msg="StartContainer for \"ab53389ee50a43c22521229f4d1aa63713e6cf81c9e5476bf3f11078b635ade2\" returns successfully" Aug 13 00:48:14.725284 kubelet[2784]: E0813 00:48:14.724441 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:14.737657 kubelet[2784]: I0813 00:48:14.737583 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-jxv9n" podStartSLOduration=4.200109939 podStartE2EDuration="6.737563405s" podCreationTimestamp="2025-08-13 00:48:08 +0000 UTC" firstStartedPulling="2025-08-13 00:48:10.445114326 +0000 UTC m=+7.733367179" lastFinishedPulling="2025-08-13 00:48:12.982567792 +0000 UTC m=+10.270820645" observedRunningTime="2025-08-13 00:48:13.852492629 +0000 UTC m=+11.140745482" watchObservedRunningTime="2025-08-13 00:48:14.737563405 +0000 UTC m=+12.025816258" Aug 13 00:48:14.847229 kubelet[2784]: E0813 00:48:14.847179 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:16.672348 kubelet[2784]: E0813 00:48:16.671587 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:16.849975 kubelet[2784]: E0813 00:48:16.849920 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:18.626391 sudo[1806]: pam_unix(sudo:session): session closed for user root Aug 13 00:48:18.630349 sshd[1805]: Connection closed by 10.0.0.1 port 55046 Aug 13 00:48:18.629126 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Aug 13 00:48:18.635461 systemd[1]: sshd@8-10.0.0.115:22-10.0.0.1:55046.service: Deactivated successfully. Aug 13 00:48:18.639852 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:48:18.641224 systemd[1]: session-9.scope: Consumed 6.652s CPU time, 222.7M memory peak. Aug 13 00:48:18.645167 systemd-logind[1558]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:48:18.647437 systemd-logind[1558]: Removed session 9. Aug 13 00:48:21.865261 systemd[1]: Created slice kubepods-besteffort-podb9b25b2a_af48_4fbd_b04e_0a18a2a7a2d4.slice - libcontainer container kubepods-besteffort-podb9b25b2a_af48_4fbd_b04e_0a18a2a7a2d4.slice. 
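
The pod_startup_latency_tracker record above encodes a useful identity: podStartSLOduration equals podStartE2EDuration minus the image pull window (lastFinishedPulling minus firstStartedPulling), so slow pulls are not charged against the startup SLO. Replaying the tigera-operator numbers from that record with stdlib time parsing reproduces both figures exactly:

    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-08-13 00:48:08 +0000 UTC")
        firstPull := mustParse("2025-08-13 00:48:10.445114326 +0000 UTC")
        lastPull := mustParse("2025-08-13 00:48:12.982567792 +0000 UTC")
        observed := mustParse("2025-08-13 00:48:14.737563405 +0000 UTC")

        e2e := observed.Sub(created)
        slo := e2e - lastPull.Sub(firstPull)
        fmt.Println("podStartE2EDuration:", e2e) // 6.737563405s
        fmt.Println("podStartSLOduration:", slo) // 4.200109939s
    }

For the static pods earlier in the log the two durations were identical, because their images were already present and both pull timestamps were the zero value 0001-01-01 00:00:00.
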
Aug 13 00:48:21.913089 kubelet[2784]: I0813 00:48:21.913041 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9b25b2a-af48-4fbd-b04e-0a18a2a7a2d4-tigera-ca-bundle\") pod \"calico-typha-5794dfdb6-5cvp7\" (UID: \"b9b25b2a-af48-4fbd-b04e-0a18a2a7a2d4\") " pod="calico-system/calico-typha-5794dfdb6-5cvp7" Aug 13 00:48:21.913089 kubelet[2784]: I0813 00:48:21.913091 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b9b25b2a-af48-4fbd-b04e-0a18a2a7a2d4-typha-certs\") pod \"calico-typha-5794dfdb6-5cvp7\" (UID: \"b9b25b2a-af48-4fbd-b04e-0a18a2a7a2d4\") " pod="calico-system/calico-typha-5794dfdb6-5cvp7" Aug 13 00:48:21.913627 kubelet[2784]: I0813 00:48:21.913120 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5q22\" (UniqueName: \"kubernetes.io/projected/b9b25b2a-af48-4fbd-b04e-0a18a2a7a2d4-kube-api-access-g5q22\") pod \"calico-typha-5794dfdb6-5cvp7\" (UID: \"b9b25b2a-af48-4fbd-b04e-0a18a2a7a2d4\") " pod="calico-system/calico-typha-5794dfdb6-5cvp7" Aug 13 00:48:22.052231 systemd[1]: Created slice kubepods-besteffort-podf7dec02a_9f90_43a2_a46d_ffd7ab10f649.slice - libcontainer container kubepods-besteffort-podf7dec02a_9f90_43a2_a46d_ffd7ab10f649.slice. Aug 13 00:48:22.114561 kubelet[2784]: I0813 00:48:22.114500 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f7dec02a-9f90-43a2-a46d-ffd7ab10f649-cni-net-dir\") pod \"calico-node-tx7z2\" (UID: \"f7dec02a-9f90-43a2-a46d-ffd7ab10f649\") " pod="calico-system/calico-node-tx7z2" Aug 13 00:48:22.114561 kubelet[2784]: I0813 00:48:22.114561 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f7dec02a-9f90-43a2-a46d-ffd7ab10f649-flexvol-driver-host\") pod \"calico-node-tx7z2\" (UID: \"f7dec02a-9f90-43a2-a46d-ffd7ab10f649\") " pod="calico-system/calico-node-tx7z2" Aug 13 00:48:22.114750 kubelet[2784]: I0813 00:48:22.114592 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f7dec02a-9f90-43a2-a46d-ffd7ab10f649-policysync\") pod \"calico-node-tx7z2\" (UID: \"f7dec02a-9f90-43a2-a46d-ffd7ab10f649\") " pod="calico-system/calico-node-tx7z2" Aug 13 00:48:22.114750 kubelet[2784]: I0813 00:48:22.114618 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7dec02a-9f90-43a2-a46d-ffd7ab10f649-lib-modules\") pod \"calico-node-tx7z2\" (UID: \"f7dec02a-9f90-43a2-a46d-ffd7ab10f649\") " pod="calico-system/calico-node-tx7z2" Aug 13 00:48:22.114750 kubelet[2784]: I0813 00:48:22.114640 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f7dec02a-9f90-43a2-a46d-ffd7ab10f649-node-certs\") pod \"calico-node-tx7z2\" (UID: \"f7dec02a-9f90-43a2-a46d-ffd7ab10f649\") " pod="calico-system/calico-node-tx7z2" Aug 13 00:48:22.114750 kubelet[2784]: I0813 00:48:22.114660 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/f7dec02a-9f90-43a2-a46d-ffd7ab10f649-var-lib-calico\") pod \"calico-node-tx7z2\" (UID: \"f7dec02a-9f90-43a2-a46d-ffd7ab10f649\") " pod="calico-system/calico-node-tx7z2" Aug 13 00:48:22.114750 kubelet[2784]: I0813 00:48:22.114680 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f7dec02a-9f90-43a2-a46d-ffd7ab10f649-var-run-calico\") pod \"calico-node-tx7z2\" (UID: \"f7dec02a-9f90-43a2-a46d-ffd7ab10f649\") " pod="calico-system/calico-node-tx7z2" Aug 13 00:48:22.114907 kubelet[2784]: I0813 00:48:22.114698 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h9m5\" (UniqueName: \"kubernetes.io/projected/f7dec02a-9f90-43a2-a46d-ffd7ab10f649-kube-api-access-9h9m5\") pod \"calico-node-tx7z2\" (UID: \"f7dec02a-9f90-43a2-a46d-ffd7ab10f649\") " pod="calico-system/calico-node-tx7z2" Aug 13 00:48:22.114907 kubelet[2784]: I0813 00:48:22.114717 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f7dec02a-9f90-43a2-a46d-ffd7ab10f649-cni-bin-dir\") pod \"calico-node-tx7z2\" (UID: \"f7dec02a-9f90-43a2-a46d-ffd7ab10f649\") " pod="calico-system/calico-node-tx7z2" Aug 13 00:48:22.114907 kubelet[2784]: I0813 00:48:22.114734 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7dec02a-9f90-43a2-a46d-ffd7ab10f649-tigera-ca-bundle\") pod \"calico-node-tx7z2\" (UID: \"f7dec02a-9f90-43a2-a46d-ffd7ab10f649\") " pod="calico-system/calico-node-tx7z2" Aug 13 00:48:22.114907 kubelet[2784]: I0813 00:48:22.114761 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f7dec02a-9f90-43a2-a46d-ffd7ab10f649-cni-log-dir\") pod \"calico-node-tx7z2\" (UID: \"f7dec02a-9f90-43a2-a46d-ffd7ab10f649\") " pod="calico-system/calico-node-tx7z2" Aug 13 00:48:22.114907 kubelet[2784]: I0813 00:48:22.114780 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7dec02a-9f90-43a2-a46d-ffd7ab10f649-xtables-lock\") pod \"calico-node-tx7z2\" (UID: \"f7dec02a-9f90-43a2-a46d-ffd7ab10f649\") " pod="calico-system/calico-node-tx7z2" Aug 13 00:48:22.169178 kubelet[2784]: E0813 00:48:22.168780 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-brc88" podUID="f24ce14a-f0f7-482a-85c0-54374c86cafe" Aug 13 00:48:22.171334 kubelet[2784]: E0813 00:48:22.171288 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:22.172204 containerd[1579]: time="2025-08-13T00:48:22.172146616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5794dfdb6-5cvp7,Uid:b9b25b2a-af48-4fbd-b04e-0a18a2a7a2d4,Namespace:calico-system,Attempt:0,}" Aug 13 00:48:22.217345 kubelet[2784]: I0813 00:48:22.216004 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f24ce14a-f0f7-482a-85c0-54374c86cafe-socket-dir\") pod \"csi-node-driver-brc88\" (UID: \"f24ce14a-f0f7-482a-85c0-54374c86cafe\") " pod="calico-system/csi-node-driver-brc88" Aug 13 00:48:22.217345 kubelet[2784]: I0813 00:48:22.216047 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2kwq\" (UniqueName: \"kubernetes.io/projected/f24ce14a-f0f7-482a-85c0-54374c86cafe-kube-api-access-t2kwq\") pod \"csi-node-driver-brc88\" (UID: \"f24ce14a-f0f7-482a-85c0-54374c86cafe\") " pod="calico-system/csi-node-driver-brc88" Aug 13 00:48:22.217345 kubelet[2784]: I0813 00:48:22.216680 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f24ce14a-f0f7-482a-85c0-54374c86cafe-kubelet-dir\") pod \"csi-node-driver-brc88\" (UID: \"f24ce14a-f0f7-482a-85c0-54374c86cafe\") " pod="calico-system/csi-node-driver-brc88" Aug 13 00:48:22.217561 kubelet[2784]: E0813 00:48:22.217439 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.217561 kubelet[2784]: W0813 00:48:22.217457 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.217561 kubelet[2784]: E0813 00:48:22.217510 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.217814 kubelet[2784]: E0813 00:48:22.217782 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.217814 kubelet[2784]: W0813 00:48:22.217807 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.217814 kubelet[2784]: E0813 00:48:22.217817 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.218058 kubelet[2784]: E0813 00:48:22.218034 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.218058 kubelet[2784]: W0813 00:48:22.218051 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.218144 kubelet[2784]: E0813 00:48:22.218063 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:48:22.218577 kubelet[2784]: E0813 00:48:22.218551 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.218577 kubelet[2784]: W0813 00:48:22.218571 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.218730 kubelet[2784]: E0813 00:48:22.218585 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.218931 kubelet[2784]: E0813 00:48:22.218912 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.218991 kubelet[2784]: W0813 00:48:22.218943 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.218991 kubelet[2784]: E0813 00:48:22.218957 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.219517 kubelet[2784]: E0813 00:48:22.219490 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.219517 kubelet[2784]: W0813 00:48:22.219508 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.219813 kubelet[2784]: E0813 00:48:22.219519 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.224403 kubelet[2784]: E0813 00:48:22.223433 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.224403 kubelet[2784]: W0813 00:48:22.224391 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.224538 kubelet[2784]: E0813 00:48:22.224414 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.224661 kubelet[2784]: E0813 00:48:22.224647 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.224661 kubelet[2784]: W0813 00:48:22.224659 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.224731 kubelet[2784]: E0813 00:48:22.224669 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:48:22.224868 kubelet[2784]: E0813 00:48:22.224855 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.224868 kubelet[2784]: W0813 00:48:22.224865 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.224924 kubelet[2784]: E0813 00:48:22.224873 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.225061 kubelet[2784]: E0813 00:48:22.225047 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.225061 kubelet[2784]: W0813 00:48:22.225058 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.225130 kubelet[2784]: E0813 00:48:22.225066 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.225246 kubelet[2784]: E0813 00:48:22.225232 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.225246 kubelet[2784]: W0813 00:48:22.225242 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.225295 kubelet[2784]: E0813 00:48:22.225250 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.225414 kubelet[2784]: E0813 00:48:22.225400 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.225414 kubelet[2784]: W0813 00:48:22.225411 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.225476 kubelet[2784]: E0813 00:48:22.225421 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:48:22.225476 kubelet[2784]: I0813 00:48:22.225457 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f24ce14a-f0f7-482a-85c0-54374c86cafe-registration-dir\") pod \"csi-node-driver-brc88\" (UID: \"f24ce14a-f0f7-482a-85c0-54374c86cafe\") " pod="calico-system/csi-node-driver-brc88" Aug 13 00:48:22.225635 kubelet[2784]: E0813 00:48:22.225621 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.225635 kubelet[2784]: W0813 00:48:22.225632 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.225687 kubelet[2784]: E0813 00:48:22.225640 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.225907 kubelet[2784]: E0813 00:48:22.225892 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.225907 kubelet[2784]: W0813 00:48:22.225903 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.225959 kubelet[2784]: E0813 00:48:22.225913 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.226551 kubelet[2784]: E0813 00:48:22.226535 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.226551 kubelet[2784]: W0813 00:48:22.226547 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.226611 kubelet[2784]: E0813 00:48:22.226557 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.226611 kubelet[2784]: I0813 00:48:22.226575 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f24ce14a-f0f7-482a-85c0-54374c86cafe-varrun\") pod \"csi-node-driver-brc88\" (UID: \"f24ce14a-f0f7-482a-85c0-54374c86cafe\") " pod="calico-system/csi-node-driver-brc88" Aug 13 00:48:22.226804 kubelet[2784]: E0813 00:48:22.226784 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.226839 kubelet[2784]: W0813 00:48:22.226802 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.226839 kubelet[2784]: E0813 00:48:22.226816 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:48:22.227030 kubelet[2784]: E0813 00:48:22.227011 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.227066 kubelet[2784]: W0813 00:48:22.227028 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.227066 kubelet[2784]: E0813 00:48:22.227042 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.227282 kubelet[2784]: E0813 00:48:22.227265 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.227282 kubelet[2784]: W0813 00:48:22.227280 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.227367 kubelet[2784]: E0813 00:48:22.227292 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.227516 kubelet[2784]: E0813 00:48:22.227497 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.227516 kubelet[2784]: W0813 00:48:22.227509 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.227579 kubelet[2784]: E0813 00:48:22.227520 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.227718 kubelet[2784]: E0813 00:48:22.227702 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.227747 kubelet[2784]: W0813 00:48:22.227717 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.227747 kubelet[2784]: E0813 00:48:22.227729 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.228003 kubelet[2784]: E0813 00:48:22.227985 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.228003 kubelet[2784]: W0813 00:48:22.228001 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.228069 kubelet[2784]: E0813 00:48:22.228013 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:48:22.228225 kubelet[2784]: E0813 00:48:22.228207 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.228376 kubelet[2784]: W0813 00:48:22.228267 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.228376 kubelet[2784]: E0813 00:48:22.228282 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.229285 kubelet[2784]: E0813 00:48:22.229266 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.229285 kubelet[2784]: W0813 00:48:22.229280 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.229388 kubelet[2784]: E0813 00:48:22.229290 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.229623 kubelet[2784]: E0813 00:48:22.229479 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.229623 kubelet[2784]: W0813 00:48:22.229492 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.229623 kubelet[2784]: E0813 00:48:22.229515 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.229815 kubelet[2784]: E0813 00:48:22.229767 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.229815 kubelet[2784]: W0813 00:48:22.229786 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.229815 kubelet[2784]: E0813 00:48:22.229798 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:48:22.230260 containerd[1579]: time="2025-08-13T00:48:22.230151948Z" level=info msg="connecting to shim a7401515921c15712d978ac375177fc5aac8c14326877272ec469fcaa3f8be18" address="unix:///run/containerd/s/c41409a6b611c5408d94bc76fa0ca0e07d3928359586b3779a3dd2116cc581ee" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:48:22.230686 kubelet[2784]: E0813 00:48:22.230668 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.230686 kubelet[2784]: W0813 00:48:22.230684 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.230779 kubelet[2784]: E0813 00:48:22.230697 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.231437 kubelet[2784]: E0813 00:48:22.231419 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.231437 kubelet[2784]: W0813 00:48:22.231434 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.231527 kubelet[2784]: E0813 00:48:22.231446 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.231684 kubelet[2784]: E0813 00:48:22.231669 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.231684 kubelet[2784]: W0813 00:48:22.231684 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.231840 kubelet[2784]: E0813 00:48:22.231695 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.236711 kubelet[2784]: E0813 00:48:22.236688 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.236711 kubelet[2784]: W0813 00:48:22.236706 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.236808 kubelet[2784]: E0813 00:48:22.236722 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.267608 systemd[1]: Started cri-containerd-a7401515921c15712d978ac375177fc5aac8c14326877272ec469fcaa3f8be18.scope - libcontainer container a7401515921c15712d978ac375177fc5aac8c14326877272ec469fcaa3f8be18. 
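
The flood of driver-call.go and plugins.go errors above is the kubelet's FlexVolume prober walking /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finding the nodeagent~uds directory that Calico uses, and failing to exec the uds binary ("executable file not found in $PATH"); calico-node's flexvol-driver-host mount, listed earlier, is where its init container later installs that binary. With nothing executed, stdout is empty and JSON decoding fails, hence "unexpected end of JSON input". A FlexVolume driver is just an executable that answers verbs such as init with a JSON status on stdout; a minimal stub sketch of that contract (an illustration, not Calico's actual driver):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // DriverStatus mirrors the JSON shape the kubelet expects back from a
    // FlexVolume driver invocation.
    type DriverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func reply(s DriverStatus) {
        out, _ := json.Marshal(s)
        fmt.Println(string(out))
    }

    func main() {
        if len(os.Args) < 2 {
            reply(DriverStatus{Status: "Failure", Message: "no command"})
            os.Exit(1)
        }
        switch os.Args[1] {
        case "init":
            // Valid JSON on "init" is all the prober needs; an empty reply is
            // what produces the "unexpected end of JSON input" errors above.
            reply(DriverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
        default:
            reply(DriverStatus{Status: "Not supported"})
            os.Exit(1)
        }
    }

Once calico-node runs and installs the real uds driver under nodeagent~uds, the probe should start succeeding and this repeated noise should stop.
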
Aug 13 00:48:22.327338 kubelet[2784]: E0813 00:48:22.327251 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.327338 kubelet[2784]: W0813 00:48:22.327280 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.327338 kubelet[2784]: E0813 00:48:22.327306 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.327561 kubelet[2784]: E0813 00:48:22.327531 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.327561 kubelet[2784]: W0813 00:48:22.327542 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.327561 kubelet[2784]: E0813 00:48:22.327553 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.327794 kubelet[2784]: E0813 00:48:22.327776 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.327794 kubelet[2784]: W0813 00:48:22.327788 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.327794 kubelet[2784]: E0813 00:48:22.327798 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.327989 kubelet[2784]: E0813 00:48:22.327961 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.327989 kubelet[2784]: W0813 00:48:22.327980 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.327989 kubelet[2784]: E0813 00:48:22.327989 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.328212 kubelet[2784]: E0813 00:48:22.328170 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.328212 kubelet[2784]: W0813 00:48:22.328196 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.328212 kubelet[2784]: E0813 00:48:22.328209 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:48:22.328498 kubelet[2784]: E0813 00:48:22.328480 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.328498 kubelet[2784]: W0813 00:48:22.328491 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.328498 kubelet[2784]: E0813 00:48:22.328499 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the driver-call.go:262 / driver-call.go:149 / plugins.go:703 triple above repeats 18 more times between 00:48:22.328681 and 00:48:22.332269 with only timestamps changing; identical entries elided]
Aug 13 00:48:22.332508 kubelet[2784]: E0813 00:48:22.332493 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.332508 kubelet[2784]: W0813 00:48:22.332503 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.332555 kubelet[2784]: E0813 00:48:22.332512 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:48:22.353114 containerd[1579]: time="2025-08-13T00:48:22.352996560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5794dfdb6-5cvp7,Uid:b9b25b2a-af48-4fbd-b04e-0a18a2a7a2d4,Namespace:calico-system,Attempt:0,} returns sandbox id \"a7401515921c15712d978ac375177fc5aac8c14326877272ec469fcaa3f8be18\"" Aug 13 00:48:22.354203 kubelet[2784]: E0813 00:48:22.354159 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:22.355723 containerd[1579]: time="2025-08-13T00:48:22.355688888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 00:48:22.357599 containerd[1579]: time="2025-08-13T00:48:22.357546788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tx7z2,Uid:f7dec02a-9f90-43a2-a46d-ffd7ab10f649,Namespace:calico-system,Attempt:0,}" Aug 13 00:48:22.360146 kubelet[2784]: E0813 00:48:22.360107 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:22.360146 kubelet[2784]: W0813 00:48:22.360132 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:22.360146 kubelet[2784]: E0813 00:48:22.360154 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:48:22.398618 containerd[1579]: time="2025-08-13T00:48:22.398511243Z" level=info msg="connecting to shim 9fb5dc79c35e4467fef3f798ccec8a499ad5dabcb53c45c91754e5442a9c4a8e" address="unix:///run/containerd/s/7c640c90355fed9fe199f8da5ff4f9b73c4233a42b59e34fb38fa6161d05eb57" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:48:22.424569 systemd[1]: Started cri-containerd-9fb5dc79c35e4467fef3f798ccec8a499ad5dabcb53c45c91754e5442a9c4a8e.scope - libcontainer container 9fb5dc79c35e4467fef3f798ccec8a499ad5dabcb53c45c91754e5442a9c4a8e. Aug 13 00:48:22.457374 containerd[1579]: time="2025-08-13T00:48:22.457293674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tx7z2,Uid:f7dec02a-9f90-43a2-a46d-ffd7ab10f649,Namespace:calico-system,Attempt:0,} returns sandbox id \"9fb5dc79c35e4467fef3f798ccec8a499ad5dabcb53c45c91754e5442a9c4a8e\"" Aug 13 00:48:23.810863 kubelet[2784]: E0813 00:48:23.810799 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-brc88" podUID="f24ce14a-f0f7-482a-85c0-54374c86cafe" Aug 13 00:48:24.758396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1774849959.mount: Deactivated successfully. 
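The repeating driver-call.go/plugins.go triple above is the kubelet's FlexVolume probe: each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ is expected to contain a driver executable that answers an init call with a JSON status object on stdout. The nodeagent~uds/uds binary (Calico's pod2daemon driver) has not been installed yet, so the exec fails, stdout stays empty, and decoding the empty string yields exactly the "unexpected end of JSON input" error. A minimal sketch of the documented driver call convention, in Go; illustrative only, not the real uds driver:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // driverStatus mirrors the JSON reply the kubelet unmarshals after each
    // FlexVolume driver call; an empty stdout instead of this object produces
    // the "unexpected end of JSON input" errors in the log above.
    type driverStatus struct {
    	Status       string          `json:"status"`
    	Message      string          `json:"message,omitempty"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
    	if len(os.Args) < 2 {
    		os.Exit(1)
    	}
    	switch os.Args[1] {
    	case "init":
    		// Probe call: report success and advertise no attach support.
    		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
    	default:
    		// Unimplemented operations must still answer with valid JSON.
    		reply(driverStatus{Status: "Not supported"})
    	}
    }

    func reply(s driverStatus) {
    	out, _ := json.Marshal(s)
    	fmt.Println(string(out))
    }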
Aug 13 00:48:25.124841 containerd[1579]: time="2025-08-13T00:48:25.124697223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:25.125969 containerd[1579]: time="2025-08-13T00:48:25.125901605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Aug 13 00:48:25.127226 containerd[1579]: time="2025-08-13T00:48:25.127180708Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:25.129255 containerd[1579]: time="2025-08-13T00:48:25.129228114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:25.129800 containerd[1579]: time="2025-08-13T00:48:25.129762106Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.774042551s" Aug 13 00:48:25.129846 containerd[1579]: time="2025-08-13T00:48:25.129802712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 00:48:25.131036 containerd[1579]: time="2025-08-13T00:48:25.130882992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 00:48:25.146023 containerd[1579]: time="2025-08-13T00:48:25.145912338Z" level=info msg="CreateContainer within sandbox \"a7401515921c15712d978ac375177fc5aac8c14326877272ec469fcaa3f8be18\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 00:48:25.153379 containerd[1579]: time="2025-08-13T00:48:25.153303842Z" level=info msg="Container 7f7b762f4fc7d17a66123bb78106d681ad8653f7235f7766310de54af67a3d51: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:48:25.161299 containerd[1579]: time="2025-08-13T00:48:25.161252089Z" level=info msg="CreateContainer within sandbox \"a7401515921c15712d978ac375177fc5aac8c14326877272ec469fcaa3f8be18\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7f7b762f4fc7d17a66123bb78106d681ad8653f7235f7766310de54af67a3d51\"" Aug 13 00:48:25.161915 containerd[1579]: time="2025-08-13T00:48:25.161851053Z" level=info msg="StartContainer for \"7f7b762f4fc7d17a66123bb78106d681ad8653f7235f7766310de54af67a3d51\"" Aug 13 00:48:25.163081 containerd[1579]: time="2025-08-13T00:48:25.163004991Z" level=info msg="connecting to shim 7f7b762f4fc7d17a66123bb78106d681ad8653f7235f7766310de54af67a3d51" address="unix:///run/containerd/s/c41409a6b611c5408d94bc76fa0ca0e07d3928359586b3779a3dd2116cc581ee" protocol=ttrpc version=3 Aug 13 00:48:25.185461 systemd[1]: Started cri-containerd-7f7b762f4fc7d17a66123bb78106d681ad8653f7235f7766310de54af67a3d51.scope - libcontainer container 7f7b762f4fc7d17a66123bb78106d681ad8653f7235f7766310de54af67a3d51. 
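The "in 2.774042551s" reported for the typha pull above is wall-clock time: containerd logged the PullImage request at 00:48:22.355688888 and the completion at 00:48:25.129762106, and the subtraction checks out (the remaining few tens of microseconds are time spent between measuring and logging). It is also consistent with the kubelet's pod startup line just below, where podStartSLOduration (2.11964779s) equals podStartE2EDuration (4.894833989s) minus the firstStartedPulling-to-lastFinishedPulling window (2.775186199s). A quick Go check of the containerd arithmetic:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Timestamps copied from the two containerd messages above.
    	start, _ := time.Parse(time.RFC3339Nano, "2025-08-13T00:48:22.355688888Z")
    	done, _ := time.Parse(time.RFC3339Nano, "2025-08-13T00:48:25.129762106Z")
    	// Prints 2.774073218s, matching the reported 2.774042551s to within ~31µs.
    	fmt.Println(done.Sub(start))
    }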
Aug 13 00:48:25.425083 containerd[1579]: time="2025-08-13T00:48:25.424906018Z" level=info msg="StartContainer for \"7f7b762f4fc7d17a66123bb78106d681ad8653f7235f7766310de54af67a3d51\" returns successfully" Aug 13 00:48:25.810691 kubelet[2784]: E0813 00:48:25.810609 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-brc88" podUID="f24ce14a-f0f7-482a-85c0-54374c86cafe" Aug 13 00:48:25.873004 kubelet[2784]: E0813 00:48:25.872934 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:25.894939 kubelet[2784]: I0813 00:48:25.894856 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5794dfdb6-5cvp7" podStartSLOduration=2.11964779 podStartE2EDuration="4.894833989s" podCreationTimestamp="2025-08-13 00:48:21 +0000 UTC" firstStartedPulling="2025-08-13 00:48:22.355460118 +0000 UTC m=+19.643712971" lastFinishedPulling="2025-08-13 00:48:25.130646317 +0000 UTC m=+22.418899170" observedRunningTime="2025-08-13 00:48:25.885279484 +0000 UTC m=+23.173532357" watchObservedRunningTime="2025-08-13 00:48:25.894833989 +0000 UTC m=+23.183086842" Aug 13 00:48:25.914706 kubelet[2784]: E0813 00:48:25.914444 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:25.914706 kubelet[2784]: W0813 00:48:25.914479 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:25.914706 kubelet[2784]: E0813 00:48:25.914509 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same driver-call.go:262 / driver-call.go:149 / plugins.go:703 triple repeats 31 more times between 00:48:25.915080 and 00:48:25.956680 with only timestamps changing; identical entries elided]
Aug 13 00:48:25.956908 kubelet[2784]: E0813 00:48:25.956882 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:48:25.956908 kubelet[2784]: W0813 00:48:25.956894 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:48:25.956908 kubelet[2784]: E0813 00:48:25.956903 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:48:26.596837 containerd[1579]: time="2025-08-13T00:48:26.596774536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:26.597583 containerd[1579]: time="2025-08-13T00:48:26.597558569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 13 00:48:26.599066 containerd[1579]: time="2025-08-13T00:48:26.599006398Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:26.601612 containerd[1579]: time="2025-08-13T00:48:26.601570023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:26.602471 containerd[1579]: time="2025-08-13T00:48:26.602421822Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.47150655s" Aug 13 00:48:26.602471 containerd[1579]: time="2025-08-13T00:48:26.602465925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 00:48:26.607755 containerd[1579]: time="2025-08-13T00:48:26.607714492Z" level=info msg="CreateContainer within sandbox \"9fb5dc79c35e4467fef3f798ccec8a499ad5dabcb53c45c91754e5442a9c4a8e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 00:48:26.618349 containerd[1579]: time="2025-08-13T00:48:26.618289902Z" level=info msg="Container ebb36dc8bc7b79f02c325ff25c3c87be97412ff414463e48f6c1688e085cedf4: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:48:26.628730 containerd[1579]: time="2025-08-13T00:48:26.628674353Z" level=info msg="CreateContainer within sandbox \"9fb5dc79c35e4467fef3f798ccec8a499ad5dabcb53c45c91754e5442a9c4a8e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ebb36dc8bc7b79f02c325ff25c3c87be97412ff414463e48f6c1688e085cedf4\"" Aug 13 00:48:26.629301 containerd[1579]: time="2025-08-13T00:48:26.629272586Z" level=info msg="StartContainer for \"ebb36dc8bc7b79f02c325ff25c3c87be97412ff414463e48f6c1688e085cedf4\"" Aug 13 00:48:26.630997 containerd[1579]: time="2025-08-13T00:48:26.630964383Z" level=info msg="connecting to shim ebb36dc8bc7b79f02c325ff25c3c87be97412ff414463e48f6c1688e085cedf4" address="unix:///run/containerd/s/7c640c90355fed9fe199f8da5ff4f9b73c4233a42b59e34fb38fa6161d05eb57" protocol=ttrpc version=3 Aug 13 00:48:26.656560 systemd[1]: Started cri-containerd-ebb36dc8bc7b79f02c325ff25c3c87be97412ff414463e48f6c1688e085cedf4.scope - libcontainer container ebb36dc8bc7b79f02c325ff25c3c87be97412ff414463e48f6c1688e085cedf4. 
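The ghcr.io/flatcar/calico/pod2daemon-flexvol image pulled above backs the flexvol-driver container just created: in a Calico deployment its job is to install the uds FlexVolume driver binary onto the host, i.e. the very executable the kubelet's probe errors earlier in this log could not find under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/. Consistent with that, no further driver-call.go failures appear in the log after this container runs below. A hypothetical spot-check of the driver path taken from those error messages:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Path copied from the kubelet's FlexVolume errors earlier in this log.
    	const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
    	info, err := os.Stat(driver)
    	if err != nil {
    		// The pre-flexvol-driver state seen in the errors above.
    		fmt.Println("driver still missing:", err)
    		return
    	}
    	fmt.Printf("driver installed: %s (mode %v)\n", driver, info.Mode())
    }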
Aug 13 00:48:26.715043 containerd[1579]: time="2025-08-13T00:48:26.714996030Z" level=info msg="StartContainer for \"ebb36dc8bc7b79f02c325ff25c3c87be97412ff414463e48f6c1688e085cedf4\" returns successfully" Aug 13 00:48:26.727413 systemd[1]: cri-containerd-ebb36dc8bc7b79f02c325ff25c3c87be97412ff414463e48f6c1688e085cedf4.scope: Deactivated successfully. Aug 13 00:48:26.727802 systemd[1]: cri-containerd-ebb36dc8bc7b79f02c325ff25c3c87be97412ff414463e48f6c1688e085cedf4.scope: Consumed 43ms CPU time, 6.3M memory peak, 4.2M written to disk. Aug 13 00:48:26.729819 containerd[1579]: time="2025-08-13T00:48:26.729775075Z" level=info msg="received exit event container_id:\"ebb36dc8bc7b79f02c325ff25c3c87be97412ff414463e48f6c1688e085cedf4\" id:\"ebb36dc8bc7b79f02c325ff25c3c87be97412ff414463e48f6c1688e085cedf4\" pid:3455 exited_at:{seconds:1755046106 nanos:729205835}" Aug 13 00:48:26.730149 containerd[1579]: time="2025-08-13T00:48:26.729910058Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebb36dc8bc7b79f02c325ff25c3c87be97412ff414463e48f6c1688e085cedf4\" id:\"ebb36dc8bc7b79f02c325ff25c3c87be97412ff414463e48f6c1688e085cedf4\" pid:3455 exited_at:{seconds:1755046106 nanos:729205835}" Aug 13 00:48:26.758299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebb36dc8bc7b79f02c325ff25c3c87be97412ff414463e48f6c1688e085cedf4-rootfs.mount: Deactivated successfully. Aug 13 00:48:26.876657 kubelet[2784]: E0813 00:48:26.876480 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:27.810672 kubelet[2784]: E0813 00:48:27.810578 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-brc88" podUID="f24ce14a-f0f7-482a-85c0-54374c86cafe" Aug 13 00:48:27.880182 kubelet[2784]: E0813 00:48:27.880137 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:27.880918 containerd[1579]: time="2025-08-13T00:48:27.880873953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 00:48:29.810908 kubelet[2784]: E0813 00:48:29.810823 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-brc88" podUID="f24ce14a-f0f7-482a-85c0-54374c86cafe" Aug 13 00:48:30.672309 containerd[1579]: time="2025-08-13T00:48:30.672236842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:30.674053 containerd[1579]: time="2025-08-13T00:48:30.673979895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 00:48:30.675887 containerd[1579]: time="2025-08-13T00:48:30.675842972Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:30.678208 containerd[1579]: time="2025-08-13T00:48:30.678162195Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:30.678863 containerd[1579]: time="2025-08-13T00:48:30.678827084Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 2.797912075s" Aug 13 00:48:30.678863 containerd[1579]: time="2025-08-13T00:48:30.678858182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 00:48:30.685153 containerd[1579]: time="2025-08-13T00:48:30.685084340Z" level=info msg="CreateContainer within sandbox \"9fb5dc79c35e4467fef3f798ccec8a499ad5dabcb53c45c91754e5442a9c4a8e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 00:48:30.694762 containerd[1579]: time="2025-08-13T00:48:30.694700242Z" level=info msg="Container 2523163bd824f026d2d610a4db25ff24463b40195aa672abf1f4624cfad1d44c: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:48:30.705504 containerd[1579]: time="2025-08-13T00:48:30.705443030Z" level=info msg="CreateContainer within sandbox \"9fb5dc79c35e4467fef3f798ccec8a499ad5dabcb53c45c91754e5442a9c4a8e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2523163bd824f026d2d610a4db25ff24463b40195aa672abf1f4624cfad1d44c\"" Aug 13 00:48:30.706171 containerd[1579]: time="2025-08-13T00:48:30.706135289Z" level=info msg="StartContainer for \"2523163bd824f026d2d610a4db25ff24463b40195aa672abf1f4624cfad1d44c\"" Aug 13 00:48:30.707631 containerd[1579]: time="2025-08-13T00:48:30.707595811Z" level=info msg="connecting to shim 2523163bd824f026d2d610a4db25ff24463b40195aa672abf1f4624cfad1d44c" address="unix:///run/containerd/s/7c640c90355fed9fe199f8da5ff4f9b73c4233a42b59e34fb38fa6161d05eb57" protocol=ttrpc version=3 Aug 13 00:48:30.732508 systemd[1]: Started cri-containerd-2523163bd824f026d2d610a4db25ff24463b40195aa672abf1f4624cfad1d44c.scope - libcontainer container 2523163bd824f026d2d610a4db25ff24463b40195aa672abf1f4624cfad1d44c. Aug 13 00:48:30.781247 containerd[1579]: time="2025-08-13T00:48:30.781183675Z" level=info msg="StartContainer for \"2523163bd824f026d2d610a4db25ff24463b40195aa672abf1f4624cfad1d44c\" returns successfully" Aug 13 00:48:31.811243 kubelet[2784]: E0813 00:48:31.811158 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-brc88" podUID="f24ce14a-f0f7-482a-85c0-54374c86cafe" Aug 13 00:48:32.716512 systemd[1]: cri-containerd-2523163bd824f026d2d610a4db25ff24463b40195aa672abf1f4624cfad1d44c.scope: Deactivated successfully. Aug 13 00:48:32.716866 systemd[1]: cri-containerd-2523163bd824f026d2d610a4db25ff24463b40195aa672abf1f4624cfad1d44c.scope: Consumed 591ms CPU time, 178.6M memory peak, 4M read from disk, 171.2M written to disk. 
Aug 13 00:48:32.717699 containerd[1579]: time="2025-08-13T00:48:32.717615771Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2523163bd824f026d2d610a4db25ff24463b40195aa672abf1f4624cfad1d44c\" id:\"2523163bd824f026d2d610a4db25ff24463b40195aa672abf1f4624cfad1d44c\" pid:3514 exited_at:{seconds:1755046112 nanos:717225018}" Aug 13 00:48:32.717699 containerd[1579]: time="2025-08-13T00:48:32.717638684Z" level=info msg="received exit event container_id:\"2523163bd824f026d2d610a4db25ff24463b40195aa672abf1f4624cfad1d44c\" id:\"2523163bd824f026d2d610a4db25ff24463b40195aa672abf1f4624cfad1d44c\" pid:3514 exited_at:{seconds:1755046112 nanos:717225018}" Aug 13 00:48:32.742271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2523163bd824f026d2d610a4db25ff24463b40195aa672abf1f4624cfad1d44c-rootfs.mount: Deactivated successfully. Aug 13 00:48:32.762197 kubelet[2784]: I0813 00:48:32.762123 2784 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 00:48:33.665465 systemd[1]: Created slice kubepods-burstable-podf938cdc6_5bc4_4598_81a5_977a60182bc5.slice - libcontainer container kubepods-burstable-podf938cdc6_5bc4_4598_81a5_977a60182bc5.slice. Aug 13 00:48:33.674273 systemd[1]: Created slice kubepods-burstable-podaac7b78c_6f96_4a82_a13f_2a2f78994458.slice - libcontainer container kubepods-burstable-podaac7b78c_6f96_4a82_a13f_2a2f78994458.slice. Aug 13 00:48:33.688829 systemd[1]: Created slice kubepods-besteffort-podf1de7ba4_0c6c_47ea_b4ec_b558b4aa3dfa.slice - libcontainer container kubepods-besteffort-podf1de7ba4_0c6c_47ea_b4ec_b558b4aa3dfa.slice. Aug 13 00:48:33.696277 systemd[1]: Created slice kubepods-besteffort-pode07603a2_f1fc_4a47_8272_69e765b2006f.slice - libcontainer container kubepods-besteffort-pode07603a2_f1fc_4a47_8272_69e765b2006f.slice. Aug 13 00:48:33.703982 systemd[1]: Created slice kubepods-besteffort-pod21718867_e82f_4101_96f7_927efea081bd.slice - libcontainer container kubepods-besteffort-pod21718867_e82f_4101_96f7_927efea081bd.slice. Aug 13 00:48:33.714985 systemd[1]: Created slice kubepods-besteffort-podb945f367_f37a_44ef_8f01_cfbf0e613602.slice - libcontainer container kubepods-besteffort-podb945f367_f37a_44ef_8f01_cfbf0e613602.slice. Aug 13 00:48:33.726236 systemd[1]: Created slice kubepods-besteffort-pod9c143e7a_a5e8_41db_8501_2d62ae8b235b.slice - libcontainer container kubepods-besteffort-pod9c143e7a_a5e8_41db_8501_2d62ae8b235b.slice. 
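The slice names systemd just created encode each pod's QoS class and UID using the systemd cgroup driver's escaping, in which the dashes of the UID become underscores: the coredns pod with UID f938cdc6-5bc4-4598-81a5-977a60182bc5 (see the volume messages that follow) lands in kubepods-burstable-podf938cdc6_5bc4_4598_81a5_977a60182bc5.slice. A small sketch of that mapping:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // podSlice reproduces the naming visible in the log: QoS class plus the
    // pod UID with '-' escaped to '_' to satisfy systemd unit-name rules.
    func podSlice(qosClass, uid string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
    	fmt.Println(podSlice("burstable", "f938cdc6-5bc4-4598-81a5-977a60182bc5"))
    	// Output: kubepods-burstable-podf938cdc6_5bc4_4598_81a5_977a60182bc5.slice
    }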
Aug 13 00:48:33.748605 kubelet[2784]: I0813 00:48:33.748551 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gkxm\" (UniqueName: \"kubernetes.io/projected/f1de7ba4-0c6c-47ea-b4ec-b558b4aa3dfa-kube-api-access-8gkxm\") pod \"calico-apiserver-794868555d-mqg9c\" (UID: \"f1de7ba4-0c6c-47ea-b4ec-b558b4aa3dfa\") " pod="calico-apiserver/calico-apiserver-794868555d-mqg9c" Aug 13 00:48:33.748605 kubelet[2784]: I0813 00:48:33.748594 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hpzr\" (UniqueName: \"kubernetes.io/projected/e07603a2-f1fc-4a47-8272-69e765b2006f-kube-api-access-4hpzr\") pod \"goldmane-768f4c5c69-7vwqp\" (UID: \"e07603a2-f1fc-4a47-8272-69e765b2006f\") " pod="calico-system/goldmane-768f4c5c69-7vwqp" Aug 13 00:48:33.748605 kubelet[2784]: I0813 00:48:33.748611 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/21718867-e82f-4101-96f7-927efea081bd-whisker-backend-key-pair\") pod \"whisker-864f7c9fdc-bx7pf\" (UID: \"21718867-e82f-4101-96f7-927efea081bd\") " pod="calico-system/whisker-864f7c9fdc-bx7pf" Aug 13 00:48:33.749101 kubelet[2784]: I0813 00:48:33.748626 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t89ch\" (UniqueName: \"kubernetes.io/projected/21718867-e82f-4101-96f7-927efea081bd-kube-api-access-t89ch\") pod \"whisker-864f7c9fdc-bx7pf\" (UID: \"21718867-e82f-4101-96f7-927efea081bd\") " pod="calico-system/whisker-864f7c9fdc-bx7pf" Aug 13 00:48:33.749101 kubelet[2784]: I0813 00:48:33.748640 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2fzc\" (UniqueName: \"kubernetes.io/projected/aac7b78c-6f96-4a82-a13f-2a2f78994458-kube-api-access-l2fzc\") pod \"coredns-674b8bbfcf-fc6ds\" (UID: \"aac7b78c-6f96-4a82-a13f-2a2f78994458\") " pod="kube-system/coredns-674b8bbfcf-fc6ds" Aug 13 00:48:33.749101 kubelet[2784]: I0813 00:48:33.748656 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b945f367-f37a-44ef-8f01-cfbf0e613602-calico-apiserver-certs\") pod \"calico-apiserver-794868555d-t446f\" (UID: \"b945f367-f37a-44ef-8f01-cfbf0e613602\") " pod="calico-apiserver/calico-apiserver-794868555d-t446f" Aug 13 00:48:33.749101 kubelet[2784]: I0813 00:48:33.748670 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z76vc\" (UniqueName: \"kubernetes.io/projected/f938cdc6-5bc4-4598-81a5-977a60182bc5-kube-api-access-z76vc\") pod \"coredns-674b8bbfcf-ddq4s\" (UID: \"f938cdc6-5bc4-4598-81a5-977a60182bc5\") " pod="kube-system/coredns-674b8bbfcf-ddq4s" Aug 13 00:48:33.749101 kubelet[2784]: I0813 00:48:33.748686 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e07603a2-f1fc-4a47-8272-69e765b2006f-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-7vwqp\" (UID: \"e07603a2-f1fc-4a47-8272-69e765b2006f\") " pod="calico-system/goldmane-768f4c5c69-7vwqp" Aug 13 00:48:33.749221 kubelet[2784]: I0813 00:48:33.748706 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/f938cdc6-5bc4-4598-81a5-977a60182bc5-config-volume\") pod \"coredns-674b8bbfcf-ddq4s\" (UID: \"f938cdc6-5bc4-4598-81a5-977a60182bc5\") " pod="kube-system/coredns-674b8bbfcf-ddq4s" Aug 13 00:48:33.749221 kubelet[2784]: I0813 00:48:33.748720 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e07603a2-f1fc-4a47-8272-69e765b2006f-config\") pod \"goldmane-768f4c5c69-7vwqp\" (UID: \"e07603a2-f1fc-4a47-8272-69e765b2006f\") " pod="calico-system/goldmane-768f4c5c69-7vwqp" Aug 13 00:48:33.749221 kubelet[2784]: I0813 00:48:33.748735 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e07603a2-f1fc-4a47-8272-69e765b2006f-goldmane-key-pair\") pod \"goldmane-768f4c5c69-7vwqp\" (UID: \"e07603a2-f1fc-4a47-8272-69e765b2006f\") " pod="calico-system/goldmane-768f4c5c69-7vwqp" Aug 13 00:48:33.749221 kubelet[2784]: I0813 00:48:33.748749 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7mqm\" (UniqueName: \"kubernetes.io/projected/b945f367-f37a-44ef-8f01-cfbf0e613602-kube-api-access-t7mqm\") pod \"calico-apiserver-794868555d-t446f\" (UID: \"b945f367-f37a-44ef-8f01-cfbf0e613602\") " pod="calico-apiserver/calico-apiserver-794868555d-t446f" Aug 13 00:48:33.749221 kubelet[2784]: I0813 00:48:33.748771 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c143e7a-a5e8-41db-8501-2d62ae8b235b-tigera-ca-bundle\") pod \"calico-kube-controllers-6bf97cf4d8-hhpm6\" (UID: \"9c143e7a-a5e8-41db-8501-2d62ae8b235b\") " pod="calico-system/calico-kube-controllers-6bf97cf4d8-hhpm6" Aug 13 00:48:33.749372 kubelet[2784]: I0813 00:48:33.748790 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n45nh\" (UniqueName: \"kubernetes.io/projected/9c143e7a-a5e8-41db-8501-2d62ae8b235b-kube-api-access-n45nh\") pod \"calico-kube-controllers-6bf97cf4d8-hhpm6\" (UID: \"9c143e7a-a5e8-41db-8501-2d62ae8b235b\") " pod="calico-system/calico-kube-controllers-6bf97cf4d8-hhpm6" Aug 13 00:48:33.749372 kubelet[2784]: I0813 00:48:33.748808 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aac7b78c-6f96-4a82-a13f-2a2f78994458-config-volume\") pod \"coredns-674b8bbfcf-fc6ds\" (UID: \"aac7b78c-6f96-4a82-a13f-2a2f78994458\") " pod="kube-system/coredns-674b8bbfcf-fc6ds" Aug 13 00:48:33.749372 kubelet[2784]: I0813 00:48:33.748825 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21718867-e82f-4101-96f7-927efea081bd-whisker-ca-bundle\") pod \"whisker-864f7c9fdc-bx7pf\" (UID: \"21718867-e82f-4101-96f7-927efea081bd\") " pod="calico-system/whisker-864f7c9fdc-bx7pf" Aug 13 00:48:33.749372 kubelet[2784]: I0813 00:48:33.748841 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f1de7ba4-0c6c-47ea-b4ec-b558b4aa3dfa-calico-apiserver-certs\") pod \"calico-apiserver-794868555d-mqg9c\" (UID: \"f1de7ba4-0c6c-47ea-b4ec-b558b4aa3dfa\") " 
pod="calico-apiserver/calico-apiserver-794868555d-mqg9c" Aug 13 00:48:33.817799 systemd[1]: Created slice kubepods-besteffort-podf24ce14a_f0f7_482a_85c0_54374c86cafe.slice - libcontainer container kubepods-besteffort-podf24ce14a_f0f7_482a_85c0_54374c86cafe.slice. Aug 13 00:48:33.820724 containerd[1579]: time="2025-08-13T00:48:33.820674420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-brc88,Uid:f24ce14a-f0f7-482a-85c0-54374c86cafe,Namespace:calico-system,Attempt:0,}" Aug 13 00:48:33.901805 containerd[1579]: time="2025-08-13T00:48:33.901755759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 00:48:33.969805 kubelet[2784]: E0813 00:48:33.969769 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:33.970439 containerd[1579]: time="2025-08-13T00:48:33.970395137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ddq4s,Uid:f938cdc6-5bc4-4598-81a5-977a60182bc5,Namespace:kube-system,Attempt:0,}" Aug 13 00:48:33.978670 kubelet[2784]: E0813 00:48:33.978642 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:33.979070 containerd[1579]: time="2025-08-13T00:48:33.978971023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fc6ds,Uid:aac7b78c-6f96-4a82-a13f-2a2f78994458,Namespace:kube-system,Attempt:0,}" Aug 13 00:48:33.993587 containerd[1579]: time="2025-08-13T00:48:33.993555566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794868555d-mqg9c,Uid:f1de7ba4-0c6c-47ea-b4ec-b558b4aa3dfa,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:48:34.001116 containerd[1579]: time="2025-08-13T00:48:34.001091761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-7vwqp,Uid:e07603a2-f1fc-4a47-8272-69e765b2006f,Namespace:calico-system,Attempt:0,}" Aug 13 00:48:34.008663 containerd[1579]: time="2025-08-13T00:48:34.008634727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-864f7c9fdc-bx7pf,Uid:21718867-e82f-4101-96f7-927efea081bd,Namespace:calico-system,Attempt:0,}" Aug 13 00:48:34.022289 containerd[1579]: time="2025-08-13T00:48:34.022243968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794868555d-t446f,Uid:b945f367-f37a-44ef-8f01-cfbf0e613602,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:48:34.030804 containerd[1579]: time="2025-08-13T00:48:34.030773115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bf97cf4d8-hhpm6,Uid:9c143e7a-a5e8-41db-8501-2d62ae8b235b,Namespace:calico-system,Attempt:0,}" Aug 13 00:48:34.306753 containerd[1579]: time="2025-08-13T00:48:34.306594425Z" level=error msg="Failed to destroy network for sandbox \"01767ebba47f966469a77510329f45407cd5100ef8722367ce39025cc86c2973\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:48:34.383003 containerd[1579]: time="2025-08-13T00:48:34.382744691Z" level=error msg="Failed to destroy network for sandbox \"67424e83bdfe65889d83eb22ff7f32b623772b25e7ea95446595c40e1ededded\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:48:34.401283 containerd[1579]: time="2025-08-13T00:48:34.393805600Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-864f7c9fdc-bx7pf,Uid:21718867-e82f-4101-96f7-927efea081bd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"67424e83bdfe65889d83eb22ff7f32b623772b25e7ea95446595c40e1ededded\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:48:34.403011 containerd[1579]: time="2025-08-13T00:48:34.394902118Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-brc88,Uid:f24ce14a-f0f7-482a-85c0-54374c86cafe,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"01767ebba47f966469a77510329f45407cd5100ef8722367ce39025cc86c2973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:48:34.412520 containerd[1579]: time="2025-08-13T00:48:34.412443598Z" level=error msg="Failed to destroy network for sandbox \"1096382e2667b01ea9ccf467bca09444aa544b2e0511e65531e35e909e5db7d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:48:34.418522 containerd[1579]: time="2025-08-13T00:48:34.418390920Z" level=error msg="Failed to destroy network for sandbox \"85a539250251e63453c4fbd5ea587bf5d1fa3eb4a660996a9b24d92739a1ef5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:48:34.419481 kubelet[2784]: E0813 00:48:34.419292 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67424e83bdfe65889d83eb22ff7f32b623772b25e7ea95446595c40e1ededded\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:48:34.419979 kubelet[2784]: E0813 00:48:34.419931 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01767ebba47f966469a77510329f45407cd5100ef8722367ce39025cc86c2973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:48:34.420071 kubelet[2784]: E0813 00:48:34.420038 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01767ebba47f966469a77510329f45407cd5100ef8722367ce39025cc86c2973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-brc88" Aug 13 00:48:34.420156 kubelet[2784]: E0813 00:48:34.420101 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"01767ebba47f966469a77510329f45407cd5100ef8722367ce39025cc86c2973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-brc88" Aug 13 00:48:34.420340 kubelet[2784]: E0813 00:48:34.420197 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-brc88_calico-system(f24ce14a-f0f7-482a-85c0-54374c86cafe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-brc88_calico-system(f24ce14a-f0f7-482a-85c0-54374c86cafe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01767ebba47f966469a77510329f45407cd5100ef8722367ce39025cc86c2973\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-brc88" podUID="f24ce14a-f0f7-482a-85c0-54374c86cafe" Aug 13 00:48:34.420535 kubelet[2784]: E0813 00:48:34.420506 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67424e83bdfe65889d83eb22ff7f32b623772b25e7ea95446595c40e1ededded\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-864f7c9fdc-bx7pf" Aug 13 00:48:34.420702 kubelet[2784]: E0813 00:48:34.420537 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67424e83bdfe65889d83eb22ff7f32b623772b25e7ea95446595c40e1ededded\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-864f7c9fdc-bx7pf" Aug 13 00:48:34.420928 kubelet[2784]: E0813 00:48:34.420863 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-864f7c9fdc-bx7pf_calico-system(21718867-e82f-4101-96f7-927efea081bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-864f7c9fdc-bx7pf_calico-system(21718867-e82f-4101-96f7-927efea081bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67424e83bdfe65889d83eb22ff7f32b623772b25e7ea95446595c40e1ededded\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-864f7c9fdc-bx7pf" podUID="21718867-e82f-4101-96f7-927efea081bd" Aug 13 00:48:34.425846 containerd[1579]: time="2025-08-13T00:48:34.425772985Z" level=error msg="Failed to destroy network for sandbox \"c4ffe515a45924653175bffae6ddc14a39487a32a121f0f826d2cee5ef464925\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:48:34.428025 containerd[1579]: time="2025-08-13T00:48:34.427952736Z" level=error msg="Failed to destroy network for sandbox \"f6c9badf8e6e127f216555744dfbbb4575d9c7983c733f74dc49037365b20a01\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:48:34.431251 containerd[1579]: time="2025-08-13T00:48:34.431213034Z" level=error msg="Failed to destroy network for sandbox \"ebccb6e21929fd9189e6522e6e0a0715a634bf780429c3c309aa3f8d8f39092b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:48:34.433245 containerd[1579]: time="2025-08-13T00:48:34.433203189Z" level=error msg="Failed to destroy network for sandbox \"61fb29380553b7051742f1e62c22554498d8b5ad2bac199555a26dfd2dc0b3fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:48:34.460248 containerd[1579]: time="2025-08-13T00:48:34.460120506Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-7vwqp,Uid:e07603a2-f1fc-4a47-8272-69e765b2006f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1096382e2667b01ea9ccf467bca09444aa544b2e0511e65531e35e909e5db7d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:48:34.460422 kubelet[2784]: E0813 00:48:34.460379 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1096382e2667b01ea9ccf467bca09444aa544b2e0511e65531e35e909e5db7d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:48:34.460471 kubelet[2784]: E0813 00:48:34.460432 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1096382e2667b01ea9ccf467bca09444aa544b2e0511e65531e35e909e5db7d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-7vwqp" Aug 13 00:48:34.460471 kubelet[2784]: E0813 00:48:34.460450 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1096382e2667b01ea9ccf467bca09444aa544b2e0511e65531e35e909e5db7d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-7vwqp" Aug 13 00:48:34.460529 kubelet[2784]: E0813 00:48:34.460502 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-7vwqp_calico-system(e07603a2-f1fc-4a47-8272-69e765b2006f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-7vwqp_calico-system(e07603a2-f1fc-4a47-8272-69e765b2006f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1096382e2667b01ea9ccf467bca09444aa544b2e0511e65531e35e909e5db7d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-7vwqp" podUID="e07603a2-f1fc-4a47-8272-69e765b2006f" Aug 13 00:48:34.461813 containerd[1579]: time="2025-08-13T00:48:34.461780351Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fc6ds,Uid:aac7b78c-6f96-4a82-a13f-2a2f78994458,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"85a539250251e63453c4fbd5ea587bf5d1fa3eb4a660996a9b24d92739a1ef5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:48:34.462023 kubelet[2784]: E0813 00:48:34.461988 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85a539250251e63453c4fbd5ea587bf5d1fa3eb4a660996a9b24d92739a1ef5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:48:34.462103 kubelet[2784]: E0813 00:48:34.462057 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85a539250251e63453c4fbd5ea587bf5d1fa3eb4a660996a9b24d92739a1ef5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fc6ds" Aug 13 00:48:34.462103 kubelet[2784]: E0813 00:48:34.462079 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85a539250251e63453c4fbd5ea587bf5d1fa3eb4a660996a9b24d92739a1ef5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fc6ds" Aug 13 00:48:34.462192 kubelet[2784]: E0813 00:48:34.462135 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fc6ds_kube-system(aac7b78c-6f96-4a82-a13f-2a2f78994458)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fc6ds_kube-system(aac7b78c-6f96-4a82-a13f-2a2f78994458)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85a539250251e63453c4fbd5ea587bf5d1fa3eb4a660996a9b24d92739a1ef5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fc6ds" podUID="aac7b78c-6f96-4a82-a13f-2a2f78994458" Aug 13 00:48:34.463297 containerd[1579]: time="2025-08-13T00:48:34.463255610Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bf97cf4d8-hhpm6,Uid:9c143e7a-a5e8-41db-8501-2d62ae8b235b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4ffe515a45924653175bffae6ddc14a39487a32a121f0f826d2cee5ef464925\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Aug 13 00:48:34.463471 kubelet[2784]: E0813 00:48:34.463442 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4ffe515a45924653175bffae6ddc14a39487a32a121f0f826d2cee5ef464925\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:48:34.463534 kubelet[2784]: E0813 00:48:34.463479 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4ffe515a45924653175bffae6ddc14a39487a32a121f0f826d2cee5ef464925\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bf97cf4d8-hhpm6" Aug 13 00:48:34.463534 kubelet[2784]: E0813 00:48:34.463503 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4ffe515a45924653175bffae6ddc14a39487a32a121f0f826d2cee5ef464925\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bf97cf4d8-hhpm6" Aug 13 00:48:34.463609 kubelet[2784]: E0813 00:48:34.463546 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6bf97cf4d8-hhpm6_calico-system(9c143e7a-a5e8-41db-8501-2d62ae8b235b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6bf97cf4d8-hhpm6_calico-system(9c143e7a-a5e8-41db-8501-2d62ae8b235b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4ffe515a45924653175bffae6ddc14a39487a32a121f0f826d2cee5ef464925\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bf97cf4d8-hhpm6" podUID="9c143e7a-a5e8-41db-8501-2d62ae8b235b" Aug 13 00:48:34.464498 containerd[1579]: time="2025-08-13T00:48:34.464456304Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794868555d-t446f,Uid:b945f367-f37a-44ef-8f01-cfbf0e613602,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6c9badf8e6e127f216555744dfbbb4575d9c7983c733f74dc49037365b20a01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:48:34.464628 kubelet[2784]: E0813 00:48:34.464604 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6c9badf8e6e127f216555744dfbbb4575d9c7983c733f74dc49037365b20a01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:48:34.464666 kubelet[2784]: E0813 00:48:34.464640 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"f6c9badf8e6e127f216555744dfbbb4575d9c7983c733f74dc49037365b20a01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-794868555d-t446f" Aug 13 00:48:34.464692 kubelet[2784]: E0813 00:48:34.464659 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6c9badf8e6e127f216555744dfbbb4575d9c7983c733f74dc49037365b20a01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-794868555d-t446f" Aug 13 00:48:34.464738 kubelet[2784]: E0813 00:48:34.464699 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-794868555d-t446f_calico-apiserver(b945f367-f37a-44ef-8f01-cfbf0e613602)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-794868555d-t446f_calico-apiserver(b945f367-f37a-44ef-8f01-cfbf0e613602)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6c9badf8e6e127f216555744dfbbb4575d9c7983c733f74dc49037365b20a01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-794868555d-t446f" podUID="b945f367-f37a-44ef-8f01-cfbf0e613602" Aug 13 00:48:34.465617 containerd[1579]: time="2025-08-13T00:48:34.465557701Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ddq4s,Uid:f938cdc6-5bc4-4598-81a5-977a60182bc5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebccb6e21929fd9189e6522e6e0a0715a634bf780429c3c309aa3f8d8f39092b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:48:34.465843 kubelet[2784]: E0813 00:48:34.465804 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebccb6e21929fd9189e6522e6e0a0715a634bf780429c3c309aa3f8d8f39092b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:48:34.465898 kubelet[2784]: E0813 00:48:34.465858 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebccb6e21929fd9189e6522e6e0a0715a634bf780429c3c309aa3f8d8f39092b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ddq4s" Aug 13 00:48:34.465898 kubelet[2784]: E0813 00:48:34.465888 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebccb6e21929fd9189e6522e6e0a0715a634bf780429c3c309aa3f8d8f39092b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ddq4s" Aug 13 00:48:34.465976 kubelet[2784]: E0813 00:48:34.465930 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ddq4s_kube-system(f938cdc6-5bc4-4598-81a5-977a60182bc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ddq4s_kube-system(f938cdc6-5bc4-4598-81a5-977a60182bc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ebccb6e21929fd9189e6522e6e0a0715a634bf780429c3c309aa3f8d8f39092b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ddq4s" podUID="f938cdc6-5bc4-4598-81a5-977a60182bc5" Aug 13 00:48:34.468105 containerd[1579]: time="2025-08-13T00:48:34.468051400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794868555d-mqg9c,Uid:f1de7ba4-0c6c-47ea-b4ec-b558b4aa3dfa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"61fb29380553b7051742f1e62c22554498d8b5ad2bac199555a26dfd2dc0b3fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:48:34.468234 kubelet[2784]: E0813 00:48:34.468205 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61fb29380553b7051742f1e62c22554498d8b5ad2bac199555a26dfd2dc0b3fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:48:34.468272 kubelet[2784]: E0813 00:48:34.468242 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61fb29380553b7051742f1e62c22554498d8b5ad2bac199555a26dfd2dc0b3fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-794868555d-mqg9c" Aug 13 00:48:34.468272 kubelet[2784]: E0813 00:48:34.468260 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61fb29380553b7051742f1e62c22554498d8b5ad2bac199555a26dfd2dc0b3fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-794868555d-mqg9c" Aug 13 00:48:34.468341 kubelet[2784]: E0813 00:48:34.468302 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-794868555d-mqg9c_calico-apiserver(f1de7ba4-0c6c-47ea-b4ec-b558b4aa3dfa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-794868555d-mqg9c_calico-apiserver(f1de7ba4-0c6c-47ea-b4ec-b558b4aa3dfa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61fb29380553b7051742f1e62c22554498d8b5ad2bac199555a26dfd2dc0b3fb\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-794868555d-mqg9c" podUID="f1de7ba4-0c6c-47ea-b4ec-b558b4aa3dfa" Aug 13 00:48:34.859141 systemd[1]: run-netns-cni\x2d3923c2e4\x2d38d4\x2d522b\x2df8e5\x2d524430e4cdae.mount: Deactivated successfully. Aug 13 00:48:43.250867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1044282611.mount: Deactivated successfully. Aug 13 00:48:45.398891 kubelet[2784]: E0813 00:48:45.398844 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:45.399638 containerd[1579]: time="2025-08-13T00:48:45.399589211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fc6ds,Uid:aac7b78c-6f96-4a82-a13f-2a2f78994458,Namespace:kube-system,Attempt:0,}" Aug 13 00:48:45.415635 containerd[1579]: time="2025-08-13T00:48:45.415555732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:45.420393 containerd[1579]: time="2025-08-13T00:48:45.420347841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 00:48:45.429831 containerd[1579]: time="2025-08-13T00:48:45.429776618Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:45.433695 containerd[1579]: time="2025-08-13T00:48:45.433614457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:45.435662 containerd[1579]: time="2025-08-13T00:48:45.435479887Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 11.533674205s" Aug 13 00:48:45.435662 containerd[1579]: time="2025-08-13T00:48:45.435517768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 00:48:45.479158 containerd[1579]: time="2025-08-13T00:48:45.479097147Z" level=error msg="Failed to destroy network for sandbox \"51a357dbd581b76a01fdc1f9b03ed6821a254f79aa2c58497d48de7f44b47c3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:48:45.481881 containerd[1579]: time="2025-08-13T00:48:45.481834933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fc6ds,Uid:aac7b78c-6f96-4a82-a13f-2a2f78994458,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"51a357dbd581b76a01fdc1f9b03ed6821a254f79aa2c58497d48de7f44b47c3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Aug 13 00:48:45.482131 kubelet[2784]: E0813 00:48:45.482088 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51a357dbd581b76a01fdc1f9b03ed6821a254f79aa2c58497d48de7f44b47c3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:48:45.482201 kubelet[2784]: E0813 00:48:45.482166 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51a357dbd581b76a01fdc1f9b03ed6821a254f79aa2c58497d48de7f44b47c3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fc6ds" Aug 13 00:48:45.482201 kubelet[2784]: E0813 00:48:45.482195 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51a357dbd581b76a01fdc1f9b03ed6821a254f79aa2c58497d48de7f44b47c3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fc6ds" Aug 13 00:48:45.482306 kubelet[2784]: E0813 00:48:45.482272 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fc6ds_kube-system(aac7b78c-6f96-4a82-a13f-2a2f78994458)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fc6ds_kube-system(aac7b78c-6f96-4a82-a13f-2a2f78994458)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51a357dbd581b76a01fdc1f9b03ed6821a254f79aa2c58497d48de7f44b47c3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fc6ds" podUID="aac7b78c-6f96-4a82-a13f-2a2f78994458" Aug 13 00:48:45.482593 systemd[1]: run-netns-cni\x2d1c5e551b\x2d2de4\x2d6f25\x2ddba2\x2d06ea3c883ef7.mount: Deactivated successfully. 
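Every sandbox failure in the entries above traces to one root cause: the Calico CNI plugin cannot read /var/lib/calico/nodename, a marker file that calico/node writes once it is up, so every pod ADD/DEL fails and kubelet keeps retrying. Below is a minimal Go sketch of that gate, assuming only the path and wording printed in the log; it is illustrative, not Calico's actual source.

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"strings"
)

// nodenameFile is the marker calico/node writes at startup; per the errors
// logged above, the CNI plugin refuses to proceed until it exists.
const nodenameFile = "/var/lib/calico/nodename"

func detectNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if errors.Is(err, fs.ErrNotExist) {
		// Reproduces the failure mode in the log: calico/node has not yet
		// started, or /var/lib/calico/ is not mounted into the container.
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := detectNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("nodename:", name)
}

Once calico-node starts and writes the file, subsequent CNI calls go through, which is the transition visible a few entries below.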
Aug 13 00:48:45.483956 containerd[1579]: time="2025-08-13T00:48:45.483910116Z" level=info msg="CreateContainer within sandbox \"9fb5dc79c35e4467fef3f798ccec8a499ad5dabcb53c45c91754e5442a9c4a8e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 00:48:45.519757 containerd[1579]: time="2025-08-13T00:48:45.519699201Z" level=info msg="Container f21ad92c6d682cd2a701bd7f1d51a753486d5ef685c90ea6e15e7d2486cb7b17: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:48:45.532192 containerd[1579]: time="2025-08-13T00:48:45.532154476Z" level=info msg="CreateContainer within sandbox \"9fb5dc79c35e4467fef3f798ccec8a499ad5dabcb53c45c91754e5442a9c4a8e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f21ad92c6d682cd2a701bd7f1d51a753486d5ef685c90ea6e15e7d2486cb7b17\"" Aug 13 00:48:45.532695 containerd[1579]: time="2025-08-13T00:48:45.532667467Z" level=info msg="StartContainer for \"f21ad92c6d682cd2a701bd7f1d51a753486d5ef685c90ea6e15e7d2486cb7b17\"" Aug 13 00:48:45.534668 containerd[1579]: time="2025-08-13T00:48:45.534634618Z" level=info msg="connecting to shim f21ad92c6d682cd2a701bd7f1d51a753486d5ef685c90ea6e15e7d2486cb7b17" address="unix:///run/containerd/s/7c640c90355fed9fe199f8da5ff4f9b73c4233a42b59e34fb38fa6161d05eb57" protocol=ttrpc version=3 Aug 13 00:48:45.562469 systemd[1]: Started cri-containerd-f21ad92c6d682cd2a701bd7f1d51a753486d5ef685c90ea6e15e7d2486cb7b17.scope - libcontainer container f21ad92c6d682cd2a701bd7f1d51a753486d5ef685c90ea6e15e7d2486cb7b17. Aug 13 00:48:45.612195 containerd[1579]: time="2025-08-13T00:48:45.612127053Z" level=info msg="StartContainer for \"f21ad92c6d682cd2a701bd7f1d51a753486d5ef685c90ea6e15e7d2486cb7b17\" returns successfully" Aug 13 00:48:45.711487 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 00:48:45.712296 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
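The two kernel lines just above record the wireguard module loading as calico-node comes up (Calico can use WireGuard for node-to-node encryption of pod traffic). As an aside, a simple way for userspace to confirm the module is present is to stat sysfs, since /sys/module/<name> exists only while a module is loaded; this is an illustrative probe, not necessarily the one calico-node performs.

package main

import (
	"fmt"
	"os"
)

func main() {
	// /sys/module/wireguard appears once the kernel module is loaded.
	if _, err := os.Stat("/sys/module/wireguard"); err == nil {
		fmt.Println("wireguard module present")
	} else {
		fmt.Println("wireguard module not loaded:", err)
	}
}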
Aug 13 00:48:45.811560 containerd[1579]: time="2025-08-13T00:48:45.811502665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-7vwqp,Uid:e07603a2-f1fc-4a47-8272-69e765b2006f,Namespace:calico-system,Attempt:0,}" Aug 13 00:48:45.811980 containerd[1579]: time="2025-08-13T00:48:45.811938943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794868555d-mqg9c,Uid:f1de7ba4-0c6c-47ea-b4ec-b558b4aa3dfa,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:48:45.917662 kubelet[2784]: I0813 00:48:45.916938 2784 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/21718867-e82f-4101-96f7-927efea081bd-whisker-backend-key-pair\") pod \"21718867-e82f-4101-96f7-927efea081bd\" (UID: \"21718867-e82f-4101-96f7-927efea081bd\") " Aug 13 00:48:45.918042 kubelet[2784]: I0813 00:48:45.918026 2784 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t89ch\" (UniqueName: \"kubernetes.io/projected/21718867-e82f-4101-96f7-927efea081bd-kube-api-access-t89ch\") pod \"21718867-e82f-4101-96f7-927efea081bd\" (UID: \"21718867-e82f-4101-96f7-927efea081bd\") " Aug 13 00:48:45.918242 kubelet[2784]: I0813 00:48:45.918227 2784 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21718867-e82f-4101-96f7-927efea081bd-whisker-ca-bundle\") pod \"21718867-e82f-4101-96f7-927efea081bd\" (UID: \"21718867-e82f-4101-96f7-927efea081bd\") " Aug 13 00:48:45.919277 kubelet[2784]: I0813 00:48:45.918719 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21718867-e82f-4101-96f7-927efea081bd-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "21718867-e82f-4101-96f7-927efea081bd" (UID: "21718867-e82f-4101-96f7-927efea081bd"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:48:45.922225 kubelet[2784]: I0813 00:48:45.922175 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21718867-e82f-4101-96f7-927efea081bd-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "21718867-e82f-4101-96f7-927efea081bd" (UID: "21718867-e82f-4101-96f7-927efea081bd"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:48:45.924127 kubelet[2784]: I0813 00:48:45.924098 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21718867-e82f-4101-96f7-927efea081bd-kube-api-access-t89ch" (OuterVolumeSpecName: "kube-api-access-t89ch") pod "21718867-e82f-4101-96f7-927efea081bd" (UID: "21718867-e82f-4101-96f7-927efea081bd"). InnerVolumeSpecName "kube-api-access-t89ch". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:48:45.953815 systemd[1]: Removed slice kubepods-besteffort-pod21718867_e82f_4101_96f7_927efea081bd.slice - libcontainer container kubepods-besteffort-pod21718867_e82f_4101_96f7_927efea081bd.slice. 
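The pod_startup_latency_tracker entry that follows reports two durations for calico-node-tx7z2, and they reconcile exactly with the timestamps it prints: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, while podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). A worked check in Go, using only values copied from that entry:

package main

import (
	"fmt"
	"time"
)

// time.Parse accepts fractional seconds even when the layout omits them.
const layout = "2006-01-02 15:04:05 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-08-13 00:48:21 +0000 UTC")
	firstPull := mustParse("2025-08-13 00:48:22.45847309 +0000 UTC")
	lastPull := mustParse("2025-08-13 00:48:45.441912063 +0000 UTC")
	observed := mustParse("2025-08-13 00:48:45.956972106 +0000 UTC")

	e2e := observed.Sub(created)         // 24.956972106s, as logged
	slo := e2e - lastPull.Sub(firstPull) // 1.973533133s: E2E minus pull time
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}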
Aug 13 00:48:45.957040 kubelet[2784]: I0813 00:48:45.956989 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tx7z2" podStartSLOduration=1.973533133 podStartE2EDuration="24.956972106s" podCreationTimestamp="2025-08-13 00:48:21 +0000 UTC" firstStartedPulling="2025-08-13 00:48:22.45847309 +0000 UTC m=+19.746725943" lastFinishedPulling="2025-08-13 00:48:45.441912063 +0000 UTC m=+42.730164916" observedRunningTime="2025-08-13 00:48:45.952853049 +0000 UTC m=+43.241105902" watchObservedRunningTime="2025-08-13 00:48:45.956972106 +0000 UTC m=+43.245224949" Aug 13 00:48:46.020547 kubelet[2784]: I0813 00:48:46.020296 2784 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/21718867-e82f-4101-96f7-927efea081bd-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Aug 13 00:48:46.020547 kubelet[2784]: I0813 00:48:46.020430 2784 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t89ch\" (UniqueName: \"kubernetes.io/projected/21718867-e82f-4101-96f7-927efea081bd-kube-api-access-t89ch\") on node \"localhost\" DevicePath \"\"" Aug 13 00:48:46.020547 kubelet[2784]: I0813 00:48:46.020440 2784 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21718867-e82f-4101-96f7-927efea081bd-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Aug 13 00:48:46.049253 systemd[1]: Created slice kubepods-besteffort-pod6e842d2c_0b8f_4c43_814a_f05b91f6adc5.slice - libcontainer container kubepods-besteffort-pod6e842d2c_0b8f_4c43_814a_f05b91f6adc5.slice. Aug 13 00:48:46.088052 systemd-networkd[1485]: cali47b42ca64ff: Link UP Aug 13 00:48:46.089582 systemd-networkd[1485]: cali47b42ca64ff: Gained carrier Aug 13 00:48:46.112973 containerd[1579]: 2025-08-13 00:48:45.861 [INFO][3917] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:48:46.112973 containerd[1579]: 2025-08-13 00:48:45.882 [INFO][3917] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--7vwqp-eth0 goldmane-768f4c5c69- calico-system e07603a2-f1fc-4a47-8272-69e765b2006f 843 0 2025-08-13 00:48:21 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-7vwqp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali47b42ca64ff [] [] }} ContainerID="f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd" Namespace="calico-system" Pod="goldmane-768f4c5c69-7vwqp" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--7vwqp-" Aug 13 00:48:46.112973 containerd[1579]: 2025-08-13 00:48:45.882 [INFO][3917] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd" Namespace="calico-system" Pod="goldmane-768f4c5c69-7vwqp" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--7vwqp-eth0" Aug 13 00:48:46.112973 containerd[1579]: 2025-08-13 00:48:45.987 [INFO][3945] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd" HandleID="k8s-pod-network.f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd" 
Workload="localhost-k8s-goldmane--768f4c5c69--7vwqp-eth0" Aug 13 00:48:46.113236 containerd[1579]: 2025-08-13 00:48:45.988 [INFO][3945] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd" HandleID="k8s-pod-network.f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd" Workload="localhost-k8s-goldmane--768f4c5c69--7vwqp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c1f60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-7vwqp", "timestamp":"2025-08-13 00:48:45.987629432 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:48:46.113236 containerd[1579]: 2025-08-13 00:48:45.988 [INFO][3945] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:48:46.113236 containerd[1579]: 2025-08-13 00:48:45.989 [INFO][3945] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:48:46.113236 containerd[1579]: 2025-08-13 00:48:45.989 [INFO][3945] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:48:46.113236 containerd[1579]: 2025-08-13 00:48:46.003 [INFO][3945] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd" host="localhost" Aug 13 00:48:46.113236 containerd[1579]: 2025-08-13 00:48:46.014 [INFO][3945] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:48:46.113236 containerd[1579]: 2025-08-13 00:48:46.037 [INFO][3945] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:48:46.113236 containerd[1579]: 2025-08-13 00:48:46.041 [INFO][3945] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:48:46.113236 containerd[1579]: 2025-08-13 00:48:46.046 [INFO][3945] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:48:46.113236 containerd[1579]: 2025-08-13 00:48:46.047 [INFO][3945] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd" host="localhost" Aug 13 00:48:46.113638 containerd[1579]: 2025-08-13 00:48:46.054 [INFO][3945] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd Aug 13 00:48:46.113638 containerd[1579]: 2025-08-13 00:48:46.061 [INFO][3945] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd" host="localhost" Aug 13 00:48:46.113638 containerd[1579]: 2025-08-13 00:48:46.068 [INFO][3945] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd" host="localhost" Aug 13 00:48:46.113638 containerd[1579]: 2025-08-13 00:48:46.068 [INFO][3945] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd" host="localhost" Aug 13 00:48:46.113638 containerd[1579]: 2025-08-13 00:48:46.068 [INFO][3945] ipam/ipam_plugin.go 
374: Released host-wide IPAM lock. Aug 13 00:48:46.113638 containerd[1579]: 2025-08-13 00:48:46.068 [INFO][3945] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd" HandleID="k8s-pod-network.f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd" Workload="localhost-k8s-goldmane--768f4c5c69--7vwqp-eth0" Aug 13 00:48:46.113766 containerd[1579]: 2025-08-13 00:48:46.074 [INFO][3917] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd" Namespace="calico-system" Pod="goldmane-768f4c5c69-7vwqp" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--7vwqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--7vwqp-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"e07603a2-f1fc-4a47-8272-69e765b2006f", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 48, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-7vwqp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali47b42ca64ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:48:46.113766 containerd[1579]: 2025-08-13 00:48:46.074 [INFO][3917] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd" Namespace="calico-system" Pod="goldmane-768f4c5c69-7vwqp" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--7vwqp-eth0" Aug 13 00:48:46.113842 containerd[1579]: 2025-08-13 00:48:46.074 [INFO][3917] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali47b42ca64ff ContainerID="f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd" Namespace="calico-system" Pod="goldmane-768f4c5c69-7vwqp" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--7vwqp-eth0" Aug 13 00:48:46.113842 containerd[1579]: 2025-08-13 00:48:46.091 [INFO][3917] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd" Namespace="calico-system" Pod="goldmane-768f4c5c69-7vwqp" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--7vwqp-eth0" Aug 13 00:48:46.113889 containerd[1579]: 2025-08-13 00:48:46.092 [INFO][3917] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd" Namespace="calico-system" Pod="goldmane-768f4c5c69-7vwqp" 
WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--7vwqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--7vwqp-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"e07603a2-f1fc-4a47-8272-69e765b2006f", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 48, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd", Pod:"goldmane-768f4c5c69-7vwqp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali47b42ca64ff", MAC:"4a:bd:32:27:be:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:48:46.113942 containerd[1579]: 2025-08-13 00:48:46.107 [INFO][3917] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd" Namespace="calico-system" Pod="goldmane-768f4c5c69-7vwqp" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--7vwqp-eth0" Aug 13 00:48:46.121770 kubelet[2784]: I0813 00:48:46.121467 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6e842d2c-0b8f-4c43-814a-f05b91f6adc5-whisker-backend-key-pair\") pod \"whisker-64bfb87c69-p7ts7\" (UID: \"6e842d2c-0b8f-4c43-814a-f05b91f6adc5\") " pod="calico-system/whisker-64bfb87c69-p7ts7" Aug 13 00:48:46.121770 kubelet[2784]: I0813 00:48:46.121520 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f7xc\" (UniqueName: \"kubernetes.io/projected/6e842d2c-0b8f-4c43-814a-f05b91f6adc5-kube-api-access-8f7xc\") pod \"whisker-64bfb87c69-p7ts7\" (UID: \"6e842d2c-0b8f-4c43-814a-f05b91f6adc5\") " pod="calico-system/whisker-64bfb87c69-p7ts7" Aug 13 00:48:46.121770 kubelet[2784]: I0813 00:48:46.121541 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e842d2c-0b8f-4c43-814a-f05b91f6adc5-whisker-ca-bundle\") pod \"whisker-64bfb87c69-p7ts7\" (UID: \"6e842d2c-0b8f-4c43-814a-f05b91f6adc5\") " pod="calico-system/whisker-64bfb87c69-p7ts7" Aug 13 00:48:46.152746 systemd-networkd[1485]: calib82b4caba0e: Link UP Aug 13 00:48:46.153675 systemd-networkd[1485]: calib82b4caba0e: Gained carrier Aug 13 00:48:46.170534 containerd[1579]: 2025-08-13 00:48:45.861 [INFO][3918] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:48:46.170534 containerd[1579]: 2025-08-13 00:48:45.882 [INFO][3918] cni-plugin/plugin.go 340: Calico CNI found existing 
endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--794868555d--mqg9c-eth0 calico-apiserver-794868555d- calico-apiserver f1de7ba4-0c6c-47ea-b4ec-b558b4aa3dfa 842 0 2025-08-13 00:48:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:794868555d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-794868555d-mqg9c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib82b4caba0e [] [] }} ContainerID="464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d" Namespace="calico-apiserver" Pod="calico-apiserver-794868555d-mqg9c" WorkloadEndpoint="localhost-k8s-calico--apiserver--794868555d--mqg9c-" Aug 13 00:48:46.170534 containerd[1579]: 2025-08-13 00:48:45.882 [INFO][3918] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d" Namespace="calico-apiserver" Pod="calico-apiserver-794868555d-mqg9c" WorkloadEndpoint="localhost-k8s-calico--apiserver--794868555d--mqg9c-eth0" Aug 13 00:48:46.170534 containerd[1579]: 2025-08-13 00:48:45.987 [INFO][3947] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d" HandleID="k8s-pod-network.464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d" Workload="localhost-k8s-calico--apiserver--794868555d--mqg9c-eth0" Aug 13 00:48:46.170818 containerd[1579]: 2025-08-13 00:48:45.988 [INFO][3947] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d" HandleID="k8s-pod-network.464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d" Workload="localhost-k8s-calico--apiserver--794868555d--mqg9c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00020ef60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-794868555d-mqg9c", "timestamp":"2025-08-13 00:48:45.987501862 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:48:46.170818 containerd[1579]: 2025-08-13 00:48:45.988 [INFO][3947] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:48:46.170818 containerd[1579]: 2025-08-13 00:48:46.069 [INFO][3947] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:48:46.170818 containerd[1579]: 2025-08-13 00:48:46.069 [INFO][3947] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:48:46.170818 containerd[1579]: 2025-08-13 00:48:46.102 [INFO][3947] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d" host="localhost" Aug 13 00:48:46.170818 containerd[1579]: 2025-08-13 00:48:46.114 [INFO][3947] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:48:46.170818 containerd[1579]: 2025-08-13 00:48:46.126 [INFO][3947] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:48:46.170818 containerd[1579]: 2025-08-13 00:48:46.128 [INFO][3947] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:48:46.170818 containerd[1579]: 2025-08-13 00:48:46.130 [INFO][3947] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:48:46.170818 containerd[1579]: 2025-08-13 00:48:46.130 [INFO][3947] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d" host="localhost" Aug 13 00:48:46.171029 containerd[1579]: 2025-08-13 00:48:46.132 [INFO][3947] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d Aug 13 00:48:46.171029 containerd[1579]: 2025-08-13 00:48:46.136 [INFO][3947] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d" host="localhost" Aug 13 00:48:46.171029 containerd[1579]: 2025-08-13 00:48:46.142 [INFO][3947] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d" host="localhost" Aug 13 00:48:46.171029 containerd[1579]: 2025-08-13 00:48:46.142 [INFO][3947] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d" host="localhost" Aug 13 00:48:46.171029 containerd[1579]: 2025-08-13 00:48:46.143 [INFO][3947] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
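The [3945] and [3947] IPAM traces above repeat a fixed sequence: acquire the host-wide IPAM lock, confirm this host's affinity for block 192.168.88.128/26, hand out the lowest free address from the block (.129 went to goldmane-768f4c5c69-7vwqp, .130 to calico-apiserver-794868555d-mqg9c), write the block back, and release the lock. The sketch below models only that allocation order with a single in-memory block; real Calico persists blocks in the datastore under the lock shown in the log.

package main

import (
	"fmt"
	"net/netip"
)

// block models one Calico IPAM block held by this host's affinity.
type block struct {
	cidr      netip.Prefix
	allocated map[netip.Addr]string // address -> IPAM handle
}

// assign hands out the lowest free address, mirroring the order in the log;
// the network address itself (.128) is skipped.
func (b *block) assign(handle string) (netip.Addr, error) {
	for a := b.cidr.Addr().Next(); b.cidr.Contains(a); a = a.Next() {
		if _, used := b.allocated[a]; !used {
			b.allocated[a] = handle
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	b := &block{
		cidr:      netip.MustParsePrefix("192.168.88.128/26"),
		allocated: map[netip.Addr]string{},
	}
	for _, h := range []string{"goldmane-768f4c5c69-7vwqp", "calico-apiserver-794868555d-mqg9c"} {
		a, _ := b.assign(h)
		fmt.Printf("%s -> %s\n", h, a) // .129, then .130, matching the log
	}
}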
Aug 13 00:48:46.171029 containerd[1579]: 2025-08-13 00:48:46.143 [INFO][3947] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d" HandleID="k8s-pod-network.464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d" Workload="localhost-k8s-calico--apiserver--794868555d--mqg9c-eth0" Aug 13 00:48:46.171164 containerd[1579]: 2025-08-13 00:48:46.146 [INFO][3918] cni-plugin/k8s.go 418: Populated endpoint ContainerID="464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d" Namespace="calico-apiserver" Pod="calico-apiserver-794868555d-mqg9c" WorkloadEndpoint="localhost-k8s-calico--apiserver--794868555d--mqg9c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--794868555d--mqg9c-eth0", GenerateName:"calico-apiserver-794868555d-", Namespace:"calico-apiserver", SelfLink:"", UID:"f1de7ba4-0c6c-47ea-b4ec-b558b4aa3dfa", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 48, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"794868555d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-794868555d-mqg9c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib82b4caba0e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:48:46.171228 containerd[1579]: 2025-08-13 00:48:46.146 [INFO][3918] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d" Namespace="calico-apiserver" Pod="calico-apiserver-794868555d-mqg9c" WorkloadEndpoint="localhost-k8s-calico--apiserver--794868555d--mqg9c-eth0" Aug 13 00:48:46.171228 containerd[1579]: 2025-08-13 00:48:46.147 [INFO][3918] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib82b4caba0e ContainerID="464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d" Namespace="calico-apiserver" Pod="calico-apiserver-794868555d-mqg9c" WorkloadEndpoint="localhost-k8s-calico--apiserver--794868555d--mqg9c-eth0" Aug 13 00:48:46.171228 containerd[1579]: 2025-08-13 00:48:46.154 [INFO][3918] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d" Namespace="calico-apiserver" Pod="calico-apiserver-794868555d-mqg9c" WorkloadEndpoint="localhost-k8s-calico--apiserver--794868555d--mqg9c-eth0" Aug 13 00:48:46.171396 containerd[1579]: 2025-08-13 00:48:46.155 [INFO][3918] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d" Namespace="calico-apiserver" Pod="calico-apiserver-794868555d-mqg9c" WorkloadEndpoint="localhost-k8s-calico--apiserver--794868555d--mqg9c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--794868555d--mqg9c-eth0", GenerateName:"calico-apiserver-794868555d-", Namespace:"calico-apiserver", SelfLink:"", UID:"f1de7ba4-0c6c-47ea-b4ec-b558b4aa3dfa", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 48, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"794868555d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d", Pod:"calico-apiserver-794868555d-mqg9c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib82b4caba0e", MAC:"a2:37:3d:99:65:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:48:46.171472 containerd[1579]: 2025-08-13 00:48:46.165 [INFO][3918] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d" Namespace="calico-apiserver" Pod="calico-apiserver-794868555d-mqg9c" WorkloadEndpoint="localhost-k8s-calico--apiserver--794868555d--mqg9c-eth0" Aug 13 00:48:46.173632 containerd[1579]: time="2025-08-13T00:48:46.173535481Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f21ad92c6d682cd2a701bd7f1d51a753486d5ef685c90ea6e15e7d2486cb7b17\" id:\"9fca7a7b775ffcc2f8c29fc61146f0c997213770a5932ccd9a3272a40a3e96f7\" pid:3982 exit_status:1 exited_at:{seconds:1755046126 nanos:172552898}" Aug 13 00:48:46.234370 containerd[1579]: time="2025-08-13T00:48:46.234010591Z" level=info msg="connecting to shim f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd" address="unix:///run/containerd/s/5452fe1c4a5f6028756ebf6e18b13cc562cf76069358fffb5acd8c1995a351be" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:48:46.240184 containerd[1579]: time="2025-08-13T00:48:46.240082713Z" level=info msg="connecting to shim 464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d" address="unix:///run/containerd/s/965dbde65166e12c8bd113b692f5ab671d67e86a789c63371ed79022b28a79ab" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:48:46.257468 systemd[1]: Started cri-containerd-f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd.scope - libcontainer container f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd. 
Aug 13 00:48:46.262895 systemd[1]: Started cri-containerd-464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d.scope - libcontainer container 464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d. Aug 13 00:48:46.271435 systemd-resolved[1402]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:48:46.277077 systemd-resolved[1402]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:48:46.332019 containerd[1579]: time="2025-08-13T00:48:46.331975437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794868555d-mqg9c,Uid:f1de7ba4-0c6c-47ea-b4ec-b558b4aa3dfa,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d\"" Aug 13 00:48:46.335615 containerd[1579]: time="2025-08-13T00:48:46.334442925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:48:46.333564 systemd[1]: Started sshd@9-10.0.0.115:22-10.0.0.1:58068.service - OpenSSH per-connection server daemon (10.0.0.1:58068). Aug 13 00:48:46.359450 containerd[1579]: time="2025-08-13T00:48:46.359404937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64bfb87c69-p7ts7,Uid:6e842d2c-0b8f-4c43-814a-f05b91f6adc5,Namespace:calico-system,Attempt:0,}" Aug 13 00:48:46.418028 systemd[1]: var-lib-kubelet-pods-21718867\x2de82f\x2d4101\x2d96f7\x2d927efea081bd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt89ch.mount: Deactivated successfully. Aug 13 00:48:46.418139 systemd[1]: var-lib-kubelet-pods-21718867\x2de82f\x2d4101\x2d96f7\x2d927efea081bd-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 00:48:46.421848 containerd[1579]: time="2025-08-13T00:48:46.421807420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-7vwqp,Uid:e07603a2-f1fc-4a47-8272-69e765b2006f,Namespace:calico-system,Attempt:0,} returns sandbox id \"f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd\"" Aug 13 00:48:46.490434 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 58068 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g Aug 13 00:48:46.492655 sshd-session[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:48:46.497623 systemd-logind[1558]: New session 10 of user core. Aug 13 00:48:46.503463 systemd[1]: Started session-10.scope - Session 10 of User core. 
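The two .mount units deactivated above carry systemd-escaped paths: '/' becomes '-', while '-' and '~' inside a path component are escaped as \x2d and \x7e. A small sketch (illustrative only; it handles just the escapes that appear here, not the full \xNN scheme) recovers the kubelet volume path, which matches the orphaned-volumes path kubelet logs a little further down:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        unit := `var-lib-kubelet-pods-21718867\x2de82f\x2d4101\x2d96f7\x2d927efea081bd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt89ch.mount`
        // Split on the unescaped '-' separators first, then undo the
        // escapes inside each path component.
        parts := strings.Split(strings.TrimSuffix(unit, ".mount"), "-")
        for i, p := range parts {
            p = strings.ReplaceAll(p, `\x2d`, "-")
            parts[i] = strings.ReplaceAll(p, `\x7e`, "~")
        }
        fmt.Println("/" + strings.Join(parts, "/"))
        // /var/lib/kubelet/pods/21718867-e82f-4101-96f7-927efea081bd/volumes/kubernetes.io~projected/kube-api-access-t89ch
    }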
Aug 13 00:48:46.605117 systemd-networkd[1485]: cali53a84307608: Link UP Aug 13 00:48:46.605622 systemd-networkd[1485]: cali53a84307608: Gained carrier Aug 13 00:48:46.625803 containerd[1579]: 2025-08-13 00:48:46.471 [INFO][4106] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:48:46.625803 containerd[1579]: 2025-08-13 00:48:46.480 [INFO][4106] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--64bfb87c69--p7ts7-eth0 whisker-64bfb87c69- calico-system 6e842d2c-0b8f-4c43-814a-f05b91f6adc5 947 0 2025-08-13 00:48:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:64bfb87c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-64bfb87c69-p7ts7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali53a84307608 [] [] }} ContainerID="7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab" Namespace="calico-system" Pod="whisker-64bfb87c69-p7ts7" WorkloadEndpoint="localhost-k8s-whisker--64bfb87c69--p7ts7-" Aug 13 00:48:46.625803 containerd[1579]: 2025-08-13 00:48:46.480 [INFO][4106] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab" Namespace="calico-system" Pod="whisker-64bfb87c69-p7ts7" WorkloadEndpoint="localhost-k8s-whisker--64bfb87c69--p7ts7-eth0" Aug 13 00:48:46.625803 containerd[1579]: 2025-08-13 00:48:46.506 [INFO][4121] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab" HandleID="k8s-pod-network.7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab" Workload="localhost-k8s-whisker--64bfb87c69--p7ts7-eth0" Aug 13 00:48:46.626053 containerd[1579]: 2025-08-13 00:48:46.506 [INFO][4121] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab" HandleID="k8s-pod-network.7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab" Workload="localhost-k8s-whisker--64bfb87c69--p7ts7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001355f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-64bfb87c69-p7ts7", "timestamp":"2025-08-13 00:48:46.506485347 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:48:46.626053 containerd[1579]: 2025-08-13 00:48:46.506 [INFO][4121] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:48:46.626053 containerd[1579]: 2025-08-13 00:48:46.506 [INFO][4121] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:48:46.626053 containerd[1579]: 2025-08-13 00:48:46.506 [INFO][4121] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:48:46.626053 containerd[1579]: 2025-08-13 00:48:46.514 [INFO][4121] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab" host="localhost" Aug 13 00:48:46.626053 containerd[1579]: 2025-08-13 00:48:46.517 [INFO][4121] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:48:46.626053 containerd[1579]: 2025-08-13 00:48:46.520 [INFO][4121] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:48:46.626053 containerd[1579]: 2025-08-13 00:48:46.522 [INFO][4121] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:48:46.626053 containerd[1579]: 2025-08-13 00:48:46.524 [INFO][4121] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:48:46.626053 containerd[1579]: 2025-08-13 00:48:46.524 [INFO][4121] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab" host="localhost" Aug 13 00:48:46.626346 containerd[1579]: 2025-08-13 00:48:46.525 [INFO][4121] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab Aug 13 00:48:46.626346 containerd[1579]: 2025-08-13 00:48:46.592 [INFO][4121] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab" host="localhost" Aug 13 00:48:46.626346 containerd[1579]: 2025-08-13 00:48:46.599 [INFO][4121] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab" host="localhost" Aug 13 00:48:46.626346 containerd[1579]: 2025-08-13 00:48:46.599 [INFO][4121] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab" host="localhost" Aug 13 00:48:46.626346 containerd[1579]: 2025-08-13 00:48:46.599 [INFO][4121] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:48:46.626346 containerd[1579]: 2025-08-13 00:48:46.599 [INFO][4121] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab" HandleID="k8s-pod-network.7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab" Workload="localhost-k8s-whisker--64bfb87c69--p7ts7-eth0" Aug 13 00:48:46.626494 containerd[1579]: 2025-08-13 00:48:46.602 [INFO][4106] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab" Namespace="calico-system" Pod="whisker-64bfb87c69-p7ts7" WorkloadEndpoint="localhost-k8s-whisker--64bfb87c69--p7ts7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--64bfb87c69--p7ts7-eth0", GenerateName:"whisker-64bfb87c69-", Namespace:"calico-system", SelfLink:"", UID:"6e842d2c-0b8f-4c43-814a-f05b91f6adc5", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 48, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64bfb87c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-64bfb87c69-p7ts7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali53a84307608", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:48:46.626494 containerd[1579]: 2025-08-13 00:48:46.602 [INFO][4106] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab" Namespace="calico-system" Pod="whisker-64bfb87c69-p7ts7" WorkloadEndpoint="localhost-k8s-whisker--64bfb87c69--p7ts7-eth0" Aug 13 00:48:46.626565 containerd[1579]: 2025-08-13 00:48:46.602 [INFO][4106] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53a84307608 ContainerID="7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab" Namespace="calico-system" Pod="whisker-64bfb87c69-p7ts7" WorkloadEndpoint="localhost-k8s-whisker--64bfb87c69--p7ts7-eth0" Aug 13 00:48:46.626565 containerd[1579]: 2025-08-13 00:48:46.605 [INFO][4106] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab" Namespace="calico-system" Pod="whisker-64bfb87c69-p7ts7" WorkloadEndpoint="localhost-k8s-whisker--64bfb87c69--p7ts7-eth0" Aug 13 00:48:46.626620 containerd[1579]: 2025-08-13 00:48:46.610 [INFO][4106] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab" Namespace="calico-system" Pod="whisker-64bfb87c69-p7ts7" WorkloadEndpoint="localhost-k8s-whisker--64bfb87c69--p7ts7-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--64bfb87c69--p7ts7-eth0", GenerateName:"whisker-64bfb87c69-", Namespace:"calico-system", SelfLink:"", UID:"6e842d2c-0b8f-4c43-814a-f05b91f6adc5", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 48, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64bfb87c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab", Pod:"whisker-64bfb87c69-p7ts7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali53a84307608", MAC:"86:9a:77:4a:6d:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:48:46.626668 containerd[1579]: 2025-08-13 00:48:46.619 [INFO][4106] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab" Namespace="calico-system" Pod="whisker-64bfb87c69-p7ts7" WorkloadEndpoint="localhost-k8s-whisker--64bfb87c69--p7ts7-eth0" Aug 13 00:48:46.655213 containerd[1579]: time="2025-08-13T00:48:46.655088175Z" level=info msg="connecting to shim 7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab" address="unix:///run/containerd/s/0aa542fae9921b176c540e49c0868d42dd81908e63f9f915ec516403096f61e0" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:48:46.694485 systemd[1]: Started cri-containerd-7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab.scope - libcontainer container 7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab. Aug 13 00:48:46.701156 sshd[4128]: Connection closed by 10.0.0.1 port 58068 Aug 13 00:48:46.701502 sshd-session[4104]: pam_unix(sshd:session): session closed for user core Aug 13 00:48:46.706176 systemd[1]: sshd@9-10.0.0.115:22-10.0.0.1:58068.service: Deactivated successfully. Aug 13 00:48:46.708557 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:48:46.709350 systemd-logind[1558]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:48:46.712777 systemd-logind[1558]: Removed session 10. 
Aug 13 00:48:46.718293 systemd-resolved[1402]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:48:46.814214 kubelet[2784]: I0813 00:48:46.814156 2784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21718867-e82f-4101-96f7-927efea081bd" path="/var/lib/kubelet/pods/21718867-e82f-4101-96f7-927efea081bd/volumes" Aug 13 00:48:47.016892 containerd[1579]: time="2025-08-13T00:48:47.016823972Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f21ad92c6d682cd2a701bd7f1d51a753486d5ef685c90ea6e15e7d2486cb7b17\" id:\"a8345c104f6b8eef80d2e49bdb498afeeecfe5a320e257d69b8f7452c2d8735d\" pid:4205 exit_status:1 exited_at:{seconds:1755046127 nanos:16450382}" Aug 13 00:48:47.311783 systemd-networkd[1485]: calib82b4caba0e: Gained IPv6LL Aug 13 00:48:47.375212 containerd[1579]: time="2025-08-13T00:48:47.375152083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64bfb87c69-p7ts7,Uid:6e842d2c-0b8f-4c43-814a-f05b91f6adc5,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab\"" Aug 13 00:48:47.759582 systemd-networkd[1485]: cali47b42ca64ff: Gained IPv6LL Aug 13 00:48:47.792393 systemd-networkd[1485]: vxlan.calico: Link UP Aug 13 00:48:47.792699 systemd-networkd[1485]: vxlan.calico: Gained carrier Aug 13 00:48:47.812070 containerd[1579]: time="2025-08-13T00:48:47.812021265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bf97cf4d8-hhpm6,Uid:9c143e7a-a5e8-41db-8501-2d62ae8b235b,Namespace:calico-system,Attempt:0,}" Aug 13 00:48:47.813080 containerd[1579]: time="2025-08-13T00:48:47.813033884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-brc88,Uid:f24ce14a-f0f7-482a-85c0-54374c86cafe,Namespace:calico-system,Attempt:0,}" Aug 13 00:48:47.970238 systemd-networkd[1485]: cali04173cec393: Link UP Aug 13 00:48:47.971817 systemd-networkd[1485]: cali04173cec393: Gained carrier Aug 13 00:48:47.992002 containerd[1579]: 2025-08-13 00:48:47.868 [INFO][4389] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--brc88-eth0 csi-node-driver- calico-system f24ce14a-f0f7-482a-85c0-54374c86cafe 722 0 2025-08-13 00:48:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-brc88 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali04173cec393 [] [] }} ContainerID="f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d" Namespace="calico-system" Pod="csi-node-driver-brc88" WorkloadEndpoint="localhost-k8s-csi--node--driver--brc88-" Aug 13 00:48:47.992002 containerd[1579]: 2025-08-13 00:48:47.869 [INFO][4389] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d" Namespace="calico-system" Pod="csi-node-driver-brc88" WorkloadEndpoint="localhost-k8s-csi--node--driver--brc88-eth0" Aug 13 00:48:47.992002 containerd[1579]: 2025-08-13 00:48:47.905 [INFO][4413] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d" 
HandleID="k8s-pod-network.f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d" Workload="localhost-k8s-csi--node--driver--brc88-eth0" Aug 13 00:48:47.992238 containerd[1579]: 2025-08-13 00:48:47.905 [INFO][4413] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d" HandleID="k8s-pod-network.f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d" Workload="localhost-k8s-csi--node--driver--brc88-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e710), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-brc88", "timestamp":"2025-08-13 00:48:47.905518507 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:48:47.992238 containerd[1579]: 2025-08-13 00:48:47.906 [INFO][4413] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:48:47.992238 containerd[1579]: 2025-08-13 00:48:47.906 [INFO][4413] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:48:47.992238 containerd[1579]: 2025-08-13 00:48:47.906 [INFO][4413] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:48:47.992238 containerd[1579]: 2025-08-13 00:48:47.921 [INFO][4413] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d" host="localhost" Aug 13 00:48:47.992238 containerd[1579]: 2025-08-13 00:48:47.928 [INFO][4413] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:48:47.992238 containerd[1579]: 2025-08-13 00:48:47.941 [INFO][4413] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:48:47.992238 containerd[1579]: 2025-08-13 00:48:47.943 [INFO][4413] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:48:47.992238 containerd[1579]: 2025-08-13 00:48:47.945 [INFO][4413] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:48:47.992238 containerd[1579]: 2025-08-13 00:48:47.945 [INFO][4413] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d" host="localhost" Aug 13 00:48:47.992479 containerd[1579]: 2025-08-13 00:48:47.947 [INFO][4413] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d Aug 13 00:48:47.992479 containerd[1579]: 2025-08-13 00:48:47.950 [INFO][4413] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d" host="localhost" Aug 13 00:48:47.992479 containerd[1579]: 2025-08-13 00:48:47.958 [INFO][4413] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d" host="localhost" Aug 13 00:48:47.992479 containerd[1579]: 2025-08-13 00:48:47.958 [INFO][4413] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d" host="localhost" Aug 13 
00:48:47.992479 containerd[1579]: 2025-08-13 00:48:47.958 [INFO][4413] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:48:47.992479 containerd[1579]: 2025-08-13 00:48:47.958 [INFO][4413] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d" HandleID="k8s-pod-network.f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d" Workload="localhost-k8s-csi--node--driver--brc88-eth0" Aug 13 00:48:47.992827 containerd[1579]: 2025-08-13 00:48:47.963 [INFO][4389] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d" Namespace="calico-system" Pod="csi-node-driver-brc88" WorkloadEndpoint="localhost-k8s-csi--node--driver--brc88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--brc88-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f24ce14a-f0f7-482a-85c0-54374c86cafe", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 48, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-brc88", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali04173cec393", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:48:47.992899 containerd[1579]: 2025-08-13 00:48:47.963 [INFO][4389] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d" Namespace="calico-system" Pod="csi-node-driver-brc88" WorkloadEndpoint="localhost-k8s-csi--node--driver--brc88-eth0" Aug 13 00:48:47.992899 containerd[1579]: 2025-08-13 00:48:47.963 [INFO][4389] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04173cec393 ContainerID="f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d" Namespace="calico-system" Pod="csi-node-driver-brc88" WorkloadEndpoint="localhost-k8s-csi--node--driver--brc88-eth0" Aug 13 00:48:47.992899 containerd[1579]: 2025-08-13 00:48:47.972 [INFO][4389] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d" Namespace="calico-system" Pod="csi-node-driver-brc88" WorkloadEndpoint="localhost-k8s-csi--node--driver--brc88-eth0" Aug 13 00:48:47.992977 containerd[1579]: 2025-08-13 00:48:47.973 [INFO][4389] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d" Namespace="calico-system" Pod="csi-node-driver-brc88" WorkloadEndpoint="localhost-k8s-csi--node--driver--brc88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--brc88-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f24ce14a-f0f7-482a-85c0-54374c86cafe", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 48, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d", Pod:"csi-node-driver-brc88", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali04173cec393", MAC:"e2:5b:d5:ae:37:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:48:47.993032 containerd[1579]: 2025-08-13 00:48:47.986 [INFO][4389] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d" Namespace="calico-system" Pod="csi-node-driver-brc88" WorkloadEndpoint="localhost-k8s-csi--node--driver--brc88-eth0" Aug 13 00:48:48.018152 containerd[1579]: time="2025-08-13T00:48:48.017992080Z" level=info msg="connecting to shim f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d" address="unix:///run/containerd/s/b644cd5d0f360c0500155cf850b894c587427b8b8a73ea56f46f73569552afab" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:48:48.079535 systemd-networkd[1485]: cali490890f78db: Link UP Aug 13 00:48:48.080946 systemd-networkd[1485]: cali490890f78db: Gained carrier Aug 13 00:48:48.083226 systemd[1]: Started cri-containerd-f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d.scope - libcontainer container f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d. 
Aug 13 00:48:48.114384 systemd-resolved[1402]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:48:48.241885 containerd[1579]: 2025-08-13 00:48:47.874 [INFO][4384] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6bf97cf4d8--hhpm6-eth0 calico-kube-controllers-6bf97cf4d8- calico-system 9c143e7a-a5e8-41db-8501-2d62ae8b235b 848 0 2025-08-13 00:48:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6bf97cf4d8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6bf97cf4d8-hhpm6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali490890f78db [] [] }} ContainerID="a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295" Namespace="calico-system" Pod="calico-kube-controllers-6bf97cf4d8-hhpm6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf97cf4d8--hhpm6-" Aug 13 00:48:48.241885 containerd[1579]: 2025-08-13 00:48:47.874 [INFO][4384] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295" Namespace="calico-system" Pod="calico-kube-controllers-6bf97cf4d8-hhpm6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf97cf4d8--hhpm6-eth0" Aug 13 00:48:48.241885 containerd[1579]: 2025-08-13 00:48:47.935 [INFO][4418] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295" HandleID="k8s-pod-network.a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295" Workload="localhost-k8s-calico--kube--controllers--6bf97cf4d8--hhpm6-eth0" Aug 13 00:48:48.242154 containerd[1579]: 2025-08-13 00:48:47.939 [INFO][4418] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295" HandleID="k8s-pod-network.a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295" Workload="localhost-k8s-calico--kube--controllers--6bf97cf4d8--hhpm6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00067ca40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6bf97cf4d8-hhpm6", "timestamp":"2025-08-13 00:48:47.93582072 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:48:48.242154 containerd[1579]: 2025-08-13 00:48:47.940 [INFO][4418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:48:48.242154 containerd[1579]: 2025-08-13 00:48:47.958 [INFO][4418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:48:48.242154 containerd[1579]: 2025-08-13 00:48:47.958 [INFO][4418] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:48:48.242154 containerd[1579]: 2025-08-13 00:48:48.023 [INFO][4418] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295" host="localhost" Aug 13 00:48:48.242154 containerd[1579]: 2025-08-13 00:48:48.033 [INFO][4418] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:48:48.242154 containerd[1579]: 2025-08-13 00:48:48.039 [INFO][4418] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:48:48.242154 containerd[1579]: 2025-08-13 00:48:48.042 [INFO][4418] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:48:48.242154 containerd[1579]: 2025-08-13 00:48:48.044 [INFO][4418] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:48:48.242154 containerd[1579]: 2025-08-13 00:48:48.044 [INFO][4418] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295" host="localhost" Aug 13 00:48:48.242525 containerd[1579]: 2025-08-13 00:48:48.046 [INFO][4418] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295 Aug 13 00:48:48.242525 containerd[1579]: 2025-08-13 00:48:48.051 [INFO][4418] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295" host="localhost" Aug 13 00:48:48.242525 containerd[1579]: 2025-08-13 00:48:48.059 [INFO][4418] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295" host="localhost" Aug 13 00:48:48.242525 containerd[1579]: 2025-08-13 00:48:48.059 [INFO][4418] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295" host="localhost" Aug 13 00:48:48.242525 containerd[1579]: 2025-08-13 00:48:48.059 [INFO][4418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:48:48.242525 containerd[1579]: 2025-08-13 00:48:48.059 [INFO][4418] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295" HandleID="k8s-pod-network.a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295" Workload="localhost-k8s-calico--kube--controllers--6bf97cf4d8--hhpm6-eth0" Aug 13 00:48:48.242729 containerd[1579]: 2025-08-13 00:48:48.075 [INFO][4384] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295" Namespace="calico-system" Pod="calico-kube-controllers-6bf97cf4d8-hhpm6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf97cf4d8--hhpm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bf97cf4d8--hhpm6-eth0", GenerateName:"calico-kube-controllers-6bf97cf4d8-", Namespace:"calico-system", SelfLink:"", UID:"9c143e7a-a5e8-41db-8501-2d62ae8b235b", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 48, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bf97cf4d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6bf97cf4d8-hhpm6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali490890f78db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:48:48.242806 containerd[1579]: 2025-08-13 00:48:48.076 [INFO][4384] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295" Namespace="calico-system" Pod="calico-kube-controllers-6bf97cf4d8-hhpm6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf97cf4d8--hhpm6-eth0" Aug 13 00:48:48.242806 containerd[1579]: 2025-08-13 00:48:48.076 [INFO][4384] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali490890f78db ContainerID="a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295" Namespace="calico-system" Pod="calico-kube-controllers-6bf97cf4d8-hhpm6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf97cf4d8--hhpm6-eth0" Aug 13 00:48:48.242806 containerd[1579]: 2025-08-13 00:48:48.082 [INFO][4384] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295" Namespace="calico-system" Pod="calico-kube-controllers-6bf97cf4d8-hhpm6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf97cf4d8--hhpm6-eth0" Aug 13 00:48:48.242899 containerd[1579]: 2025-08-13 00:48:48.088 [INFO][4384] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295" Namespace="calico-system" Pod="calico-kube-controllers-6bf97cf4d8-hhpm6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf97cf4d8--hhpm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bf97cf4d8--hhpm6-eth0", GenerateName:"calico-kube-controllers-6bf97cf4d8-", Namespace:"calico-system", SelfLink:"", UID:"9c143e7a-a5e8-41db-8501-2d62ae8b235b", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 48, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bf97cf4d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295", Pod:"calico-kube-controllers-6bf97cf4d8-hhpm6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali490890f78db", MAC:"2a:60:f9:f1:91:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:48:48.242973 containerd[1579]: 2025-08-13 00:48:48.236 [INFO][4384] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295" Namespace="calico-system" Pod="calico-kube-controllers-6bf97cf4d8-hhpm6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf97cf4d8--hhpm6-eth0" Aug 13 00:48:48.271504 systemd-networkd[1485]: cali53a84307608: Gained IPv6LL Aug 13 00:48:48.338716 containerd[1579]: time="2025-08-13T00:48:48.338648650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-brc88,Uid:f24ce14a-f0f7-482a-85c0-54374c86cafe,Namespace:calico-system,Attempt:0,} returns sandbox id \"f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d\"" Aug 13 00:48:48.521850 containerd[1579]: time="2025-08-13T00:48:48.521678894Z" level=info msg="connecting to shim a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295" address="unix:///run/containerd/s/ceb2b2bca06f441a6f15f0d09d639ab615b265c53f9bbea554139351fdf7deb4" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:48:48.555539 systemd[1]: Started cri-containerd-a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295.scope - libcontainer container a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295. 
Aug 13 00:48:48.570787 systemd-resolved[1402]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:48:48.607313 containerd[1579]: time="2025-08-13T00:48:48.607257652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bf97cf4d8-hhpm6,Uid:9c143e7a-a5e8-41db-8501-2d62ae8b235b,Namespace:calico-system,Attempt:0,} returns sandbox id \"a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295\"" Aug 13 00:48:48.812372 containerd[1579]: time="2025-08-13T00:48:48.812178873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794868555d-t446f,Uid:b945f367-f37a-44ef-8f01-cfbf0e613602,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:48:49.039604 systemd-networkd[1485]: vxlan.calico: Gained IPv6LL Aug 13 00:48:49.226504 systemd-networkd[1485]: calib7bd92e69ec: Link UP Aug 13 00:48:49.231572 systemd-networkd[1485]: cali490890f78db: Gained IPv6LL Aug 13 00:48:49.235837 systemd-networkd[1485]: calib7bd92e69ec: Gained carrier Aug 13 00:48:49.275648 containerd[1579]: 2025-08-13 00:48:49.117 [INFO][4572] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--794868555d--t446f-eth0 calico-apiserver-794868555d- calico-apiserver b945f367-f37a-44ef-8f01-cfbf0e613602 847 0 2025-08-13 00:48:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:794868555d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-794868555d-t446f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib7bd92e69ec [] [] }} ContainerID="00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2" Namespace="calico-apiserver" Pod="calico-apiserver-794868555d-t446f" WorkloadEndpoint="localhost-k8s-calico--apiserver--794868555d--t446f-" Aug 13 00:48:49.275648 containerd[1579]: 2025-08-13 00:48:49.121 [INFO][4572] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2" Namespace="calico-apiserver" Pod="calico-apiserver-794868555d-t446f" WorkloadEndpoint="localhost-k8s-calico--apiserver--794868555d--t446f-eth0" Aug 13 00:48:49.275648 containerd[1579]: 2025-08-13 00:48:49.153 [INFO][4591] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2" HandleID="k8s-pod-network.00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2" Workload="localhost-k8s-calico--apiserver--794868555d--t446f-eth0" Aug 13 00:48:49.276016 containerd[1579]: 2025-08-13 00:48:49.153 [INFO][4591] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2" HandleID="k8s-pod-network.00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2" Workload="localhost-k8s-calico--apiserver--794868555d--t446f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-794868555d-t446f", "timestamp":"2025-08-13 00:48:49.153543366 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:48:49.276016 containerd[1579]: 2025-08-13 00:48:49.153 [INFO][4591] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:48:49.276016 containerd[1579]: 2025-08-13 00:48:49.153 [INFO][4591] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:48:49.276016 containerd[1579]: 2025-08-13 00:48:49.153 [INFO][4591] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:48:49.276016 containerd[1579]: 2025-08-13 00:48:49.164 [INFO][4591] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2" host="localhost" Aug 13 00:48:49.276016 containerd[1579]: 2025-08-13 00:48:49.168 [INFO][4591] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:48:49.276016 containerd[1579]: 2025-08-13 00:48:49.172 [INFO][4591] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:48:49.276016 containerd[1579]: 2025-08-13 00:48:49.176 [INFO][4591] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:48:49.276016 containerd[1579]: 2025-08-13 00:48:49.179 [INFO][4591] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:48:49.276016 containerd[1579]: 2025-08-13 00:48:49.179 [INFO][4591] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2" host="localhost" Aug 13 00:48:49.278001 containerd[1579]: 2025-08-13 00:48:49.180 [INFO][4591] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2 Aug 13 00:48:49.278001 containerd[1579]: 2025-08-13 00:48:49.185 [INFO][4591] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2" host="localhost" Aug 13 00:48:49.278001 containerd[1579]: 2025-08-13 00:48:49.194 [INFO][4591] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2" host="localhost" Aug 13 00:48:49.278001 containerd[1579]: 2025-08-13 00:48:49.194 [INFO][4591] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2" host="localhost" Aug 13 00:48:49.278001 containerd[1579]: 2025-08-13 00:48:49.194 [INFO][4591] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:48:49.278001 containerd[1579]: 2025-08-13 00:48:49.194 [INFO][4591] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2" HandleID="k8s-pod-network.00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2" Workload="localhost-k8s-calico--apiserver--794868555d--t446f-eth0" Aug 13 00:48:49.278860 containerd[1579]: 2025-08-13 00:48:49.212 [INFO][4572] cni-plugin/k8s.go 418: Populated endpoint ContainerID="00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2" Namespace="calico-apiserver" Pod="calico-apiserver-794868555d-t446f" WorkloadEndpoint="localhost-k8s-calico--apiserver--794868555d--t446f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--794868555d--t446f-eth0", GenerateName:"calico-apiserver-794868555d-", Namespace:"calico-apiserver", SelfLink:"", UID:"b945f367-f37a-44ef-8f01-cfbf0e613602", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 48, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"794868555d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-794868555d-t446f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib7bd92e69ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:48:49.279068 containerd[1579]: 2025-08-13 00:48:49.213 [INFO][4572] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2" Namespace="calico-apiserver" Pod="calico-apiserver-794868555d-t446f" WorkloadEndpoint="localhost-k8s-calico--apiserver--794868555d--t446f-eth0" Aug 13 00:48:49.279068 containerd[1579]: 2025-08-13 00:48:49.213 [INFO][4572] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib7bd92e69ec ContainerID="00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2" Namespace="calico-apiserver" Pod="calico-apiserver-794868555d-t446f" WorkloadEndpoint="localhost-k8s-calico--apiserver--794868555d--t446f-eth0" Aug 13 00:48:49.279068 containerd[1579]: 2025-08-13 00:48:49.239 [INFO][4572] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2" Namespace="calico-apiserver" Pod="calico-apiserver-794868555d-t446f" WorkloadEndpoint="localhost-k8s-calico--apiserver--794868555d--t446f-eth0" Aug 13 00:48:49.279457 containerd[1579]: 2025-08-13 00:48:49.246 [INFO][4572] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2" Namespace="calico-apiserver" Pod="calico-apiserver-794868555d-t446f" WorkloadEndpoint="localhost-k8s-calico--apiserver--794868555d--t446f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--794868555d--t446f-eth0", GenerateName:"calico-apiserver-794868555d-", Namespace:"calico-apiserver", SelfLink:"", UID:"b945f367-f37a-44ef-8f01-cfbf0e613602", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 48, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"794868555d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2", Pod:"calico-apiserver-794868555d-t446f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib7bd92e69ec", MAC:"96:39:d4:00:41:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:48:49.279592 containerd[1579]: 2025-08-13 00:48:49.269 [INFO][4572] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2" Namespace="calico-apiserver" Pod="calico-apiserver-794868555d-t446f" WorkloadEndpoint="localhost-k8s-calico--apiserver--794868555d--t446f-eth0" Aug 13 00:48:49.317376 containerd[1579]: time="2025-08-13T00:48:49.317194770Z" level=info msg="connecting to shim 00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2" address="unix:///run/containerd/s/910b25a8757a5294227e39958453fc19085be3d0c4626548864c99848fbfe13e" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:48:49.374645 systemd[1]: Started cri-containerd-00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2.scope - libcontainer container 00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2. 
Aug 13 00:48:49.407948 systemd-resolved[1402]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:48:49.463378 containerd[1579]: time="2025-08-13T00:48:49.463230513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-794868555d-t446f,Uid:b945f367-f37a-44ef-8f01-cfbf0e613602,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2\"" Aug 13 00:48:49.551589 systemd-networkd[1485]: cali04173cec393: Gained IPv6LL Aug 13 00:48:49.811661 kubelet[2784]: E0813 00:48:49.811313 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:49.813529 containerd[1579]: time="2025-08-13T00:48:49.813202027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ddq4s,Uid:f938cdc6-5bc4-4598-81a5-977a60182bc5,Namespace:kube-system,Attempt:0,}" Aug 13 00:48:49.999914 containerd[1579]: time="2025-08-13T00:48:49.999860717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:50.000706 containerd[1579]: time="2025-08-13T00:48:50.000658923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 13 00:48:50.002047 containerd[1579]: time="2025-08-13T00:48:50.001877960Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:50.005298 containerd[1579]: time="2025-08-13T00:48:50.005259737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:50.006050 containerd[1579]: time="2025-08-13T00:48:50.005987125Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.671515534s" Aug 13 00:48:50.006050 containerd[1579]: time="2025-08-13T00:48:50.006043845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 00:48:50.007155 containerd[1579]: time="2025-08-13T00:48:50.007116214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 00:48:50.012631 containerd[1579]: time="2025-08-13T00:48:50.012519550Z" level=info msg="CreateContainer within sandbox \"464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:48:50.019238 systemd-networkd[1485]: calia1b4081bb73: Link UP Aug 13 00:48:50.019677 systemd-networkd[1485]: calia1b4081bb73: Gained carrier Aug 13 00:48:50.029679 containerd[1579]: time="2025-08-13T00:48:50.029602968Z" level=info msg="Container 5d9af4bb9a620313dcef66256a2f7e43e13dcf46503dcb9624dfa5cc58eb8365: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:48:50.036923 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount617344018.mount: Deactivated successfully. Aug 13 00:48:50.048356 containerd[1579]: 2025-08-13 00:48:49.905 [INFO][4657] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--ddq4s-eth0 coredns-674b8bbfcf- kube-system f938cdc6-5bc4-4598-81a5-977a60182bc5 840 0 2025-08-13 00:48:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-ddq4s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia1b4081bb73 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddq4s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ddq4s-" Aug 13 00:48:50.048356 containerd[1579]: 2025-08-13 00:48:49.906 [INFO][4657] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddq4s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ddq4s-eth0" Aug 13 00:48:50.048356 containerd[1579]: 2025-08-13 00:48:49.965 [INFO][4672] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1" HandleID="k8s-pod-network.a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1" Workload="localhost-k8s-coredns--674b8bbfcf--ddq4s-eth0" Aug 13 00:48:50.048660 containerd[1579]: 2025-08-13 00:48:49.965 [INFO][4672] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1" HandleID="k8s-pod-network.a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1" Workload="localhost-k8s-coredns--674b8bbfcf--ddq4s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6f30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-ddq4s", "timestamp":"2025-08-13 00:48:49.96513031 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:48:50.048660 containerd[1579]: 2025-08-13 00:48:49.965 [INFO][4672] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:48:50.048660 containerd[1579]: 2025-08-13 00:48:49.965 [INFO][4672] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:48:50.048660 containerd[1579]: 2025-08-13 00:48:49.965 [INFO][4672] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:48:50.048660 containerd[1579]: 2025-08-13 00:48:49.977 [INFO][4672] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1" host="localhost" Aug 13 00:48:50.048660 containerd[1579]: 2025-08-13 00:48:49.984 [INFO][4672] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:48:50.048660 containerd[1579]: 2025-08-13 00:48:49.989 [INFO][4672] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:48:50.048660 containerd[1579]: 2025-08-13 00:48:49.992 [INFO][4672] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:48:50.048660 containerd[1579]: 2025-08-13 00:48:49.995 [INFO][4672] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:48:50.048660 containerd[1579]: 2025-08-13 00:48:49.995 [INFO][4672] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1" host="localhost" Aug 13 00:48:50.048898 containerd[1579]: 2025-08-13 00:48:49.998 [INFO][4672] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1 Aug 13 00:48:50.048898 containerd[1579]: 2025-08-13 00:48:50.002 [INFO][4672] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1" host="localhost" Aug 13 00:48:50.048898 containerd[1579]: 2025-08-13 00:48:50.011 [INFO][4672] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1" host="localhost" Aug 13 00:48:50.048898 containerd[1579]: 2025-08-13 00:48:50.012 [INFO][4672] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1" host="localhost" Aug 13 00:48:50.048898 containerd[1579]: 2025-08-13 00:48:50.012 [INFO][4672] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:48:50.048898 containerd[1579]: 2025-08-13 00:48:50.012 [INFO][4672] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1" HandleID="k8s-pod-network.a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1" Workload="localhost-k8s-coredns--674b8bbfcf--ddq4s-eth0" Aug 13 00:48:50.049048 containerd[1579]: 2025-08-13 00:48:50.016 [INFO][4657] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddq4s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ddq4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ddq4s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f938cdc6-5bc4-4598-81a5-977a60182bc5", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 48, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-ddq4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia1b4081bb73", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:48:50.049124 containerd[1579]: 2025-08-13 00:48:50.016 [INFO][4657] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddq4s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ddq4s-eth0" Aug 13 00:48:50.049124 containerd[1579]: 2025-08-13 00:48:50.016 [INFO][4657] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia1b4081bb73 ContainerID="a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddq4s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ddq4s-eth0" Aug 13 00:48:50.049124 containerd[1579]: 2025-08-13 00:48:50.020 [INFO][4657] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddq4s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ddq4s-eth0" Aug 13 00:48:50.049233 
containerd[1579]: 2025-08-13 00:48:50.021 [INFO][4657] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddq4s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ddq4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ddq4s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f938cdc6-5bc4-4598-81a5-977a60182bc5", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 48, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1", Pod:"coredns-674b8bbfcf-ddq4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia1b4081bb73", MAC:"f6:1e:0a:8f:58:f6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:48:50.049233 containerd[1579]: 2025-08-13 00:48:50.035 [INFO][4657] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1" Namespace="kube-system" Pod="coredns-674b8bbfcf-ddq4s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ddq4s-eth0" Aug 13 00:48:50.051070 containerd[1579]: time="2025-08-13T00:48:50.051032263Z" level=info msg="CreateContainer within sandbox \"464df218b68918a35ebc19f93c414fd97575dee4b254984877c97b1cf55eb45d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5d9af4bb9a620313dcef66256a2f7e43e13dcf46503dcb9624dfa5cc58eb8365\"" Aug 13 00:48:50.052187 containerd[1579]: time="2025-08-13T00:48:50.052111926Z" level=info msg="StartContainer for \"5d9af4bb9a620313dcef66256a2f7e43e13dcf46503dcb9624dfa5cc58eb8365\"" Aug 13 00:48:50.054906 containerd[1579]: time="2025-08-13T00:48:50.054856203Z" level=info msg="connecting to shim 5d9af4bb9a620313dcef66256a2f7e43e13dcf46503dcb9624dfa5cc58eb8365" address="unix:///run/containerd/s/965dbde65166e12c8bd113b692f5ab671d67e86a789c63371ed79022b28a79ab" protocol=ttrpc version=3 Aug 13 00:48:50.079684 containerd[1579]: time="2025-08-13T00:48:50.079490110Z" level=info msg="connecting to shim a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1" 
address="unix:///run/containerd/s/0a18aebb61d2e605a3c6333be2c37fa8dcbc655f157af9fbdf182bf140abe318" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:48:50.094494 systemd[1]: Started cri-containerd-5d9af4bb9a620313dcef66256a2f7e43e13dcf46503dcb9624dfa5cc58eb8365.scope - libcontainer container 5d9af4bb9a620313dcef66256a2f7e43e13dcf46503dcb9624dfa5cc58eb8365. Aug 13 00:48:50.155649 systemd[1]: Started cri-containerd-a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1.scope - libcontainer container a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1. Aug 13 00:48:50.187635 systemd-resolved[1402]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:48:50.241411 containerd[1579]: time="2025-08-13T00:48:50.240661836Z" level=info msg="StartContainer for \"5d9af4bb9a620313dcef66256a2f7e43e13dcf46503dcb9624dfa5cc58eb8365\" returns successfully" Aug 13 00:48:50.249548 containerd[1579]: time="2025-08-13T00:48:50.249458973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ddq4s,Uid:f938cdc6-5bc4-4598-81a5-977a60182bc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1\"" Aug 13 00:48:50.253964 kubelet[2784]: E0813 00:48:50.253881 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:50.261854 containerd[1579]: time="2025-08-13T00:48:50.261219643Z" level=info msg="CreateContainer within sandbox \"a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:48:50.287004 containerd[1579]: time="2025-08-13T00:48:50.286933384Z" level=info msg="Container 59d7190d1931c4765c7386b62115f278cefd92c10b83e4f1153b22baaa547ab7: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:48:50.298893 containerd[1579]: time="2025-08-13T00:48:50.298725475Z" level=info msg="CreateContainer within sandbox \"a51f20a81a0c616a14d93d18e7874695dcd45e80825ad429cda26a86a717c8c1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"59d7190d1931c4765c7386b62115f278cefd92c10b83e4f1153b22baaa547ab7\"" Aug 13 00:48:50.300602 containerd[1579]: time="2025-08-13T00:48:50.300544959Z" level=info msg="StartContainer for \"59d7190d1931c4765c7386b62115f278cefd92c10b83e4f1153b22baaa547ab7\"" Aug 13 00:48:50.302569 containerd[1579]: time="2025-08-13T00:48:50.302522883Z" level=info msg="connecting to shim 59d7190d1931c4765c7386b62115f278cefd92c10b83e4f1153b22baaa547ab7" address="unix:///run/containerd/s/0a18aebb61d2e605a3c6333be2c37fa8dcbc655f157af9fbdf182bf140abe318" protocol=ttrpc version=3 Aug 13 00:48:50.339487 systemd[1]: Started cri-containerd-59d7190d1931c4765c7386b62115f278cefd92c10b83e4f1153b22baaa547ab7.scope - libcontainer container 59d7190d1931c4765c7386b62115f278cefd92c10b83e4f1153b22baaa547ab7. 
Aug 13 00:48:50.399141 containerd[1579]: time="2025-08-13T00:48:50.399017516Z" level=info msg="StartContainer for \"59d7190d1931c4765c7386b62115f278cefd92c10b83e4f1153b22baaa547ab7\" returns successfully" Aug 13 00:48:50.960424 kubelet[2784]: E0813 00:48:50.960381 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:50.988970 kubelet[2784]: I0813 00:48:50.988431 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-794868555d-mqg9c" podStartSLOduration=28.315349017 podStartE2EDuration="31.988303498s" podCreationTimestamp="2025-08-13 00:48:19 +0000 UTC" firstStartedPulling="2025-08-13 00:48:46.333986782 +0000 UTC m=+43.622239635" lastFinishedPulling="2025-08-13 00:48:50.006941263 +0000 UTC m=+47.295194116" observedRunningTime="2025-08-13 00:48:50.9715489 +0000 UTC m=+48.259801753" watchObservedRunningTime="2025-08-13 00:48:50.988303498 +0000 UTC m=+48.276556351" Aug 13 00:48:51.087746 systemd-networkd[1485]: calia1b4081bb73: Gained IPv6LL Aug 13 00:48:51.215864 systemd-networkd[1485]: calib7bd92e69ec: Gained IPv6LL Aug 13 00:48:51.724804 systemd[1]: Started sshd@10-10.0.0.115:22-10.0.0.1:47804.service - OpenSSH per-connection server daemon (10.0.0.1:47804). Aug 13 00:48:51.962198 kubelet[2784]: E0813 00:48:51.962160 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:52.359061 sshd[4819]: Accepted publickey for core from 10.0.0.1 port 47804 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g Aug 13 00:48:52.362715 sshd-session[4819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:48:52.370785 systemd-logind[1558]: New session 11 of user core. Aug 13 00:48:52.374599 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 00:48:52.556160 sshd[4825]: Connection closed by 10.0.0.1 port 47804 Aug 13 00:48:52.556708 sshd-session[4819]: pam_unix(sshd:session): session closed for user core Aug 13 00:48:52.562815 systemd-logind[1558]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:48:52.563772 systemd[1]: sshd@10-10.0.0.115:22-10.0.0.1:47804.service: Deactivated successfully. Aug 13 00:48:52.567223 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:48:52.569741 systemd-logind[1558]: Removed session 11. Aug 13 00:48:52.703289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3622725860.mount: Deactivated successfully. 
Aug 13 00:48:52.795652 kubelet[2784]: I0813 00:48:52.795578 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ddq4s" podStartSLOduration=43.79555427 podStartE2EDuration="43.79555427s" podCreationTimestamp="2025-08-13 00:48:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:48:50.992695324 +0000 UTC m=+48.280948167" watchObservedRunningTime="2025-08-13 00:48:52.79555427 +0000 UTC m=+50.083807123" Aug 13 00:48:52.963944 kubelet[2784]: E0813 00:48:52.963910 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:54.108310 containerd[1579]: time="2025-08-13T00:48:54.108251193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:54.110378 containerd[1579]: time="2025-08-13T00:48:54.110304396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Aug 13 00:48:54.111954 containerd[1579]: time="2025-08-13T00:48:54.111664877Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:54.115096 containerd[1579]: time="2025-08-13T00:48:54.115039264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:54.115866 containerd[1579]: time="2025-08-13T00:48:54.115805090Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.10865492s" Aug 13 00:48:54.116034 containerd[1579]: time="2025-08-13T00:48:54.115874605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 00:48:54.117971 containerd[1579]: time="2025-08-13T00:48:54.117854006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 00:48:54.124093 containerd[1579]: time="2025-08-13T00:48:54.124041713Z" level=info msg="CreateContainer within sandbox \"f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 00:48:54.136388 containerd[1579]: time="2025-08-13T00:48:54.136300069Z" level=info msg="Container b546aa4028cbb1a5082c638267f5d5ba2a786238d93b91bdea96758821ea76af: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:48:54.149262 containerd[1579]: time="2025-08-13T00:48:54.149172476Z" level=info msg="CreateContainer within sandbox \"f16640319ec86a0681cbf83b9c6dba42d4ac4243cef5da9786e1543228259acd\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b546aa4028cbb1a5082c638267f5d5ba2a786238d93b91bdea96758821ea76af\"" Aug 13 00:48:54.150116 containerd[1579]: time="2025-08-13T00:48:54.150037665Z" level=info msg="StartContainer for 
\"b546aa4028cbb1a5082c638267f5d5ba2a786238d93b91bdea96758821ea76af\"" Aug 13 00:48:54.151901 containerd[1579]: time="2025-08-13T00:48:54.151832898Z" level=info msg="connecting to shim b546aa4028cbb1a5082c638267f5d5ba2a786238d93b91bdea96758821ea76af" address="unix:///run/containerd/s/5452fe1c4a5f6028756ebf6e18b13cc562cf76069358fffb5acd8c1995a351be" protocol=ttrpc version=3 Aug 13 00:48:54.191666 systemd[1]: Started cri-containerd-b546aa4028cbb1a5082c638267f5d5ba2a786238d93b91bdea96758821ea76af.scope - libcontainer container b546aa4028cbb1a5082c638267f5d5ba2a786238d93b91bdea96758821ea76af. Aug 13 00:48:54.360876 containerd[1579]: time="2025-08-13T00:48:54.360714381Z" level=info msg="StartContainer for \"b546aa4028cbb1a5082c638267f5d5ba2a786238d93b91bdea96758821ea76af\" returns successfully" Aug 13 00:48:54.982590 kubelet[2784]: I0813 00:48:54.982521 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-7vwqp" podStartSLOduration=26.288195169 podStartE2EDuration="33.982504932s" podCreationTimestamp="2025-08-13 00:48:21 +0000 UTC" firstStartedPulling="2025-08-13 00:48:46.423238109 +0000 UTC m=+43.711490962" lastFinishedPulling="2025-08-13 00:48:54.117547882 +0000 UTC m=+51.405800725" observedRunningTime="2025-08-13 00:48:54.98181312 +0000 UTC m=+52.270065973" watchObservedRunningTime="2025-08-13 00:48:54.982504932 +0000 UTC m=+52.270757785" Aug 13 00:48:55.067666 containerd[1579]: time="2025-08-13T00:48:55.067618449Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b546aa4028cbb1a5082c638267f5d5ba2a786238d93b91bdea96758821ea76af\" id:\"06a30c6d2b47f7cce99daae3242418d96071ef90866b9f31d4432857d84fcdac\" pid:4905 exit_status:1 exited_at:{seconds:1755046135 nanos:67093912}" Aug 13 00:48:56.071289 containerd[1579]: time="2025-08-13T00:48:56.071241389Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b546aa4028cbb1a5082c638267f5d5ba2a786238d93b91bdea96758821ea76af\" id:\"68e852fc2b275ac3fdb4ee04606b4695e257491f7947b17c2ee891b04823e463\" pid:4932 exit_status:1 exited_at:{seconds:1755046136 nanos:70835113}" Aug 13 00:48:56.105915 containerd[1579]: time="2025-08-13T00:48:56.105837330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:56.106587 containerd[1579]: time="2025-08-13T00:48:56.106549289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 13 00:48:56.107685 containerd[1579]: time="2025-08-13T00:48:56.107641084Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:56.109912 containerd[1579]: time="2025-08-13T00:48:56.109855051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:56.110584 containerd[1579]: time="2025-08-13T00:48:56.110548755Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.992652016s" Aug 13 00:48:56.110626 
containerd[1579]: time="2025-08-13T00:48:56.110582560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 00:48:56.111608 containerd[1579]: time="2025-08-13T00:48:56.111402599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 00:48:56.115912 containerd[1579]: time="2025-08-13T00:48:56.115865553Z" level=info msg="CreateContainer within sandbox \"7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 00:48:56.124402 containerd[1579]: time="2025-08-13T00:48:56.124358754Z" level=info msg="Container c70933a2075c07b4a725f94122909a2b634cdba7d481a23eb21f8cc52d9ab0f5: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:48:56.134237 containerd[1579]: time="2025-08-13T00:48:56.134190468Z" level=info msg="CreateContainer within sandbox \"7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c70933a2075c07b4a725f94122909a2b634cdba7d481a23eb21f8cc52d9ab0f5\"" Aug 13 00:48:56.134789 containerd[1579]: time="2025-08-13T00:48:56.134749370Z" level=info msg="StartContainer for \"c70933a2075c07b4a725f94122909a2b634cdba7d481a23eb21f8cc52d9ab0f5\"" Aug 13 00:48:56.135805 containerd[1579]: time="2025-08-13T00:48:56.135771850Z" level=info msg="connecting to shim c70933a2075c07b4a725f94122909a2b634cdba7d481a23eb21f8cc52d9ab0f5" address="unix:///run/containerd/s/0aa542fae9921b176c540e49c0868d42dd81908e63f9f915ec516403096f61e0" protocol=ttrpc version=3 Aug 13 00:48:56.162458 systemd[1]: Started cri-containerd-c70933a2075c07b4a725f94122909a2b634cdba7d481a23eb21f8cc52d9ab0f5.scope - libcontainer container c70933a2075c07b4a725f94122909a2b634cdba7d481a23eb21f8cc52d9ab0f5. Aug 13 00:48:56.219815 containerd[1579]: time="2025-08-13T00:48:56.219765791Z" level=info msg="StartContainer for \"c70933a2075c07b4a725f94122909a2b634cdba7d481a23eb21f8cc52d9ab0f5\" returns successfully" Aug 13 00:48:57.055579 containerd[1579]: time="2025-08-13T00:48:57.055524578Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b546aa4028cbb1a5082c638267f5d5ba2a786238d93b91bdea96758821ea76af\" id:\"30f868b7f5b1efee479c2c67366d73b59f4e0fdffba4675a2c13da1581320518\" pid:4992 exit_status:1 exited_at:{seconds:1755046137 nanos:54739207}" Aug 13 00:48:57.578473 systemd[1]: Started sshd@11-10.0.0.115:22-10.0.0.1:47812.service - OpenSSH per-connection server daemon (10.0.0.1:47812). Aug 13 00:48:57.827169 sshd[5009]: Accepted publickey for core from 10.0.0.1 port 47812 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g Aug 13 00:48:57.828944 sshd-session[5009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:48:57.833838 systemd-logind[1558]: New session 12 of user core. Aug 13 00:48:57.844569 systemd[1]: Started session-12.scope - Session 12 of User core. 
Aug 13 00:48:58.033760 containerd[1579]: time="2025-08-13T00:48:58.033695479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:58.034906 containerd[1579]: time="2025-08-13T00:48:58.034863388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 00:48:58.036461 containerd[1579]: time="2025-08-13T00:48:58.036406481Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:58.039773 containerd[1579]: time="2025-08-13T00:48:58.039697696Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:58.040401 containerd[1579]: time="2025-08-13T00:48:58.040373082Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.928938029s" Aug 13 00:48:58.040462 containerd[1579]: time="2025-08-13T00:48:58.040403730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 00:48:58.041796 containerd[1579]: time="2025-08-13T00:48:58.041758110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 00:48:58.047903 containerd[1579]: time="2025-08-13T00:48:58.047830171Z" level=info msg="CreateContainer within sandbox \"f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 00:48:58.049642 sshd[5011]: Connection closed by 10.0.0.1 port 47812 Aug 13 00:48:58.050025 sshd-session[5009]: pam_unix(sshd:session): session closed for user core Aug 13 00:48:58.055286 systemd[1]: sshd@11-10.0.0.115:22-10.0.0.1:47812.service: Deactivated successfully. Aug 13 00:48:58.057604 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:48:58.058471 systemd-logind[1558]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:48:58.059911 systemd-logind[1558]: Removed session 12. 
Aug 13 00:48:58.067803 containerd[1579]: time="2025-08-13T00:48:58.067736934Z" level=info msg="Container aad349f2c66572462cb8d6d4e0be4a3b918ae5ae5ee2be1640d00a6016bbf23d: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:48:58.087294 containerd[1579]: time="2025-08-13T00:48:58.087115245Z" level=info msg="CreateContainer within sandbox \"f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"aad349f2c66572462cb8d6d4e0be4a3b918ae5ae5ee2be1640d00a6016bbf23d\"" Aug 13 00:48:58.088491 containerd[1579]: time="2025-08-13T00:48:58.088438795Z" level=info msg="StartContainer for \"aad349f2c66572462cb8d6d4e0be4a3b918ae5ae5ee2be1640d00a6016bbf23d\"" Aug 13 00:48:58.090724 containerd[1579]: time="2025-08-13T00:48:58.090673146Z" level=info msg="connecting to shim aad349f2c66572462cb8d6d4e0be4a3b918ae5ae5ee2be1640d00a6016bbf23d" address="unix:///run/containerd/s/b644cd5d0f360c0500155cf850b894c587427b8b8a73ea56f46f73569552afab" protocol=ttrpc version=3 Aug 13 00:48:58.116594 systemd[1]: Started cri-containerd-aad349f2c66572462cb8d6d4e0be4a3b918ae5ae5ee2be1640d00a6016bbf23d.scope - libcontainer container aad349f2c66572462cb8d6d4e0be4a3b918ae5ae5ee2be1640d00a6016bbf23d. Aug 13 00:48:58.181406 containerd[1579]: time="2025-08-13T00:48:58.181357590Z" level=info msg="StartContainer for \"aad349f2c66572462cb8d6d4e0be4a3b918ae5ae5ee2be1640d00a6016bbf23d\" returns successfully" Aug 13 00:48:59.810950 kubelet[2784]: E0813 00:48:59.810886 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:48:59.811591 containerd[1579]: time="2025-08-13T00:48:59.811526036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fc6ds,Uid:aac7b78c-6f96-4a82-a13f-2a2f78994458,Namespace:kube-system,Attempt:0,}" Aug 13 00:49:00.519205 systemd-networkd[1485]: cali83e559ca037: Link UP Aug 13 00:49:00.519944 systemd-networkd[1485]: cali83e559ca037: Gained carrier Aug 13 00:49:00.791279 containerd[1579]: 2025-08-13 00:49:00.153 [INFO][5064] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--fc6ds-eth0 coredns-674b8bbfcf- kube-system aac7b78c-6f96-4a82-a13f-2a2f78994458 841 0 2025-08-13 00:48:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-fc6ds eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali83e559ca037 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4" Namespace="kube-system" Pod="coredns-674b8bbfcf-fc6ds" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fc6ds-" Aug 13 00:49:00.791279 containerd[1579]: 2025-08-13 00:49:00.154 [INFO][5064] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4" Namespace="kube-system" Pod="coredns-674b8bbfcf-fc6ds" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fc6ds-eth0" Aug 13 00:49:00.791279 containerd[1579]: 2025-08-13 00:49:00.180 [INFO][5081] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4" 
HandleID="k8s-pod-network.0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4" Workload="localhost-k8s-coredns--674b8bbfcf--fc6ds-eth0" Aug 13 00:49:00.791279 containerd[1579]: 2025-08-13 00:49:00.180 [INFO][5081] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4" HandleID="k8s-pod-network.0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4" Workload="localhost-k8s-coredns--674b8bbfcf--fc6ds-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138460), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-fc6ds", "timestamp":"2025-08-13 00:49:00.180632397 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:49:00.791279 containerd[1579]: 2025-08-13 00:49:00.180 [INFO][5081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:49:00.791279 containerd[1579]: 2025-08-13 00:49:00.180 [INFO][5081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:49:00.791279 containerd[1579]: 2025-08-13 00:49:00.180 [INFO][5081] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:49:00.791279 containerd[1579]: 2025-08-13 00:49:00.187 [INFO][5081] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4" host="localhost" Aug 13 00:49:00.791279 containerd[1579]: 2025-08-13 00:49:00.192 [INFO][5081] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:49:00.791279 containerd[1579]: 2025-08-13 00:49:00.196 [INFO][5081] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:49:00.791279 containerd[1579]: 2025-08-13 00:49:00.197 [INFO][5081] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:49:00.791279 containerd[1579]: 2025-08-13 00:49:00.199 [INFO][5081] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:49:00.791279 containerd[1579]: 2025-08-13 00:49:00.199 [INFO][5081] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4" host="localhost" Aug 13 00:49:00.791279 containerd[1579]: 2025-08-13 00:49:00.200 [INFO][5081] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4 Aug 13 00:49:00.791279 containerd[1579]: 2025-08-13 00:49:00.240 [INFO][5081] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4" host="localhost" Aug 13 00:49:00.791279 containerd[1579]: 2025-08-13 00:49:00.512 [INFO][5081] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4" host="localhost" Aug 13 00:49:00.791279 containerd[1579]: 2025-08-13 00:49:00.512 [INFO][5081] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4" host="localhost" Aug 13 
00:49:00.791279 containerd[1579]: 2025-08-13 00:49:00.512 [INFO][5081] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:49:00.791279 containerd[1579]: 2025-08-13 00:49:00.513 [INFO][5081] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4" HandleID="k8s-pod-network.0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4" Workload="localhost-k8s-coredns--674b8bbfcf--fc6ds-eth0" Aug 13 00:49:00.792144 containerd[1579]: 2025-08-13 00:49:00.516 [INFO][5064] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4" Namespace="kube-system" Pod="coredns-674b8bbfcf-fc6ds" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fc6ds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fc6ds-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"aac7b78c-6f96-4a82-a13f-2a2f78994458", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-fc6ds", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali83e559ca037", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:49:00.792144 containerd[1579]: 2025-08-13 00:49:00.516 [INFO][5064] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4" Namespace="kube-system" Pod="coredns-674b8bbfcf-fc6ds" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fc6ds-eth0" Aug 13 00:49:00.792144 containerd[1579]: 2025-08-13 00:49:00.516 [INFO][5064] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83e559ca037 ContainerID="0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4" Namespace="kube-system" Pod="coredns-674b8bbfcf-fc6ds" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fc6ds-eth0" Aug 13 00:49:00.792144 containerd[1579]: 2025-08-13 00:49:00.519 [INFO][5064] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-fc6ds" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fc6ds-eth0" Aug 13 00:49:00.792144 containerd[1579]: 2025-08-13 00:49:00.520 [INFO][5064] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4" Namespace="kube-system" Pod="coredns-674b8bbfcf-fc6ds" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fc6ds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fc6ds-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"aac7b78c-6f96-4a82-a13f-2a2f78994458", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4", Pod:"coredns-674b8bbfcf-fc6ds", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali83e559ca037", MAC:"ee:2d:bf:66:f3:d3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:49:00.792144 containerd[1579]: 2025-08-13 00:49:00.787 [INFO][5064] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4" Namespace="kube-system" Pod="coredns-674b8bbfcf-fc6ds" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fc6ds-eth0" Aug 13 00:49:01.786301 containerd[1579]: time="2025-08-13T00:49:01.786223652Z" level=info msg="connecting to shim 0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4" address="unix:///run/containerd/s/b58ec3bdfe85c4b4b3601e0fb16e61ec2788b3dd953975db664856b0e578ebe1" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:49:01.837597 systemd[1]: Started cri-containerd-0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4.scope - libcontainer container 0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4. 
Aug 13 00:49:01.861976 systemd-resolved[1402]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:49:01.916807 containerd[1579]: time="2025-08-13T00:49:01.916745922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fc6ds,Uid:aac7b78c-6f96-4a82-a13f-2a2f78994458,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4\"" Aug 13 00:49:01.917790 kubelet[2784]: E0813 00:49:01.917704 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:49:01.940295 containerd[1579]: time="2025-08-13T00:49:01.940210329Z" level=info msg="CreateContainer within sandbox \"0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:49:01.956731 containerd[1579]: time="2025-08-13T00:49:01.956674531Z" level=info msg="Container a4102d420ea6d85675007536a07353bb249f8f74d5402be7b96260e1eca28b74: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:49:01.967191 containerd[1579]: time="2025-08-13T00:49:01.967045529Z" level=info msg="CreateContainer within sandbox \"0f58b0662ec8685e7c4d303f24b83a03528662ff642c357ae3b0820e409ab0d4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a4102d420ea6d85675007536a07353bb249f8f74d5402be7b96260e1eca28b74\"" Aug 13 00:49:01.967992 containerd[1579]: time="2025-08-13T00:49:01.967826435Z" level=info msg="StartContainer for \"a4102d420ea6d85675007536a07353bb249f8f74d5402be7b96260e1eca28b74\"" Aug 13 00:49:01.970009 containerd[1579]: time="2025-08-13T00:49:01.969523328Z" level=info msg="connecting to shim a4102d420ea6d85675007536a07353bb249f8f74d5402be7b96260e1eca28b74" address="unix:///run/containerd/s/b58ec3bdfe85c4b4b3601e0fb16e61ec2788b3dd953975db664856b0e578ebe1" protocol=ttrpc version=3 Aug 13 00:49:01.999644 systemd[1]: Started cri-containerd-a4102d420ea6d85675007536a07353bb249f8f74d5402be7b96260e1eca28b74.scope - libcontainer container a4102d420ea6d85675007536a07353bb249f8f74d5402be7b96260e1eca28b74. 
Aug 13 00:49:02.126103 containerd[1579]: time="2025-08-13T00:49:02.125942723Z" level=info msg="StartContainer for \"a4102d420ea6d85675007536a07353bb249f8f74d5402be7b96260e1eca28b74\" returns successfully" Aug 13 00:49:02.161085 systemd-networkd[1485]: cali83e559ca037: Gained IPv6LL Aug 13 00:49:02.437252 containerd[1579]: time="2025-08-13T00:49:02.437027060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:02.438081 containerd[1579]: time="2025-08-13T00:49:02.437992792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 00:49:02.439907 containerd[1579]: time="2025-08-13T00:49:02.439843530Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:02.442335 containerd[1579]: time="2025-08-13T00:49:02.442285037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:02.442997 containerd[1579]: time="2025-08-13T00:49:02.442956250Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 4.401151831s" Aug 13 00:49:02.443061 containerd[1579]: time="2025-08-13T00:49:02.443002670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 00:49:02.444343 containerd[1579]: time="2025-08-13T00:49:02.444100116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:49:02.461482 containerd[1579]: time="2025-08-13T00:49:02.461436706Z" level=info msg="CreateContainer within sandbox \"a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 00:49:02.749001 containerd[1579]: time="2025-08-13T00:49:02.748920928Z" level=info msg="Container 08d4c1f9735595beac3c29e5c0bda47b8b768f1f4770f692e4b1192b3c2651cd: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:49:02.758768 containerd[1579]: time="2025-08-13T00:49:02.758710730Z" level=info msg="CreateContainer within sandbox \"a9c8e6d6bcf4fdaeed16ceabb81fb7015f7876448afa7360e5c80a5c7e391295\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"08d4c1f9735595beac3c29e5c0bda47b8b768f1f4770f692e4b1192b3c2651cd\"" Aug 13 00:49:02.759250 containerd[1579]: time="2025-08-13T00:49:02.759222046Z" level=info msg="StartContainer for \"08d4c1f9735595beac3c29e5c0bda47b8b768f1f4770f692e4b1192b3c2651cd\"" Aug 13 00:49:02.760295 containerd[1579]: time="2025-08-13T00:49:02.760261019Z" level=info msg="connecting to shim 08d4c1f9735595beac3c29e5c0bda47b8b768f1f4770f692e4b1192b3c2651cd" address="unix:///run/containerd/s/ceb2b2bca06f441a6f15f0d09d639ab615b265c53f9bbea554139351fdf7deb4" protocol=ttrpc version=3 Aug 13 00:49:02.795707 systemd[1]: Started 
cri-containerd-08d4c1f9735595beac3c29e5c0bda47b8b768f1f4770f692e4b1192b3c2651cd.scope - libcontainer container 08d4c1f9735595beac3c29e5c0bda47b8b768f1f4770f692e4b1192b3c2651cd. Aug 13 00:49:02.972183 containerd[1579]: time="2025-08-13T00:49:02.972127427Z" level=info msg="StartContainer for \"08d4c1f9735595beac3c29e5c0bda47b8b768f1f4770f692e4b1192b3c2651cd\" returns successfully" Aug 13 00:49:02.979955 containerd[1579]: time="2025-08-13T00:49:02.979884161Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:02.980860 containerd[1579]: time="2025-08-13T00:49:02.980823241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 00:49:02.982752 containerd[1579]: time="2025-08-13T00:49:02.982659521Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 538.525921ms" Aug 13 00:49:02.982752 containerd[1579]: time="2025-08-13T00:49:02.982701452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 00:49:02.984709 containerd[1579]: time="2025-08-13T00:49:02.984505019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 00:49:02.988913 containerd[1579]: time="2025-08-13T00:49:02.988847770Z" level=info msg="CreateContainer within sandbox \"00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:49:03.005440 containerd[1579]: time="2025-08-13T00:49:03.002668131Z" level=info msg="Container eb6271f51dd5735583ea4a60e2f8c18b381fcf2327191e1eb13a1afc3ee47e45: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:49:03.013068 kubelet[2784]: E0813 00:49:03.011743 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:49:03.024508 kubelet[2784]: I0813 00:49:03.024404 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6bf97cf4d8-hhpm6" podStartSLOduration=27.189166542 podStartE2EDuration="41.024379907s" podCreationTimestamp="2025-08-13 00:48:22 +0000 UTC" firstStartedPulling="2025-08-13 00:48:48.608751599 +0000 UTC m=+45.897004452" lastFinishedPulling="2025-08-13 00:49:02.443964964 +0000 UTC m=+59.732217817" observedRunningTime="2025-08-13 00:49:03.02336408 +0000 UTC m=+60.311616933" watchObservedRunningTime="2025-08-13 00:49:03.024379907 +0000 UTC m=+60.312632770" Aug 13 00:49:03.029240 containerd[1579]: time="2025-08-13T00:49:03.029178711Z" level=info msg="CreateContainer within sandbox \"00d9cbcd67f9db8a2cde49186d801c8aac0abd778897820163060fda1980fbc2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"eb6271f51dd5735583ea4a60e2f8c18b381fcf2327191e1eb13a1afc3ee47e45\"" Aug 13 00:49:03.030016 containerd[1579]: time="2025-08-13T00:49:03.029962011Z" level=info msg="StartContainer for \"eb6271f51dd5735583ea4a60e2f8c18b381fcf2327191e1eb13a1afc3ee47e45\"" Aug 13 
00:49:03.031155 containerd[1579]: time="2025-08-13T00:49:03.031114130Z" level=info msg="connecting to shim eb6271f51dd5735583ea4a60e2f8c18b381fcf2327191e1eb13a1afc3ee47e45" address="unix:///run/containerd/s/910b25a8757a5294227e39958453fc19085be3d0c4626548864c99848fbfe13e" protocol=ttrpc version=3 Aug 13 00:49:03.065883 systemd[1]: Started sshd@12-10.0.0.115:22-10.0.0.1:39480.service - OpenSSH per-connection server daemon (10.0.0.1:39480). Aug 13 00:49:03.087378 systemd[1]: Started cri-containerd-eb6271f51dd5735583ea4a60e2f8c18b381fcf2327191e1eb13a1afc3ee47e45.scope - libcontainer container eb6271f51dd5735583ea4a60e2f8c18b381fcf2327191e1eb13a1afc3ee47e45. Aug 13 00:49:03.149547 sshd[5249]: Accepted publickey for core from 10.0.0.1 port 39480 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g Aug 13 00:49:03.230642 sshd-session[5249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:03.236269 systemd-logind[1558]: New session 13 of user core. Aug 13 00:49:03.243515 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 00:49:03.376783 containerd[1579]: time="2025-08-13T00:49:03.376632436Z" level=info msg="StartContainer for \"eb6271f51dd5735583ea4a60e2f8c18b381fcf2327191e1eb13a1afc3ee47e45\" returns successfully" Aug 13 00:49:03.427417 sshd[5269]: Connection closed by 10.0.0.1 port 39480 Aug 13 00:49:03.428186 sshd-session[5249]: pam_unix(sshd:session): session closed for user core Aug 13 00:49:03.433304 systemd[1]: sshd@12-10.0.0.115:22-10.0.0.1:39480.service: Deactivated successfully. Aug 13 00:49:03.436161 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:49:03.439029 systemd-logind[1558]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:49:03.441375 systemd-logind[1558]: Removed session 13. 
Aug 13 00:49:04.014362 kubelet[2784]: E0813 00:49:04.014312 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:49:04.057742 containerd[1579]: time="2025-08-13T00:49:04.057696895Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08d4c1f9735595beac3c29e5c0bda47b8b768f1f4770f692e4b1192b3c2651cd\" id:\"941b6cc55f5a91fcb0d8101ba745d010ee11a0ed371736df25bebe4993339196\" pid:5306 exited_at:{seconds:1755046144 nanos:57410704}" Aug 13 00:49:04.073675 kubelet[2784]: I0813 00:49:04.073583 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-794868555d-t446f" podStartSLOduration=31.555798516 podStartE2EDuration="45.073562437s" podCreationTimestamp="2025-08-13 00:48:19 +0000 UTC" firstStartedPulling="2025-08-13 00:48:49.46580344 +0000 UTC m=+46.754056303" lastFinishedPulling="2025-08-13 00:49:02.983567371 +0000 UTC m=+60.271820224" observedRunningTime="2025-08-13 00:49:04.073495619 +0000 UTC m=+61.361748482" watchObservedRunningTime="2025-08-13 00:49:04.073562437 +0000 UTC m=+61.361815280" Aug 13 00:49:04.074152 kubelet[2784]: I0813 00:49:04.073772 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fc6ds" podStartSLOduration=56.073768294 podStartE2EDuration="56.073768294s" podCreationTimestamp="2025-08-13 00:48:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:49:03.041627783 +0000 UTC m=+60.329880626" watchObservedRunningTime="2025-08-13 00:49:04.073768294 +0000 UTC m=+61.362021137" Aug 13 00:49:05.017350 kubelet[2784]: E0813 00:49:05.016482 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:49:06.757753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3666972743.mount: Deactivated successfully. 
Aug 13 00:49:07.208877 containerd[1579]: time="2025-08-13T00:49:07.208823002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:49:07.224877 containerd[1579]: time="2025-08-13T00:49:07.224812373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477"
Aug 13 00:49:07.239824 containerd[1579]: time="2025-08-13T00:49:07.239789869Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:49:07.268637 containerd[1579]: time="2025-08-13T00:49:07.268598711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:49:07.269189 containerd[1579]: time="2025-08-13T00:49:07.269158587Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 4.28462327s"
Aug 13 00:49:07.269240 containerd[1579]: time="2025-08-13T00:49:07.269190799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\""
Aug 13 00:49:07.270247 containerd[1579]: time="2025-08-13T00:49:07.270186923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Aug 13 00:49:07.321497 containerd[1579]: time="2025-08-13T00:49:07.321445569Z" level=info msg="CreateContainer within sandbox \"7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Aug 13 00:49:07.463013 containerd[1579]: time="2025-08-13T00:49:07.462858313Z" level=info msg="Container 1e16373a14008239a9d30f2c40f35b8ac0345f1a95d8c348d7eef080ac376c02: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:49:07.562931 containerd[1579]: time="2025-08-13T00:49:07.562871132Z" level=info msg="CreateContainer within sandbox \"7e22163177e8c2a22458e0ecb5b4bd2c791515a83f25794e6b252177fcccfeab\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1e16373a14008239a9d30f2c40f35b8ac0345f1a95d8c348d7eef080ac376c02\""
Aug 13 00:49:07.563602 containerd[1579]: time="2025-08-13T00:49:07.563553153Z" level=info msg="StartContainer for \"1e16373a14008239a9d30f2c40f35b8ac0345f1a95d8c348d7eef080ac376c02\""
Aug 13 00:49:07.565083 containerd[1579]: time="2025-08-13T00:49:07.565047113Z" level=info msg="connecting to shim 1e16373a14008239a9d30f2c40f35b8ac0345f1a95d8c348d7eef080ac376c02" address="unix:///run/containerd/s/0aa542fae9921b176c540e49c0868d42dd81908e63f9f915ec516403096f61e0" protocol=ttrpc version=3
Aug 13 00:49:07.592537 systemd[1]: Started cri-containerd-1e16373a14008239a9d30f2c40f35b8ac0345f1a95d8c348d7eef080ac376c02.scope - libcontainer container 1e16373a14008239a9d30f2c40f35b8ac0345f1a95d8c348d7eef080ac376c02.
Aug 13 00:49:08.068207 containerd[1579]: time="2025-08-13T00:49:08.068146406Z" level=info msg="StartContainer for \"1e16373a14008239a9d30f2c40f35b8ac0345f1a95d8c348d7eef080ac376c02\" returns successfully"
Aug 13 00:49:08.441877 systemd[1]: Started sshd@13-10.0.0.115:22-10.0.0.1:39484.service - OpenSSH per-connection server daemon (10.0.0.1:39484).
Aug 13 00:49:08.520519 sshd[5373]: Accepted publickey for core from 10.0.0.1 port 39484 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:49:08.522370 sshd-session[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:08.527139 systemd-logind[1558]: New session 14 of user core.
Aug 13 00:49:08.540601 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 13 00:49:08.789382 sshd[5376]: Connection closed by 10.0.0.1 port 39484
Aug 13 00:49:08.789799 sshd-session[5373]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:08.801453 systemd[1]: sshd@13-10.0.0.115:22-10.0.0.1:39484.service: Deactivated successfully.
Aug 13 00:49:08.803700 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 00:49:08.804634 systemd-logind[1558]: Session 14 logged out. Waiting for processes to exit.
Aug 13 00:49:08.808877 systemd[1]: Started sshd@14-10.0.0.115:22-10.0.0.1:39486.service - OpenSSH per-connection server daemon (10.0.0.1:39486).
Aug 13 00:49:08.809632 systemd-logind[1558]: Removed session 14.
Aug 13 00:49:08.866102 sshd[5395]: Accepted publickey for core from 10.0.0.1 port 39486 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:49:08.868237 sshd-session[5395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:08.874122 systemd-logind[1558]: New session 15 of user core.
Aug 13 00:49:08.884590 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 00:49:09.683420 sshd[5397]: Connection closed by 10.0.0.1 port 39486
Aug 13 00:49:09.683728 sshd-session[5395]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:09.701719 systemd[1]: sshd@14-10.0.0.115:22-10.0.0.1:39486.service: Deactivated successfully.
Aug 13 00:49:09.704065 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 00:49:09.705245 systemd-logind[1558]: Session 15 logged out. Waiting for processes to exit.
Aug 13 00:49:09.709265 systemd[1]: Started sshd@15-10.0.0.115:22-10.0.0.1:39488.service - OpenSSH per-connection server daemon (10.0.0.1:39488).
Aug 13 00:49:09.710401 systemd-logind[1558]: Removed session 15.
Aug 13 00:49:09.764182 sshd[5411]: Accepted publickey for core from 10.0.0.1 port 39488 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:49:09.766191 sshd-session[5411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:09.772062 systemd-logind[1558]: New session 16 of user core.
Aug 13 00:49:09.781504 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 00:49:10.069917 sshd[5413]: Connection closed by 10.0.0.1 port 39488
Aug 13 00:49:10.070268 sshd-session[5411]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:10.074740 systemd[1]: sshd@15-10.0.0.115:22-10.0.0.1:39488.service: Deactivated successfully.
Aug 13 00:49:10.077626 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 00:49:10.078895 systemd-logind[1558]: Session 16 logged out. Waiting for processes to exit.
Aug 13 00:49:10.080177 systemd-logind[1558]: Removed session 16.
Aug 13 00:49:12.070536 containerd[1579]: time="2025-08-13T00:49:12.070405871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:49:12.085282 containerd[1579]: time="2025-08-13T00:49:12.085233759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784"
Aug 13 00:49:12.115504 containerd[1579]: time="2025-08-13T00:49:12.115437416Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:49:12.138002 containerd[1579]: time="2025-08-13T00:49:12.137945479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:49:12.138669 containerd[1579]: time="2025-08-13T00:49:12.138630581Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 4.868385618s"
Aug 13 00:49:12.138669 containerd[1579]: time="2025-08-13T00:49:12.138667332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Aug 13 00:49:12.214551 containerd[1579]: time="2025-08-13T00:49:12.214473758Z" level=info msg="CreateContainer within sandbox \"f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Aug 13 00:49:12.590430 containerd[1579]: time="2025-08-13T00:49:12.590345453Z" level=info msg="Container 895ce0542a58414f7b53743c6ff47148b357addcc0f4e64acc59afcc2687c881: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:49:13.211294 containerd[1579]: time="2025-08-13T00:49:13.211250168Z" level=info msg="CreateContainer within sandbox \"f450811790a2074f25b37643bf2d1b95bb4236ad1eceb238e2bcfc3711e7383d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"895ce0542a58414f7b53743c6ff47148b357addcc0f4e64acc59afcc2687c881\""
Aug 13 00:49:13.212022 containerd[1579]: time="2025-08-13T00:49:13.211973683Z" level=info msg="StartContainer for \"895ce0542a58414f7b53743c6ff47148b357addcc0f4e64acc59afcc2687c881\""
Aug 13 00:49:13.213629 containerd[1579]: time="2025-08-13T00:49:13.213603163Z" level=info msg="connecting to shim 895ce0542a58414f7b53743c6ff47148b357addcc0f4e64acc59afcc2687c881" address="unix:///run/containerd/s/b644cd5d0f360c0500155cf850b894c587427b8b8a73ea56f46f73569552afab" protocol=ttrpc version=3
Aug 13 00:49:13.241498 systemd[1]: Started cri-containerd-895ce0542a58414f7b53743c6ff47148b357addcc0f4e64acc59afcc2687c881.scope - libcontainer container 895ce0542a58414f7b53743c6ff47148b357addcc0f4e64acc59afcc2687c881.
Aug 13 00:49:13.323554 containerd[1579]: time="2025-08-13T00:49:13.323505454Z" level=info msg="StartContainer for \"895ce0542a58414f7b53743c6ff47148b357addcc0f4e64acc59afcc2687c881\" returns successfully"
Aug 13 00:49:13.883456 kubelet[2784]: I0813 00:49:13.883408 2784 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Aug 13 00:49:13.884842 kubelet[2784]: I0813 00:49:13.884462 2784 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Aug 13 00:49:14.106416 kubelet[2784]: I0813 00:49:14.105784 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-brc88" podStartSLOduration=28.305999158 podStartE2EDuration="52.105761926s" podCreationTimestamp="2025-08-13 00:48:22 +0000 UTC" firstStartedPulling="2025-08-13 00:48:48.339856578 +0000 UTC m=+45.628109431" lastFinishedPulling="2025-08-13 00:49:12.139619336 +0000 UTC m=+69.427872199" observedRunningTime="2025-08-13 00:49:14.105265866 +0000 UTC m=+71.393518719" watchObservedRunningTime="2025-08-13 00:49:14.105761926 +0000 UTC m=+71.394014779"
Aug 13 00:49:14.106707 kubelet[2784]: I0813 00:49:14.106469 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-64bfb87c69-p7ts7" podStartSLOduration=8.215793495 podStartE2EDuration="28.106460513s" podCreationTimestamp="2025-08-13 00:48:46 +0000 UTC" firstStartedPulling="2025-08-13 00:48:47.379374836 +0000 UTC m=+44.667627689" lastFinishedPulling="2025-08-13 00:49:07.270041854 +0000 UTC m=+64.558294707" observedRunningTime="2025-08-13 00:49:09.228663269 +0000 UTC m=+66.516916122" watchObservedRunningTime="2025-08-13 00:49:14.106460513 +0000 UTC m=+71.394713366"
Aug 13 00:49:15.088043 systemd[1]: Started sshd@16-10.0.0.115:22-10.0.0.1:44282.service - OpenSSH per-connection server daemon (10.0.0.1:44282).
Aug 13 00:49:15.158833 sshd[5473]: Accepted publickey for core from 10.0.0.1 port 44282 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:49:15.160909 sshd-session[5473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:15.168359 systemd-logind[1558]: New session 17 of user core.
Aug 13 00:49:15.174494 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 00:49:15.382643 sshd[5475]: Connection closed by 10.0.0.1 port 44282
Aug 13 00:49:15.382864 sshd-session[5473]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:15.390065 systemd[1]: sshd@16-10.0.0.115:22-10.0.0.1:44282.service: Deactivated successfully.
Aug 13 00:49:15.392453 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 00:49:15.393446 systemd-logind[1558]: Session 17 logged out. Waiting for processes to exit.
Aug 13 00:49:15.395558 systemd-logind[1558]: Removed session 17.
Aug 13 00:49:17.022411 containerd[1579]: time="2025-08-13T00:49:17.022348944Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f21ad92c6d682cd2a701bd7f1d51a753486d5ef685c90ea6e15e7d2486cb7b17\" id:\"c960fec035f0d4387676fccbe3dd7ec82d8102e4de021919888b043bb1b9d5b5\" pid:5501 exit_status:1 exited_at:{seconds:1755046157 nanos:21914815}"
Aug 13 00:49:20.399350 systemd[1]: Started sshd@17-10.0.0.115:22-10.0.0.1:34378.service - OpenSSH per-connection server daemon (10.0.0.1:34378).
Aug 13 00:49:20.460051 sshd[5514]: Accepted publickey for core from 10.0.0.1 port 34378 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:49:20.462222 sshd-session[5514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:20.467467 systemd-logind[1558]: New session 18 of user core.
Aug 13 00:49:20.475477 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 00:49:20.603601 sshd[5516]: Connection closed by 10.0.0.1 port 34378
Aug 13 00:49:20.603963 sshd-session[5514]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:20.608498 systemd[1]: sshd@17-10.0.0.115:22-10.0.0.1:34378.service: Deactivated successfully.
Aug 13 00:49:20.610802 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 00:49:20.611726 systemd-logind[1558]: Session 18 logged out. Waiting for processes to exit.
Aug 13 00:49:20.613155 systemd-logind[1558]: Removed session 18.
Aug 13 00:49:21.811333 kubelet[2784]: E0813 00:49:21.811269 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:49:25.617411 systemd[1]: Started sshd@18-10.0.0.115:22-10.0.0.1:34394.service - OpenSSH per-connection server daemon (10.0.0.1:34394).
Aug 13 00:49:25.688492 sshd[5531]: Accepted publickey for core from 10.0.0.1 port 34394 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:49:25.690038 sshd-session[5531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:25.694809 systemd-logind[1558]: New session 19 of user core.
Aug 13 00:49:25.705478 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 00:49:25.957718 sshd[5533]: Connection closed by 10.0.0.1 port 34394
Aug 13 00:49:25.958059 sshd-session[5531]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:25.962462 systemd[1]: sshd@18-10.0.0.115:22-10.0.0.1:34394.service: Deactivated successfully.
Aug 13 00:49:25.964718 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 00:49:25.965791 systemd-logind[1558]: Session 19 logged out. Waiting for processes to exit.
Aug 13 00:49:25.967291 systemd-logind[1558]: Removed session 19.
Aug 13 00:49:27.184865 containerd[1579]: time="2025-08-13T00:49:27.184679839Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b546aa4028cbb1a5082c638267f5d5ba2a786238d93b91bdea96758821ea76af\" id:\"f75fd9b102d421b2170443afc3a0f59f43feb3b51dce694ea46d91c28d85f403\" pid:5557 exited_at:{seconds:1755046167 nanos:184053076}"
Aug 13 00:49:27.812647 kubelet[2784]: E0813 00:49:27.812459 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:49:30.811085 kubelet[2784]: E0813 00:49:30.811037 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:49:30.972033 systemd[1]: Started sshd@19-10.0.0.115:22-10.0.0.1:48986.service - OpenSSH per-connection server daemon (10.0.0.1:48986).
Aug 13 00:49:31.071209 sshd[5578]: Accepted publickey for core from 10.0.0.1 port 48986 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:49:31.073495 sshd-session[5578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:31.077902 systemd-logind[1558]: New session 20 of user core.
Aug 13 00:49:31.087462 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 00:49:31.282208 sshd[5580]: Connection closed by 10.0.0.1 port 48986
Aug 13 00:49:31.282658 sshd-session[5578]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:31.287237 systemd[1]: sshd@19-10.0.0.115:22-10.0.0.1:48986.service: Deactivated successfully.
Aug 13 00:49:31.289502 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 00:49:31.292036 systemd-logind[1558]: Session 20 logged out. Waiting for processes to exit.
Aug 13 00:49:31.293283 systemd-logind[1558]: Removed session 20.
Aug 13 00:49:33.811069 kubelet[2784]: E0813 00:49:33.811007 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:49:34.076249 containerd[1579]: time="2025-08-13T00:49:34.076123122Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08d4c1f9735595beac3c29e5c0bda47b8b768f1f4770f692e4b1192b3c2651cd\" id:\"a3de19cc47b8d4516e81049b3e312ffbe1d977d5270894d0ff22c6a69eb17925\" pid:5603 exited_at:{seconds:1755046174 nanos:75742829}"
Aug 13 00:49:36.301221 systemd[1]: Started sshd@20-10.0.0.115:22-10.0.0.1:48996.service - OpenSSH per-connection server daemon (10.0.0.1:48996).
Aug 13 00:49:36.367644 sshd[5615]: Accepted publickey for core from 10.0.0.1 port 48996 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:49:36.369524 sshd-session[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:36.374637 systemd-logind[1558]: New session 21 of user core.
Aug 13 00:49:36.382466 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 00:49:36.568715 sshd[5617]: Connection closed by 10.0.0.1 port 48996
Aug 13 00:49:36.568903 sshd-session[5615]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:36.584104 systemd[1]: sshd@20-10.0.0.115:22-10.0.0.1:48996.service: Deactivated successfully.
Aug 13 00:49:36.586000 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 00:49:36.586823 systemd-logind[1558]: Session 21 logged out. Waiting for processes to exit.
Aug 13 00:49:36.590117 systemd[1]: Started sshd@21-10.0.0.115:22-10.0.0.1:49004.service - OpenSSH per-connection server daemon (10.0.0.1:49004).
Aug 13 00:49:36.591048 systemd-logind[1558]: Removed session 21.
Aug 13 00:49:36.646400 sshd[5630]: Accepted publickey for core from 10.0.0.1 port 49004 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:49:36.647900 sshd-session[5630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:36.652481 systemd-logind[1558]: New session 22 of user core.
Aug 13 00:49:36.661483 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 00:49:37.955254 sshd[5632]: Connection closed by 10.0.0.1 port 49004
Aug 13 00:49:37.956129 sshd-session[5630]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:37.965815 systemd[1]: sshd@21-10.0.0.115:22-10.0.0.1:49004.service: Deactivated successfully.
Aug 13 00:49:37.968188 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 00:49:37.968957 systemd-logind[1558]: Session 22 logged out. Waiting for processes to exit.
Aug 13 00:49:37.972535 systemd[1]: Started sshd@22-10.0.0.115:22-10.0.0.1:49020.service - OpenSSH per-connection server daemon (10.0.0.1:49020).
Aug 13 00:49:37.973548 systemd-logind[1558]: Removed session 22.
Aug 13 00:49:38.036279 sshd[5644]: Accepted publickey for core from 10.0.0.1 port 49020 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:49:38.038066 sshd-session[5644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:38.043632 systemd-logind[1558]: New session 23 of user core.
Aug 13 00:49:38.052504 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 13 00:49:38.494720 containerd[1579]: time="2025-08-13T00:49:38.494677046Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b546aa4028cbb1a5082c638267f5d5ba2a786238d93b91bdea96758821ea76af\" id:\"8e5fd32f062f783492c8c5e5b1b32f063b1cb09577824a38a57f576a455ab540\" pid:5665 exited_at:{seconds:1755046178 nanos:494343292}"
Aug 13 00:49:39.935680 sshd[5646]: Connection closed by 10.0.0.1 port 49020
Aug 13 00:49:39.936062 sshd-session[5644]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:39.951495 systemd[1]: sshd@22-10.0.0.115:22-10.0.0.1:49020.service: Deactivated successfully.
Aug 13 00:49:39.953752 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 00:49:39.954522 systemd-logind[1558]: Session 23 logged out. Waiting for processes to exit.
Aug 13 00:49:39.958112 systemd[1]: Started sshd@23-10.0.0.115:22-10.0.0.1:40662.service - OpenSSH per-connection server daemon (10.0.0.1:40662).
Aug 13 00:49:39.959062 systemd-logind[1558]: Removed session 23.
Aug 13 00:49:40.024387 sshd[5692]: Accepted publickey for core from 10.0.0.1 port 40662 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:49:40.026397 sshd-session[5692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:40.032147 systemd-logind[1558]: New session 24 of user core.
Aug 13 00:49:40.040522 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 00:49:40.432088 sshd[5694]: Connection closed by 10.0.0.1 port 40662
Aug 13 00:49:40.433254 sshd-session[5692]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:40.443667 systemd[1]: sshd@23-10.0.0.115:22-10.0.0.1:40662.service: Deactivated successfully.
Aug 13 00:49:40.447639 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 00:49:40.451918 systemd-logind[1558]: Session 24 logged out. Waiting for processes to exit.
Aug 13 00:49:40.453723 systemd[1]: Started sshd@24-10.0.0.115:22-10.0.0.1:40674.service - OpenSSH per-connection server daemon (10.0.0.1:40674).
Aug 13 00:49:40.455140 systemd-logind[1558]: Removed session 24.
Aug 13 00:49:40.515171 sshd[5706]: Accepted publickey for core from 10.0.0.1 port 40674 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:49:40.516791 sshd-session[5706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:40.521922 systemd-logind[1558]: New session 25 of user core.
Aug 13 00:49:40.529440 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 13 00:49:40.651475 sshd[5708]: Connection closed by 10.0.0.1 port 40674
Aug 13 00:49:40.651845 sshd-session[5706]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:40.655943 systemd[1]: sshd@24-10.0.0.115:22-10.0.0.1:40674.service: Deactivated successfully.
Aug 13 00:49:40.657942 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 00:49:40.658669 systemd-logind[1558]: Session 25 logged out. Waiting for processes to exit.
Aug 13 00:49:40.660060 systemd-logind[1558]: Removed session 25.
Aug 13 00:49:45.665871 systemd[1]: Started sshd@25-10.0.0.115:22-10.0.0.1:40680.service - OpenSSH per-connection server daemon (10.0.0.1:40680).
Aug 13 00:49:45.716912 sshd[5725]: Accepted publickey for core from 10.0.0.1 port 40680 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:49:45.718661 sshd-session[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:45.723236 systemd-logind[1558]: New session 26 of user core.
Aug 13 00:49:45.729533 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 13 00:49:45.874582 sshd[5727]: Connection closed by 10.0.0.1 port 40680
Aug 13 00:49:45.874915 sshd-session[5725]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:45.879408 systemd[1]: sshd@25-10.0.0.115:22-10.0.0.1:40680.service: Deactivated successfully.
Aug 13 00:49:45.881554 systemd[1]: session-26.scope: Deactivated successfully.
Aug 13 00:49:45.882426 systemd-logind[1558]: Session 26 logged out. Waiting for processes to exit.
Aug 13 00:49:45.883627 systemd-logind[1558]: Removed session 26.
Aug 13 00:49:47.042971 containerd[1579]: time="2025-08-13T00:49:47.042915186Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f21ad92c6d682cd2a701bd7f1d51a753486d5ef685c90ea6e15e7d2486cb7b17\" id:\"da3ce52994e4029d91420399202677b9ca21a83ec20690572b46dd67d1b2722c\" pid:5752 exited_at:{seconds:1755046187 nanos:42573780}"
Aug 13 00:49:50.887694 systemd[1]: Started sshd@26-10.0.0.115:22-10.0.0.1:49240.service - OpenSSH per-connection server daemon (10.0.0.1:49240).
Aug 13 00:49:50.960240 sshd[5765]: Accepted publickey for core from 10.0.0.1 port 49240 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:49:50.962239 sshd-session[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:50.967133 systemd-logind[1558]: New session 27 of user core.
Aug 13 00:49:50.978598 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 13 00:49:51.163106 sshd[5767]: Connection closed by 10.0.0.1 port 49240
Aug 13 00:49:51.164936 sshd-session[5765]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:51.169960 systemd[1]: sshd@26-10.0.0.115:22-10.0.0.1:49240.service: Deactivated successfully.
Aug 13 00:49:51.172305 systemd[1]: session-27.scope: Deactivated successfully.
Aug 13 00:49:51.173162 systemd-logind[1558]: Session 27 logged out. Waiting for processes to exit.
Aug 13 00:49:51.174214 systemd-logind[1558]: Removed session 27.
Aug 13 00:49:54.811555 kubelet[2784]: E0813 00:49:54.810994 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:49:56.185762 systemd[1]: Started sshd@27-10.0.0.115:22-10.0.0.1:49256.service - OpenSSH per-connection server daemon (10.0.0.1:49256).
Aug 13 00:49:56.241584 sshd[5783]: Accepted publickey for core from 10.0.0.1 port 49256 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:49:56.243155 sshd-session[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:56.251371 systemd-logind[1558]: New session 28 of user core.
Aug 13 00:49:56.257457 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 13 00:49:56.399664 sshd[5786]: Connection closed by 10.0.0.1 port 49256
Aug 13 00:49:56.400593 sshd-session[5783]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:56.405661 systemd[1]: sshd@27-10.0.0.115:22-10.0.0.1:49256.service: Deactivated successfully.
Aug 13 00:49:56.408557 systemd[1]: session-28.scope: Deactivated successfully.
Aug 13 00:49:56.409495 systemd-logind[1558]: Session 28 logged out. Waiting for processes to exit.
Aug 13 00:49:56.411728 systemd-logind[1558]: Removed session 28.
Aug 13 00:49:57.089802 containerd[1579]: time="2025-08-13T00:49:57.089754649Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b546aa4028cbb1a5082c638267f5d5ba2a786238d93b91bdea96758821ea76af\" id:\"6eb0689481e3d83b2b6ebcf25c30abaf5f236d5e7215ec0023592cb254ee299a\" pid:5813 exited_at:{seconds:1755046197 nanos:89423191}"
Aug 13 00:50:01.418923 systemd[1]: Started sshd@28-10.0.0.115:22-10.0.0.1:51876.service - OpenSSH per-connection server daemon (10.0.0.1:51876).
Aug 13 00:50:01.495505 sshd[5826]: Accepted publickey for core from 10.0.0.1 port 51876 ssh2: RSA SHA256:ktlnSeXWRd6Gkwwt2WQG/TKz0mcgIXPUWS2WbZzKZ4g
Aug 13 00:50:01.497364 sshd-session[5826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:50:01.502801 systemd-logind[1558]: New session 29 of user core.
Aug 13 00:50:01.512533 systemd[1]: Started session-29.scope - Session 29 of User core.
Aug 13 00:50:01.645003 sshd[5828]: Connection closed by 10.0.0.1 port 51876
Aug 13 00:50:01.645499 sshd-session[5826]: pam_unix(sshd:session): session closed for user core
Aug 13 00:50:01.650306 systemd[1]: sshd@28-10.0.0.115:22-10.0.0.1:51876.service: Deactivated successfully.
Aug 13 00:50:01.652576 systemd[1]: session-29.scope: Deactivated successfully.
Aug 13 00:50:01.654705 systemd-logind[1558]: Session 29 logged out. Waiting for processes to exit.
Aug 13 00:50:01.656473 systemd-logind[1558]: Removed session 29.
Aug 13 00:50:02.338545 containerd[1579]: time="2025-08-13T00:50:02.338483408Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08d4c1f9735595beac3c29e5c0bda47b8b768f1f4770f692e4b1192b3c2651cd\" id:\"fda34a6743d5dc25453e47a0a83c5b45c93368eeebd5f5fcffe6d4670dfa6bf3\" pid:5853 exited_at:{seconds:1755046202 nanos:338099302}"