Aug 13 07:16:25.131140 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025 Aug 13 07:16:25.131162 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:16:25.131173 kernel: BIOS-provided physical RAM map: Aug 13 07:16:25.131179 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Aug 13 07:16:25.131185 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Aug 13 07:16:25.131192 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 13 07:16:25.131199 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Aug 13 07:16:25.131205 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Aug 13 07:16:25.131212 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Aug 13 07:16:25.131220 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Aug 13 07:16:25.131226 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 13 07:16:25.131233 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 13 07:16:25.131239 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Aug 13 07:16:25.131246 kernel: NX (Execute Disable) protection: active Aug 13 07:16:25.131253 kernel: APIC: Static calls initialized Aug 13 07:16:25.131262 kernel: SMBIOS 2.8 present. 
Aug 13 07:16:25.131269 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Aug 13 07:16:25.131276 kernel: Hypervisor detected: KVM Aug 13 07:16:25.131283 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 07:16:25.131290 kernel: kvm-clock: using sched offset of 2206519705 cycles Aug 13 07:16:25.131297 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 07:16:25.131304 kernel: tsc: Detected 2794.750 MHz processor Aug 13 07:16:25.131311 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 07:16:25.131318 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 07:16:25.131333 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Aug 13 07:16:25.131344 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Aug 13 07:16:25.131351 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 07:16:25.131358 kernel: Using GB pages for direct mapping Aug 13 07:16:25.131365 kernel: ACPI: Early table checksum verification disabled Aug 13 07:16:25.131384 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Aug 13 07:16:25.131391 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:16:25.131398 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:16:25.131405 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:16:25.131415 kernel: ACPI: FACS 0x000000009CFE0000 000040 Aug 13 07:16:25.131422 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:16:25.131429 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:16:25.131436 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:16:25.131443 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:16:25.131450 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Aug 13 07:16:25.131457 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Aug 13 07:16:25.131468 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Aug 13 07:16:25.131477 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Aug 13 07:16:25.131484 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Aug 13 07:16:25.131492 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Aug 13 07:16:25.131499 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Aug 13 07:16:25.131506 kernel: No NUMA configuration found Aug 13 07:16:25.131513 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Aug 13 07:16:25.131520 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Aug 13 07:16:25.131530 kernel: Zone ranges: Aug 13 07:16:25.131537 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 07:16:25.131544 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Aug 13 07:16:25.131552 kernel: Normal empty Aug 13 07:16:25.131559 kernel: Movable zone start for each node Aug 13 07:16:25.131566 kernel: Early memory node ranges Aug 13 07:16:25.131573 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 13 07:16:25.131580 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Aug 13 07:16:25.131588 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Aug 13 07:16:25.131597 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 07:16:25.131604 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 13 07:16:25.131612 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Aug 13 07:16:25.131619 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 13 07:16:25.131626 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 07:16:25.131633 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 13 07:16:25.131641 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 13 07:16:25.131648 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 07:16:25.131655 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 07:16:25.131665 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 07:16:25.131672 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 07:16:25.131679 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 07:16:25.131686 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 13 07:16:25.131694 kernel: TSC deadline timer available Aug 13 07:16:25.131701 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Aug 13 07:16:25.131708 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 13 07:16:25.131715 kernel: kvm-guest: KVM setup pv remote TLB flush Aug 13 07:16:25.131722 kernel: kvm-guest: setup PV sched yield Aug 13 07:16:25.131730 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Aug 13 07:16:25.131739 kernel: Booting paravirtualized kernel on KVM Aug 13 07:16:25.131747 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 07:16:25.131755 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Aug 13 07:16:25.131762 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288 Aug 13 07:16:25.131769 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152 Aug 13 07:16:25.131776 kernel: pcpu-alloc: [0] 0 1 2 3 Aug 13 07:16:25.131784 kernel: kvm-guest: PV spinlocks enabled Aug 13 07:16:25.131791 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 13 07:16:25.131799 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:16:25.131810 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 07:16:25.131817 kernel: random: crng init done Aug 13 07:16:25.131824 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 07:16:25.131832 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 07:16:25.131839 kernel: Fallback order for Node 0: 0 Aug 13 07:16:25.131846 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Aug 13 07:16:25.131853 kernel: Policy zone: DMA32 Aug 13 07:16:25.131861 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 07:16:25.131871 kernel: Memory: 2434588K/2571752K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 136904K reserved, 0K cma-reserved) Aug 13 07:16:25.131878 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Aug 13 07:16:25.131886 kernel: ftrace: allocating 37968 entries in 149 pages Aug 13 07:16:25.131893 kernel: ftrace: allocated 149 pages with 4 groups Aug 13 07:16:25.131900 kernel: Dynamic Preempt: voluntary Aug 13 07:16:25.131907 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 07:16:25.131915 kernel: rcu: RCU event tracing is enabled. Aug 13 07:16:25.131923 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Aug 13 07:16:25.131930 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 07:16:25.131940 kernel: Rude variant of Tasks RCU enabled. Aug 13 07:16:25.131947 kernel: Tracing variant of Tasks RCU enabled. Aug 13 07:16:25.131954 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 07:16:25.131962 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Aug 13 07:16:25.131969 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Aug 13 07:16:25.131976 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 13 07:16:25.131983 kernel: Console: colour VGA+ 80x25 Aug 13 07:16:25.131991 kernel: printk: console [ttyS0] enabled Aug 13 07:16:25.131998 kernel: ACPI: Core revision 20230628 Aug 13 07:16:25.132008 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 13 07:16:25.132015 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 07:16:25.132022 kernel: x2apic enabled Aug 13 07:16:25.132029 kernel: APIC: Switched APIC routing to: physical x2apic Aug 13 07:16:25.132037 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Aug 13 07:16:25.132044 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Aug 13 07:16:25.132052 kernel: kvm-guest: setup PV IPIs Aug 13 07:16:25.132068 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 13 07:16:25.132076 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Aug 13 07:16:25.132084 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750) Aug 13 07:16:25.132091 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Aug 13 07:16:25.132099 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Aug 13 07:16:25.132109 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Aug 13 07:16:25.132116 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 07:16:25.132124 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 07:16:25.132132 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 07:16:25.132139 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Aug 13 07:16:25.132149 kernel: RETBleed: Mitigation: untrained return thunk Aug 13 07:16:25.132157 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 07:16:25.132165 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 13 07:16:25.132172 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! 
Aug 13 07:16:25.132180 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Aug 13 07:16:25.132188 kernel: x86/bugs: return thunk changed Aug 13 07:16:25.132195 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Aug 13 07:16:25.132203 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 07:16:25.132213 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 07:16:25.132220 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 07:16:25.132228 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 07:16:25.132236 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Aug 13 07:16:25.132244 kernel: Freeing SMP alternatives memory: 32K Aug 13 07:16:25.132251 kernel: pid_max: default: 32768 minimum: 301 Aug 13 07:16:25.132259 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Aug 13 07:16:25.132266 kernel: landlock: Up and running. Aug 13 07:16:25.132274 kernel: SELinux: Initializing. Aug 13 07:16:25.132284 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 07:16:25.132291 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 07:16:25.132299 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Aug 13 07:16:25.132307 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Aug 13 07:16:25.132315 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Aug 13 07:16:25.132329 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Aug 13 07:16:25.132337 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Aug 13 07:16:25.132345 kernel: ... version: 0 Aug 13 07:16:25.132353 kernel: ... bit width: 48 Aug 13 07:16:25.132362 kernel: ... generic registers: 6 Aug 13 07:16:25.132380 kernel: ... value mask: 0000ffffffffffff Aug 13 07:16:25.132387 kernel: ... max period: 00007fffffffffff Aug 13 07:16:25.132395 kernel: ... fixed-purpose events: 0 Aug 13 07:16:25.132403 kernel: ... event mask: 000000000000003f Aug 13 07:16:25.132410 kernel: signal: max sigframe size: 1776 Aug 13 07:16:25.132418 kernel: rcu: Hierarchical SRCU implementation. Aug 13 07:16:25.132425 kernel: rcu: Max phase no-delay instances is 400. Aug 13 07:16:25.132433 kernel: smp: Bringing up secondary CPUs ... Aug 13 07:16:25.132443 kernel: smpboot: x86: Booting SMP configuration: Aug 13 07:16:25.132450 kernel: .... 
node #0, CPUs: #1 #2 #3 Aug 13 07:16:25.132458 kernel: smp: Brought up 1 node, 4 CPUs Aug 13 07:16:25.132465 kernel: smpboot: Max logical packages: 1 Aug 13 07:16:25.132473 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Aug 13 07:16:25.132481 kernel: devtmpfs: initialized Aug 13 07:16:25.132488 kernel: x86/mm: Memory block size: 128MB Aug 13 07:16:25.132496 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 07:16:25.132504 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Aug 13 07:16:25.132514 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 07:16:25.132521 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 07:16:25.132529 kernel: audit: initializing netlink subsys (disabled) Aug 13 07:16:25.132537 kernel: audit: type=2000 audit(1755069384.745:1): state=initialized audit_enabled=0 res=1 Aug 13 07:16:25.132544 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 07:16:25.132552 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 07:16:25.132559 kernel: cpuidle: using governor menu Aug 13 07:16:25.132567 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 07:16:25.132575 kernel: dca service started, version 1.12.1 Aug 13 07:16:25.132585 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Aug 13 07:16:25.132592 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Aug 13 07:16:25.132600 kernel: PCI: Using configuration type 1 for base access Aug 13 07:16:25.132607 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Aug 13 07:16:25.132615 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 07:16:25.132623 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 07:16:25.132630 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 07:16:25.132638 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 07:16:25.132645 kernel: ACPI: Added _OSI(Module Device) Aug 13 07:16:25.132655 kernel: ACPI: Added _OSI(Processor Device) Aug 13 07:16:25.132663 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 07:16:25.132670 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 07:16:25.132678 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 13 07:16:25.132685 kernel: ACPI: Interpreter enabled Aug 13 07:16:25.132693 kernel: ACPI: PM: (supports S0 S3 S5) Aug 13 07:16:25.132701 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 07:16:25.132708 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 07:16:25.132716 kernel: PCI: Using E820 reservations for host bridge windows Aug 13 07:16:25.132726 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Aug 13 07:16:25.132733 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 07:16:25.132911 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 13 07:16:25.133040 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Aug 13 07:16:25.133161 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Aug 13 07:16:25.133171 kernel: PCI host bridge to bus 0000:00 Aug 13 07:16:25.133295 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 07:16:25.133532 kernel: pci_bus 0000:00: root bus resource 
[io 0x0d00-0xffff window] Aug 13 07:16:25.133644 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 07:16:25.133752 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Aug 13 07:16:25.133859 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Aug 13 07:16:25.133966 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Aug 13 07:16:25.134075 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 07:16:25.134239 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Aug 13 07:16:25.134422 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Aug 13 07:16:25.134547 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Aug 13 07:16:25.134666 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Aug 13 07:16:25.134783 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Aug 13 07:16:25.134901 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 07:16:25.135029 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Aug 13 07:16:25.135155 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Aug 13 07:16:25.135276 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Aug 13 07:16:25.135424 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Aug 13 07:16:25.135553 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Aug 13 07:16:25.135673 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Aug 13 07:16:25.135793 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Aug 13 07:16:25.135912 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Aug 13 07:16:25.136045 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Aug 13 07:16:25.136165 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Aug 13 07:16:25.136284 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Aug 13 07:16:25.136435 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Aug 13 07:16:25.136557 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Aug 13 07:16:25.136685 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Aug 13 07:16:25.136804 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Aug 13 07:16:25.136938 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Aug 13 07:16:25.137058 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Aug 13 07:16:25.137178 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Aug 13 07:16:25.137304 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Aug 13 07:16:25.137463 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Aug 13 07:16:25.137474 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 07:16:25.137483 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 07:16:25.137494 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 07:16:25.137502 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 07:16:25.137509 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Aug 13 07:16:25.137517 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Aug 13 07:16:25.137524 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Aug 13 07:16:25.137532 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Aug 13 07:16:25.137539 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Aug 
13 07:16:25.137547 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Aug 13 07:16:25.137554 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Aug 13 07:16:25.137564 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Aug 13 07:16:25.137571 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Aug 13 07:16:25.137579 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Aug 13 07:16:25.137586 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Aug 13 07:16:25.137593 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Aug 13 07:16:25.137601 kernel: iommu: Default domain type: Translated Aug 13 07:16:25.137609 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 07:16:25.137616 kernel: PCI: Using ACPI for IRQ routing Aug 13 07:16:25.137623 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 07:16:25.137633 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Aug 13 07:16:25.137641 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Aug 13 07:16:25.137761 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Aug 13 07:16:25.137879 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Aug 13 07:16:25.137996 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 07:16:25.138006 kernel: vgaarb: loaded Aug 13 07:16:25.138013 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 13 07:16:25.138021 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 13 07:16:25.138032 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 07:16:25.138040 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 07:16:25.138048 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 07:16:25.138056 kernel: pnp: PnP ACPI init Aug 13 07:16:25.138187 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Aug 13 07:16:25.138198 kernel: pnp: PnP ACPI: found 6 devices Aug 13 07:16:25.138206 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 07:16:25.138214 kernel: NET: Registered PF_INET protocol family Aug 13 07:16:25.138224 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 07:16:25.138232 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 13 07:16:25.138240 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 07:16:25.138248 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 07:16:25.138255 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 13 07:16:25.138263 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 13 07:16:25.138271 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 07:16:25.138278 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 07:16:25.138286 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 07:16:25.138296 kernel: NET: Registered PF_XDP protocol family Aug 13 07:16:25.138437 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 07:16:25.138548 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 07:16:25.138656 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 07:16:25.138764 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Aug 13 07:16:25.138872 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] 
Aug 13 07:16:25.138980 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Aug 13 07:16:25.138990 kernel: PCI: CLS 0 bytes, default 64 Aug 13 07:16:25.139002 kernel: Initialise system trusted keyrings Aug 13 07:16:25.139010 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 13 07:16:25.139018 kernel: Key type asymmetric registered Aug 13 07:16:25.139025 kernel: Asymmetric key parser 'x509' registered Aug 13 07:16:25.139033 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 13 07:16:25.139041 kernel: io scheduler mq-deadline registered Aug 13 07:16:25.139049 kernel: io scheduler kyber registered Aug 13 07:16:25.139056 kernel: io scheduler bfq registered Aug 13 07:16:25.139064 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 07:16:25.139072 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Aug 13 07:16:25.139083 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Aug 13 07:16:25.139091 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Aug 13 07:16:25.139099 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 07:16:25.139107 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 07:16:25.139115 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 07:16:25.139122 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 07:16:25.139130 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 07:16:25.139253 kernel: rtc_cmos 00:04: RTC can wake from S4 Aug 13 07:16:25.139267 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 07:16:25.139402 kernel: rtc_cmos 00:04: registered as rtc0 Aug 13 07:16:25.139519 kernel: rtc_cmos 00:04: setting system clock to 2025-08-13T07:16:24 UTC (1755069384) Aug 13 07:16:25.139631 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Aug 13 07:16:25.139641 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Aug 13 07:16:25.139648 kernel: NET: Registered PF_INET6 protocol family Aug 13 07:16:25.139656 kernel: Segment Routing with IPv6 Aug 13 07:16:25.139663 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 07:16:25.139674 kernel: NET: Registered PF_PACKET protocol family Aug 13 07:16:25.139682 kernel: Key type dns_resolver registered Aug 13 07:16:25.139690 kernel: IPI shorthand broadcast: enabled Aug 13 07:16:25.139697 kernel: sched_clock: Marking stable (632003081, 108655093)->(758604162, -17945988) Aug 13 07:16:25.139705 kernel: registered taskstats version 1 Aug 13 07:16:25.139712 kernel: Loading compiled-in X.509 certificates Aug 13 07:16:25.139720 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041' Aug 13 07:16:25.139728 kernel: Key type .fscrypt registered Aug 13 07:16:25.139736 kernel: Key type fscrypt-provisioning registered Aug 13 07:16:25.139746 kernel: ima: No TPM chip found, activating TPM-bypass! 
Aug 13 07:16:25.139753 kernel: ima: Allocated hash algorithm: sha1 Aug 13 07:16:25.139761 kernel: ima: No architecture policies found Aug 13 07:16:25.139768 kernel: clk: Disabling unused clocks Aug 13 07:16:25.139776 kernel: Freeing unused kernel image (initmem) memory: 42876K Aug 13 07:16:25.139783 kernel: Write protecting the kernel read-only data: 36864k Aug 13 07:16:25.139791 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Aug 13 07:16:25.139799 kernel: Run /init as init process Aug 13 07:16:25.139807 kernel: with arguments: Aug 13 07:16:25.139816 kernel: /init Aug 13 07:16:25.139824 kernel: with environment: Aug 13 07:16:25.139831 kernel: HOME=/ Aug 13 07:16:25.139839 kernel: TERM=linux Aug 13 07:16:25.139847 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 07:16:25.139856 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 07:16:25.139866 systemd[1]: Detected virtualization kvm. Aug 13 07:16:25.139874 systemd[1]: Detected architecture x86-64. Aug 13 07:16:25.139885 systemd[1]: Running in initrd. Aug 13 07:16:25.139892 systemd[1]: No hostname configured, using default hostname. Aug 13 07:16:25.139900 systemd[1]: Hostname set to . Aug 13 07:16:25.139909 systemd[1]: Initializing machine ID from VM UUID. Aug 13 07:16:25.139917 systemd[1]: Queued start job for default target initrd.target. Aug 13 07:16:25.139925 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:16:25.139934 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:16:25.139943 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 07:16:25.139954 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:16:25.139973 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 07:16:25.139984 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 07:16:25.139994 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 07:16:25.140005 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 07:16:25.140014 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:16:25.140022 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:16:25.140031 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:16:25.140039 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:16:25.140047 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:16:25.140056 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:16:25.140064 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:16:25.140073 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:16:25.140083 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 07:16:25.140092 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Aug 13 07:16:25.140100 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:16:25.140109 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:16:25.140117 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:16:25.140126 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:16:25.140134 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 07:16:25.140142 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:16:25.140151 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 07:16:25.140162 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 07:16:25.140170 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:16:25.140178 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:16:25.140187 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:16:25.140195 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 07:16:25.140204 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:16:25.140214 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 07:16:25.140226 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 07:16:25.140251 systemd-journald[193]: Collecting audit messages is disabled. Aug 13 07:16:25.140272 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:16:25.140281 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 07:16:25.140292 systemd-journald[193]: Journal started Aug 13 07:16:25.140313 systemd-journald[193]: Runtime Journal (/run/log/journal/e93c8c02904a47739f0e4f70a64b44e2) is 6.0M, max 48.4M, 42.3M free. Aug 13 07:16:25.145513 systemd-modules-load[194]: Inserted module 'overlay' Aug 13 07:16:25.173581 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:16:25.179420 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 07:16:25.181477 systemd-modules-load[194]: Inserted module 'br_netfilter' Aug 13 07:16:25.182519 kernel: Bridge firewalling registered Aug 13 07:16:25.184930 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:16:25.186455 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:16:25.196509 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:16:25.197262 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:16:25.201006 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:16:25.205102 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:16:25.214135 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:16:25.216918 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:16:25.231698 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:16:25.232029 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Aug 13 07:16:25.236036 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 07:16:25.252388 dracut-cmdline[230]: dracut-dracut-053 Aug 13 07:16:25.255384 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:16:25.264962 systemd-resolved[227]: Positive Trust Anchors: Aug 13 07:16:25.264978 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:16:25.265009 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:16:25.267636 systemd-resolved[227]: Defaulting to hostname 'linux'. Aug 13 07:16:25.268707 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:16:25.274629 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:16:25.342403 kernel: SCSI subsystem initialized Aug 13 07:16:25.373391 kernel: Loading iSCSI transport class v2.0-870. Aug 13 07:16:25.383385 kernel: iscsi: registered transport (tcp) Aug 13 07:16:25.406719 kernel: iscsi: registered transport (qla4xxx) Aug 13 07:16:25.406745 kernel: QLogic iSCSI HBA Driver Aug 13 07:16:25.455806 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 07:16:25.465656 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 07:16:25.492790 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 07:16:25.492863 kernel: device-mapper: uevent: version 1.0.3 Aug 13 07:16:25.493822 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 13 07:16:25.536486 kernel: raid6: avx2x4 gen() 22962 MB/s Aug 13 07:16:25.553421 kernel: raid6: avx2x2 gen() 28899 MB/s Aug 13 07:16:25.611709 kernel: raid6: avx2x1 gen() 24871 MB/s Aug 13 07:16:25.611797 kernel: raid6: using algorithm avx2x2 gen() 28899 MB/s Aug 13 07:16:25.652397 kernel: raid6: .... xor() 19970 MB/s, rmw enabled Aug 13 07:16:25.652439 kernel: raid6: using avx2x2 recovery algorithm Aug 13 07:16:25.672400 kernel: xor: automatically using best checksumming function avx Aug 13 07:16:25.826408 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 07:16:25.839489 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:16:25.853132 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:16:25.867630 systemd-udevd[412]: Using default interface naming scheme 'v255'. Aug 13 07:16:25.872531 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:16:25.880519 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Aug 13 07:16:25.895272 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation Aug 13 07:16:25.928191 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:16:25.945547 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:16:26.011447 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:16:26.019532 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 07:16:26.036414 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Aug 13 07:16:26.038731 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Aug 13 07:16:26.045554 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 07:16:26.047870 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:16:26.056482 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 07:16:26.056505 kernel: GPT:9289727 != 19775487 Aug 13 07:16:26.056515 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 07:16:26.056526 kernel: GPT:9289727 != 19775487 Aug 13 07:16:26.056541 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 07:16:26.056551 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:16:26.051409 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:16:26.054780 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:16:26.070741 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 07:16:26.069563 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 07:16:26.074536 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:16:26.075775 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:16:26.078807 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:16:26.080286 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:16:26.084579 kernel: libata version 3.00 loaded. Aug 13 07:16:26.081896 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:16:26.086094 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:16:26.094391 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (462) Aug 13 07:16:26.097005 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:16:26.100920 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:16:26.106794 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (470) Aug 13 07:16:26.118867 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 07:16:26.118914 kernel: AES CTR mode by8 optimization enabled Aug 13 07:16:26.131187 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Aug 13 07:16:26.168600 kernel: ahci 0000:00:1f.2: version 3.0 Aug 13 07:16:26.168803 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 13 07:16:26.168815 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Aug 13 07:16:26.168953 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 13 07:16:26.169088 kernel: scsi host0: ahci Aug 13 07:16:26.169242 kernel: scsi host1: ahci Aug 13 07:16:26.169447 kernel: scsi host2: ahci Aug 13 07:16:26.169594 kernel: scsi host3: ahci Aug 13 07:16:26.169748 kernel: scsi host4: ahci Aug 13 07:16:26.169889 kernel: scsi host5: ahci Aug 13 07:16:26.170030 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Aug 13 07:16:26.170041 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Aug 13 07:16:26.170051 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Aug 13 07:16:26.170061 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Aug 13 07:16:26.170071 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Aug 13 07:16:26.170084 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Aug 13 07:16:26.174648 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 13 07:16:26.177226 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:16:26.189476 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 07:16:26.195573 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Aug 13 07:16:26.198062 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Aug 13 07:16:26.214551 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 07:16:26.216569 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:16:26.254095 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:16:26.333119 disk-uuid[553]: Primary Header is updated. Aug 13 07:16:26.333119 disk-uuid[553]: Secondary Entries is updated. Aug 13 07:16:26.333119 disk-uuid[553]: Secondary Header is updated. 
Aug 13 07:16:26.341413 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:16:26.345411 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:16:26.447410 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 13 07:16:26.447467 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Aug 13 07:16:26.448411 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 13 07:16:26.449406 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 13 07:16:26.449480 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Aug 13 07:16:26.450398 kernel: ata3.00: applying bridge limits Aug 13 07:16:26.451391 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 13 07:16:26.451408 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 13 07:16:26.452404 kernel: ata3.00: configured for UDMA/100 Aug 13 07:16:26.453407 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Aug 13 07:16:26.500397 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Aug 13 07:16:26.500641 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 13 07:16:26.514387 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Aug 13 07:16:27.347218 disk-uuid[563]: The operation has completed successfully. Aug 13 07:16:27.348926 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:16:27.376845 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 07:16:27.376966 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 07:16:27.402620 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 07:16:27.408847 sh[593]: Success Aug 13 07:16:27.423417 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Aug 13 07:16:27.462954 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 07:16:27.483427 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 07:16:27.486420 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 07:16:27.499128 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad Aug 13 07:16:27.499163 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:16:27.499174 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 13 07:16:27.500131 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 13 07:16:27.500884 kernel: BTRFS info (device dm-0): using free space tree Aug 13 07:16:27.505904 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 07:16:27.508217 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 07:16:27.520569 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 07:16:27.522557 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 07:16:27.537993 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:16:27.538049 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:16:27.538063 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:16:27.541422 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:16:27.552165 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Aug 13 07:16:27.554437 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:16:27.645707 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:16:27.652704 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:16:27.660508 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 07:16:27.667623 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 07:16:27.679264 systemd-networkd[771]: lo: Link UP Aug 13 07:16:27.679275 systemd-networkd[771]: lo: Gained carrier Aug 13 07:16:27.680820 systemd-networkd[771]: Enumeration completed Aug 13 07:16:27.681198 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:16:27.681201 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 07:16:27.812644 systemd-networkd[771]: eth0: Link UP Aug 13 07:16:27.812650 systemd-networkd[771]: eth0: Gained carrier Aug 13 07:16:27.812670 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:16:27.814005 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:16:27.818292 systemd[1]: Reached target network.target - Network. Aug 13 07:16:27.833485 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.146/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 07:16:27.945994 ignition[774]: Ignition 2.19.0 Aug 13 07:16:27.946021 ignition[774]: Stage: fetch-offline Aug 13 07:16:27.946110 ignition[774]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:16:27.946169 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 07:16:27.946398 ignition[774]: parsed url from cmdline: "" Aug 13 07:16:27.946404 ignition[774]: no config URL provided Aug 13 07:16:27.946411 ignition[774]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:16:27.946424 ignition[774]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:16:27.946496 ignition[774]: op(1): [started] loading QEMU firmware config module Aug 13 07:16:27.946505 ignition[774]: op(1): executing: "modprobe" "qemu_fw_cfg" Aug 13 07:16:27.961795 ignition[774]: op(1): [finished] loading QEMU firmware config module Aug 13 07:16:27.963348 ignition[774]: parsing config with SHA512: dd021759afd40fcaa0b859ff3fc297afb673e158c7aa7d32f36452420d9400fd1cbc2afb23911d16aa031af76493d95a831531cae27fa6e0d2a36f845805bfdf Aug 13 07:16:27.969050 unknown[774]: fetched base config from "system" Aug 13 07:16:27.969069 unknown[774]: fetched user config from "qemu" Aug 13 07:16:27.969480 ignition[774]: fetch-offline: fetch-offline passed Aug 13 07:16:27.969585 ignition[774]: Ignition finished successfully Aug 13 07:16:27.972747 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:16:27.975755 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 13 07:16:27.986580 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Aug 13 07:16:28.022490 ignition[786]: Ignition 2.19.0 Aug 13 07:16:28.022501 ignition[786]: Stage: kargs Aug 13 07:16:28.022683 ignition[786]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:16:28.022695 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 07:16:28.023354 ignition[786]: kargs: kargs passed Aug 13 07:16:28.023412 ignition[786]: Ignition finished successfully Aug 13 07:16:28.027146 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 07:16:28.042554 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 07:16:28.057641 ignition[793]: Ignition 2.19.0 Aug 13 07:16:28.057655 ignition[793]: Stage: disks Aug 13 07:16:28.057857 ignition[793]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:16:28.057871 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 07:16:28.060588 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 07:16:28.058710 ignition[793]: disks: disks passed Aug 13 07:16:28.063506 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 07:16:28.058765 ignition[793]: Ignition finished successfully Aug 13 07:16:28.065106 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 07:16:28.067324 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:16:28.069703 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:16:28.071764 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:16:28.084514 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 07:16:28.096466 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks Aug 13 07:16:28.103539 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 07:16:28.119550 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 07:16:28.254398 kernel: EXT4-fs (vda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none. Aug 13 07:16:28.255563 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 07:16:28.256448 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 07:16:28.271609 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:16:28.274822 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 07:16:28.276352 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 13 07:16:28.285015 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (811) Aug 13 07:16:28.285043 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:16:28.285064 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:16:28.285077 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:16:28.276412 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 07:16:28.276437 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:16:28.290458 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:16:28.283659 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 07:16:28.285790 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Aug 13 07:16:28.291048 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 07:16:28.333657 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 07:16:28.338674 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory Aug 13 07:16:28.343620 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 07:16:28.347413 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 07:16:28.431568 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 07:16:28.443611 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 07:16:28.447202 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 07:16:28.452391 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:16:28.497978 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 07:16:28.499598 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 07:16:28.511252 ignition[923]: INFO : Ignition 2.19.0 Aug 13 07:16:28.511252 ignition[923]: INFO : Stage: mount Aug 13 07:16:28.512866 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:16:28.512866 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 07:16:28.512866 ignition[923]: INFO : mount: mount passed Aug 13 07:16:28.512866 ignition[923]: INFO : Ignition finished successfully Aug 13 07:16:28.518075 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 07:16:28.530472 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 07:16:28.538707 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:16:28.551395 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939) Aug 13 07:16:28.551425 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:16:28.552659 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:16:28.552676 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:16:28.555399 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:16:28.557330 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 13 07:16:28.588306 ignition[956]: INFO : Ignition 2.19.0 Aug 13 07:16:28.588306 ignition[956]: INFO : Stage: files Aug 13 07:16:28.590448 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:16:28.590448 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 07:16:28.590448 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Aug 13 07:16:28.590448 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 07:16:28.590448 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 07:16:28.597507 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 07:16:28.597507 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 07:16:28.597507 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 07:16:28.597507 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Aug 13 07:16:28.597507 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 07:16:28.597507 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:16:28.597507 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:16:28.597507 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 07:16:28.597507 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 07:16:28.597507 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 07:16:28.597507 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Aug 13 07:16:28.592960 unknown[956]: wrote ssh authorized keys file for user: core Aug 13 07:16:28.889175 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Aug 13 07:16:29.125863 systemd-networkd[771]: eth0: Gained IPv6LL Aug 13 07:16:29.673102 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 07:16:29.673102 ignition[956]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Aug 13 07:16:29.677739 ignition[956]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 07:16:29.680260 ignition[956]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 07:16:29.680260 ignition[956]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Aug 13 07:16:29.680260 ignition[956]: INFO : files: op(9): [started] setting preset to disabled for 
"coreos-metadata.service" Aug 13 07:16:29.708991 ignition[956]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 07:16:29.718855 ignition[956]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 07:16:29.721050 ignition[956]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Aug 13 07:16:29.722984 ignition[956]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:16:29.725136 ignition[956]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:16:29.726947 ignition[956]: INFO : files: files passed Aug 13 07:16:29.727782 ignition[956]: INFO : Ignition finished successfully Aug 13 07:16:29.731266 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 07:16:29.739540 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 07:16:29.742058 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 07:16:29.744953 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 07:16:29.745078 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 07:16:29.769912 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Aug 13 07:16:29.774098 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:16:29.774098 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:16:29.778736 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:16:29.782524 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:16:29.784053 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 07:16:29.796497 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 07:16:29.822273 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 07:16:29.823451 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 07:16:29.826162 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 07:16:29.828213 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 07:16:29.830281 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 07:16:29.832622 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 07:16:29.851611 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:16:29.865533 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 07:16:29.874468 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:16:29.876854 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:16:29.879669 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 07:16:29.881535 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 07:16:29.882575 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Aug 13 07:16:29.885137 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 07:16:29.887212 systemd[1]: Stopped target basic.target - Basic System. Aug 13 07:16:29.889090 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 07:16:29.891288 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:16:29.893644 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 07:16:29.895875 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 07:16:29.897905 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:16:29.900397 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 07:16:29.902605 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 07:16:29.904793 systemd[1]: Stopped target swap.target - Swaps. Aug 13 07:16:29.906810 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 07:16:29.908248 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:16:29.910901 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:16:29.913108 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:16:29.915443 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 07:16:29.916389 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:16:29.918895 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 07:16:29.919885 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 07:16:29.922060 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 07:16:29.923116 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:16:29.925525 systemd[1]: Stopped target paths.target - Path Units. Aug 13 07:16:29.927246 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 07:16:29.928319 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:16:29.930975 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 07:16:29.932760 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 07:16:29.934661 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 07:16:29.935517 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:16:29.937431 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 07:16:29.938365 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:16:29.940399 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 07:16:29.941544 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:16:29.943985 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 07:16:29.944944 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 07:16:29.964561 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 07:16:29.966480 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 07:16:29.967540 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:16:29.971444 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 07:16:29.973267 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Aug 13 07:16:29.973439 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:16:29.977320 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 07:16:29.977550 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:16:29.980778 ignition[1010]: INFO : Ignition 2.19.0 Aug 13 07:16:29.980778 ignition[1010]: INFO : Stage: umount Aug 13 07:16:29.982399 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:16:29.982399 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 07:16:29.982399 ignition[1010]: INFO : umount: umount passed Aug 13 07:16:29.982399 ignition[1010]: INFO : Ignition finished successfully Aug 13 07:16:29.983516 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 07:16:29.983676 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 07:16:29.987239 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 07:16:29.987427 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 07:16:29.990881 systemd[1]: Stopped target network.target - Network. Aug 13 07:16:29.992810 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 07:16:29.992886 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 07:16:29.993688 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 07:16:29.993736 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 07:16:29.993997 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 07:16:29.994041 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 07:16:29.994324 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 07:16:29.994427 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 07:16:29.995025 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 07:16:30.001292 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 07:16:30.006469 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 07:16:30.006619 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 07:16:30.007411 systemd-networkd[771]: eth0: DHCPv6 lease lost Aug 13 07:16:30.010841 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 07:16:30.011013 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 07:16:30.012325 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 07:16:30.012629 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:16:30.019557 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 07:16:30.019621 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 07:16:30.019691 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:16:30.020007 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 07:16:30.020050 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:16:30.020305 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 07:16:30.020347 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 07:16:30.020626 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Aug 13 07:16:30.020667 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:16:30.021028 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:16:30.037630 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 07:16:30.037775 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 07:16:30.048585 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 07:16:30.048790 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:16:30.051434 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 07:16:30.051494 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 07:16:30.053450 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 07:16:30.053491 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:16:30.055498 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 07:16:30.055552 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:16:30.057698 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 07:16:30.057747 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 07:16:30.059748 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:16:30.059797 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:16:30.074548 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 07:16:30.080066 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 07:16:30.080140 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:16:30.083715 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:16:30.083767 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:16:30.087119 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 07:16:30.087704 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 07:16:30.087813 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 07:16:30.262846 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 07:16:30.263027 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 07:16:30.265881 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 07:16:30.267495 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 07:16:30.267574 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 07:16:30.278573 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 07:16:30.287439 systemd[1]: Switching root. Aug 13 07:16:30.320783 systemd-journald[193]: Journal stopped Aug 13 07:16:31.460944 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Aug 13 07:16:31.461022 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 07:16:31.461045 kernel: SELinux: policy capability open_perms=1 Aug 13 07:16:31.461059 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 07:16:31.461078 kernel: SELinux: policy capability always_check_network=0 Aug 13 07:16:31.461092 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 07:16:31.461107 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 07:16:31.461133 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 07:16:31.461158 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 07:16:31.461171 kernel: audit: type=1403 audit(1755069390.706:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 07:16:31.461204 systemd[1]: Successfully loaded SELinux policy in 40.990ms. Aug 13 07:16:31.461232 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.067ms. Aug 13 07:16:31.461250 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 07:16:31.461265 systemd[1]: Detected virtualization kvm. Aug 13 07:16:31.461279 systemd[1]: Detected architecture x86-64. Aug 13 07:16:31.461292 systemd[1]: Detected first boot. Aug 13 07:16:31.461307 systemd[1]: Initializing machine ID from VM UUID. Aug 13 07:16:31.461321 zram_generator::config[1054]: No configuration found. Aug 13 07:16:31.461336 systemd[1]: Populated /etc with preset unit settings. Aug 13 07:16:31.461350 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 07:16:31.461364 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 07:16:31.461397 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 07:16:31.461415 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 07:16:31.461429 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 07:16:31.461443 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 07:16:31.461457 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 07:16:31.461474 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 07:16:31.461488 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 07:16:31.461502 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 07:16:31.461516 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 07:16:31.461531 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:16:31.461555 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:16:31.461570 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 07:16:31.461583 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 07:16:31.461597 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Aug 13 07:16:31.461615 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:16:31.461629 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 07:16:31.461643 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:16:31.461657 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 07:16:31.461671 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 07:16:31.461685 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 07:16:31.461699 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 07:16:31.461716 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:16:31.461734 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:16:31.461748 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:16:31.461762 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:16:31.461775 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 07:16:31.461794 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 07:16:31.461808 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:16:31.461822 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:16:31.461836 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:16:31.461851 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 07:16:31.461890 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 07:16:31.461904 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 07:16:31.461919 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 07:16:31.461934 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:16:31.461962 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 07:16:31.461978 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 07:16:31.461994 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 07:16:31.462025 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 07:16:31.462045 systemd[1]: Reached target machines.target - Containers. Aug 13 07:16:31.462061 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 07:16:31.462077 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:16:31.462094 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:16:31.462110 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 07:16:31.462127 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:16:31.462153 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 07:16:31.462170 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:16:31.462186 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Aug 13 07:16:31.462206 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:16:31.462223 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 07:16:31.462240 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 07:16:31.462256 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 07:16:31.462274 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 07:16:31.462291 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 07:16:31.462307 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:16:31.462323 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:16:31.462342 kernel: fuse: init (API version 7.39) Aug 13 07:16:31.462359 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 07:16:31.462389 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 07:16:31.462407 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:16:31.462423 kernel: loop: module loaded Aug 13 07:16:31.462439 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 07:16:31.462455 systemd[1]: Stopped verity-setup.service. Aug 13 07:16:31.462472 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:16:31.462488 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 07:16:31.462532 systemd-journald[1124]: Collecting audit messages is disabled. Aug 13 07:16:31.462561 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 07:16:31.462578 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 07:16:31.462598 systemd-journald[1124]: Journal started Aug 13 07:16:31.462627 systemd-journald[1124]: Runtime Journal (/run/log/journal/e93c8c02904a47739f0e4f70a64b44e2) is 6.0M, max 48.4M, 42.3M free. Aug 13 07:16:31.233034 systemd[1]: Queued start job for default target multi-user.target. Aug 13 07:16:31.252145 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 13 07:16:31.252635 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 07:16:31.467532 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:16:31.466493 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 07:16:31.468006 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 07:16:31.469611 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 07:16:31.470956 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:16:31.472674 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 07:16:31.472853 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 07:16:31.474531 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:16:31.474805 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:16:31.476529 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:16:31.476748 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Aug 13 07:16:31.478630 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 07:16:31.478831 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 07:16:31.480854 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:16:31.481069 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:16:31.482433 kernel: ACPI: bus type drm_connector registered Aug 13 07:16:31.483620 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:16:31.485568 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 07:16:31.485796 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:16:31.501108 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 07:16:31.503006 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 07:16:31.519895 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 07:16:31.543485 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 07:16:31.546540 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 07:16:31.547847 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 07:16:31.547891 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:16:31.550251 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 13 07:16:31.552741 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 07:16:31.558519 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 07:16:31.582924 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:16:31.584547 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 07:16:31.589081 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 07:16:31.607256 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:16:31.609073 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 07:16:31.610703 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:16:31.612276 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:16:31.618558 systemd-journald[1124]: Time spent on flushing to /var/log/journal/e93c8c02904a47739f0e4f70a64b44e2 is 20.269ms for 930 entries. Aug 13 07:16:31.618558 systemd-journald[1124]: System Journal (/var/log/journal/e93c8c02904a47739f0e4f70a64b44e2) is 8.0M, max 195.6M, 187.6M free. Aug 13 07:16:31.680716 systemd-journald[1124]: Received client request to flush runtime journal. Aug 13 07:16:31.680771 kernel: loop0: detected capacity change from 0 to 140768 Aug 13 07:16:31.620204 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 07:16:31.623500 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 07:16:31.627343 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Aug 13 07:16:31.631738 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 07:16:31.633447 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 07:16:31.640777 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 07:16:31.646556 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 07:16:31.651798 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 07:16:31.664719 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 13 07:16:31.668655 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 07:16:31.674493 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 07:16:31.676609 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:16:31.687269 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 07:16:31.693832 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 07:16:31.694745 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 13 07:16:31.761014 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 13 07:16:31.913427 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 07:16:31.949457 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 07:16:31.983415 kernel: loop1: detected capacity change from 0 to 229808 Aug 13 07:16:31.991686 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 07:16:32.023692 kernel: loop2: detected capacity change from 0 to 142488 Aug 13 07:16:32.030520 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Aug 13 07:16:32.030889 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Aug 13 07:16:32.038243 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:16:32.077414 kernel: loop3: detected capacity change from 0 to 140768 Aug 13 07:16:32.093423 kernel: loop4: detected capacity change from 0 to 229808 Aug 13 07:16:32.271408 kernel: loop5: detected capacity change from 0 to 142488 Aug 13 07:16:32.283951 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Aug 13 07:16:32.284791 (sd-merge)[1193]: Merged extensions into '/usr'. Aug 13 07:16:32.290331 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 07:16:32.290349 systemd[1]: Reloading... Aug 13 07:16:32.370483 zram_generator::config[1219]: No configuration found. Aug 13 07:16:32.408828 ldconfig[1162]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 07:16:32.488431 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:16:32.538966 systemd[1]: Reloading finished in 248 ms. Aug 13 07:16:32.574408 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 07:16:32.575974 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 07:16:32.691696 systemd[1]: Starting ensure-sysext.service... 
Aug 13 07:16:32.693983 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:16:32.699914 systemd[1]: Reloading requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)... Aug 13 07:16:32.699928 systemd[1]: Reloading... Aug 13 07:16:32.723820 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 07:16:32.724205 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 07:16:32.725223 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 07:16:32.725540 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Aug 13 07:16:32.725618 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Aug 13 07:16:32.731926 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 07:16:32.731942 systemd-tmpfiles[1257]: Skipping /boot Aug 13 07:16:32.745007 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 07:16:32.745026 systemd-tmpfiles[1257]: Skipping /boot Aug 13 07:16:32.782421 zram_generator::config[1287]: No configuration found. Aug 13 07:16:32.881543 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:16:32.930613 systemd[1]: Reloading finished in 230 ms. Aug 13 07:16:32.956624 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 07:16:32.973985 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:16:32.981540 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:16:32.984317 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 07:16:32.986824 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 07:16:32.990562 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:16:32.995556 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:16:33.000530 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 07:16:33.007708 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 07:16:33.010431 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:16:33.011580 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:16:33.012828 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:16:33.021675 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:16:33.024677 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:16:33.025774 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:16:33.025874 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Aug 13 07:16:33.026800 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:16:33.026985 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:16:33.030275 systemd-udevd[1329]: Using default interface naming scheme 'v255'. Aug 13 07:16:33.031853 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 07:16:33.036848 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:16:33.037686 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:16:33.039785 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:16:33.040043 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:16:33.046687 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 07:16:33.054747 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:16:33.055150 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:16:33.061303 augenrules[1354]: No rules Aug 13 07:16:33.065111 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:16:33.072650 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 07:16:33.075505 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:16:33.080608 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:16:33.082066 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:16:33.086635 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 07:16:33.090481 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:16:33.091692 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:16:33.094070 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 07:16:33.096119 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:16:33.098097 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:16:33.098491 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:16:33.100478 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 07:16:33.100664 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:16:33.102535 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:16:33.102699 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:16:33.104875 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:16:33.105048 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:16:33.119972 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 07:16:33.129955 systemd[1]: Finished ensure-sysext.service. Aug 13 07:16:33.131305 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 07:16:33.148631 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 07:16:33.339874 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Aug 13 07:16:33.341977 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:16:33.342063 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:16:33.344311 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 07:16:33.345620 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 07:16:33.354785 systemd-resolved[1327]: Positive Trust Anchors: Aug 13 07:16:33.354981 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:16:33.355027 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:16:33.363358 systemd-resolved[1327]: Defaulting to hostname 'linux'. Aug 13 07:16:33.366961 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:16:33.369126 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:16:33.383422 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1387) Aug 13 07:16:33.416409 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 13 07:16:33.421402 kernel: ACPI: button: Power Button [PWRF] Aug 13 07:16:33.433799 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 07:16:33.445640 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 07:16:33.458456 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Aug 13 07:16:33.463529 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 07:16:33.463919 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Aug 13 07:16:33.464202 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 07:16:33.465437 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 07:16:33.482411 systemd-networkd[1399]: lo: Link UP Aug 13 07:16:33.483045 systemd-networkd[1399]: lo: Gained carrier Aug 13 07:16:33.488676 systemd-networkd[1399]: Enumeration completed Aug 13 07:16:33.488809 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:16:33.489698 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:16:33.489711 systemd-networkd[1399]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 07:16:33.490028 systemd[1]: Reached target network.target - Network. 
Aug 13 07:16:33.494436 systemd-networkd[1399]: eth0: Link UP Aug 13 07:16:33.494447 systemd-networkd[1399]: eth0: Gained carrier Aug 13 07:16:33.494460 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:16:33.499556 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 07:16:33.500897 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 07:16:33.502437 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 07:16:33.552511 systemd-networkd[1399]: eth0: DHCPv4 address 10.0.0.146/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 07:16:33.569200 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Aug 13 07:16:34.495441 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 07:16:34.449928 systemd-timesyncd[1402]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 13 07:16:34.449968 systemd-timesyncd[1402]: Initial clock synchronization to Wed 2025-08-13 07:16:34.449838 UTC. Aug 13 07:16:34.450016 systemd-resolved[1327]: Clock change detected. Flushing caches. Aug 13 07:16:34.495206 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:16:34.509140 kernel: kvm_amd: TSC scaling supported Aug 13 07:16:34.509188 kernel: kvm_amd: Nested Virtualization enabled Aug 13 07:16:34.509202 kernel: kvm_amd: Nested Paging enabled Aug 13 07:16:34.510088 kernel: kvm_amd: LBR virtualization supported Aug 13 07:16:34.510113 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Aug 13 07:16:34.511060 kernel: kvm_amd: Virtual GIF supported Aug 13 07:16:34.532876 kernel: EDAC MC: Ver: 3.0.0 Aug 13 07:16:34.569399 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 07:16:34.596309 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:16:34.608890 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 07:16:34.620853 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 07:16:34.700606 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 07:16:34.702129 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:16:34.703257 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:16:34.704434 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 07:16:34.705706 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 07:16:34.707154 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 07:16:34.708362 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 07:16:34.709605 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 07:16:34.710846 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 07:16:34.710872 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:16:34.711789 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:16:34.713561 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Aug 13 07:16:34.716333 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 07:16:34.725301 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 07:16:34.727599 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 07:16:34.729121 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 07:16:34.730293 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:16:34.731255 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:16:34.732221 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:16:34.732248 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:16:34.733194 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 07:16:34.735280 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 07:16:34.737581 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 07:16:34.739833 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 07:16:34.743640 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 07:16:34.745539 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 07:16:34.748417 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 07:16:34.784792 jq[1428]: false Aug 13 07:16:34.785997 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 07:16:34.788937 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 07:16:34.795113 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 07:16:34.796981 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 07:16:34.799044 extend-filesystems[1429]: Found loop3 Aug 13 07:16:34.799044 extend-filesystems[1429]: Found loop4 Aug 13 07:16:34.799044 extend-filesystems[1429]: Found loop5 Aug 13 07:16:34.799044 extend-filesystems[1429]: Found sr0 Aug 13 07:16:34.799044 extend-filesystems[1429]: Found vda Aug 13 07:16:34.799044 extend-filesystems[1429]: Found vda1 Aug 13 07:16:34.799044 extend-filesystems[1429]: Found vda2 Aug 13 07:16:34.799044 extend-filesystems[1429]: Found vda3 Aug 13 07:16:34.799044 extend-filesystems[1429]: Found usr Aug 13 07:16:34.799044 extend-filesystems[1429]: Found vda4 Aug 13 07:16:34.799044 extend-filesystems[1429]: Found vda6 Aug 13 07:16:34.799044 extend-filesystems[1429]: Found vda7 Aug 13 07:16:34.799044 extend-filesystems[1429]: Found vda9 Aug 13 07:16:34.799044 extend-filesystems[1429]: Checking size of /dev/vda9 Aug 13 07:16:34.797712 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 07:16:34.803500 dbus-daemon[1427]: [system] SELinux support is enabled Aug 13 07:16:34.805942 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 07:16:34.811883 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 07:16:34.815531 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Aug 13 07:16:34.826483 update_engine[1441]: I20250813 07:16:34.823793 1441 main.cc:92] Flatcar Update Engine starting Aug 13 07:16:34.826483 update_engine[1441]: I20250813 07:16:34.825229 1441 update_check_scheduler.cc:74] Next update check in 10m53s Aug 13 07:16:34.820176 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 07:16:34.825144 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 07:16:34.825372 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 07:16:34.825732 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 07:16:34.826026 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 07:16:34.827620 jq[1444]: true Aug 13 07:16:34.827634 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 07:16:34.827882 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 07:16:34.839540 jq[1448]: true Aug 13 07:16:34.840076 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 07:16:34.840127 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 07:16:34.841535 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 07:16:34.841557 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 07:16:34.848618 systemd[1]: Started update-engine.service - Update Engine. Aug 13 07:16:34.850902 (ntainerd)[1455]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 07:16:34.852161 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 07:16:34.859397 systemd-logind[1436]: Watching system buttons on /dev/input/event1 (Power Button) Aug 13 07:16:34.859422 systemd-logind[1436]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 07:16:34.860057 systemd-logind[1436]: New seat seat0. Aug 13 07:16:34.863235 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 07:16:34.870101 extend-filesystems[1429]: Resized partition /dev/vda9 Aug 13 07:16:34.875413 extend-filesystems[1467]: resize2fs 1.47.1 (20-May-2024) Aug 13 07:16:34.895798 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 13 07:16:34.907046 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1385) Aug 13 07:16:34.963690 locksmithd[1459]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 07:16:34.986784 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 13 07:16:35.099495 extend-filesystems[1467]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 07:16:35.099495 extend-filesystems[1467]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 07:16:35.099495 extend-filesystems[1467]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 13 07:16:35.105772 extend-filesystems[1429]: Resized filesystem in /dev/vda9 Aug 13 07:16:35.101563 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Aug 13 07:16:35.103505 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 07:16:35.108297 bash[1477]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:16:35.110571 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 07:16:35.112899 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 13 07:16:35.125197 sshd_keygen[1446]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 07:16:35.154856 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 07:16:35.163181 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 07:16:35.172000 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 07:16:35.172236 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 07:16:35.175056 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 07:16:35.199936 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 07:16:35.212070 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 07:16:35.214583 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 07:16:35.216066 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 07:16:35.296773 containerd[1455]: time="2025-08-13T07:16:35.296645979Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 07:16:35.323849 containerd[1455]: time="2025-08-13T07:16:35.323779825Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:16:35.325855 containerd[1455]: time="2025-08-13T07:16:35.325802087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:16:35.325897 containerd[1455]: time="2025-08-13T07:16:35.325858102Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 07:16:35.325897 containerd[1455]: time="2025-08-13T07:16:35.325885082Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 07:16:35.326192 containerd[1455]: time="2025-08-13T07:16:35.326159427Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 07:16:35.326216 containerd[1455]: time="2025-08-13T07:16:35.326192689Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 07:16:35.326327 containerd[1455]: time="2025-08-13T07:16:35.326294921Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:16:35.326327 containerd[1455]: time="2025-08-13T07:16:35.326320118Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:16:35.326653 containerd[1455]: time="2025-08-13T07:16:35.326617906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:16:35.326653 containerd[1455]: time="2025-08-13T07:16:35.326645668Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 07:16:35.326692 containerd[1455]: time="2025-08-13T07:16:35.326664103Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:16:35.326692 containerd[1455]: time="2025-08-13T07:16:35.326678019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 07:16:35.327035 containerd[1455]: time="2025-08-13T07:16:35.327003369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:16:35.327366 containerd[1455]: time="2025-08-13T07:16:35.327333418Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:16:35.327534 containerd[1455]: time="2025-08-13T07:16:35.327500982Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:16:35.327534 containerd[1455]: time="2025-08-13T07:16:35.327527021Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 07:16:35.327732 containerd[1455]: time="2025-08-13T07:16:35.327703401Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 07:16:35.327823 containerd[1455]: time="2025-08-13T07:16:35.327809971Z" level=info msg="metadata content store policy set" policy=shared Aug 13 07:16:35.335890 containerd[1455]: time="2025-08-13T07:16:35.333597944Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 07:16:35.336033 containerd[1455]: time="2025-08-13T07:16:35.335937340Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 07:16:35.336033 containerd[1455]: time="2025-08-13T07:16:35.335973518Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 07:16:35.336082 containerd[1455]: time="2025-08-13T07:16:35.336034112Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 07:16:35.336082 containerd[1455]: time="2025-08-13T07:16:35.336055271Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 07:16:35.336284 containerd[1455]: time="2025-08-13T07:16:35.336255336Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 07:16:35.336618 containerd[1455]: time="2025-08-13T07:16:35.336585636Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 07:16:35.336760 containerd[1455]: time="2025-08-13T07:16:35.336717824Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Aug 13 07:16:35.336803 containerd[1455]: time="2025-08-13T07:16:35.336743261Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 07:16:35.336803 containerd[1455]: time="2025-08-13T07:16:35.336779860Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 07:16:35.336846 containerd[1455]: time="2025-08-13T07:16:35.336811078Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 07:16:35.336866 containerd[1455]: time="2025-08-13T07:16:35.336855832Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 07:16:35.336885 containerd[1455]: time="2025-08-13T07:16:35.336873004Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 07:16:35.336916 containerd[1455]: time="2025-08-13T07:16:35.336890477Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 07:16:35.336916 containerd[1455]: time="2025-08-13T07:16:35.336908431Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 07:16:35.336952 containerd[1455]: time="2025-08-13T07:16:35.336925803Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 07:16:35.336952 containerd[1455]: time="2025-08-13T07:16:35.336942775Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 07:16:35.336993 containerd[1455]: time="2025-08-13T07:16:35.336958204Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 07:16:35.337013 containerd[1455]: time="2025-08-13T07:16:35.336988331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 07:16:35.337033 containerd[1455]: time="2025-08-13T07:16:35.337014730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 07:16:35.337063 containerd[1455]: time="2025-08-13T07:16:35.337029448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 07:16:35.337063 containerd[1455]: time="2025-08-13T07:16:35.337046099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 07:16:35.337063 containerd[1455]: time="2025-08-13T07:16:35.337061237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 07:16:35.337141 containerd[1455]: time="2025-08-13T07:16:35.337079942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 07:16:35.337141 containerd[1455]: time="2025-08-13T07:16:35.337095942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 07:16:35.337141 containerd[1455]: time="2025-08-13T07:16:35.337111432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 07:16:35.337141 containerd[1455]: time="2025-08-13T07:16:35.337124496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Aug 13 07:16:35.337234 containerd[1455]: time="2025-08-13T07:16:35.337143772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 07:16:35.337234 containerd[1455]: time="2025-08-13T07:16:35.337160624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 07:16:35.337234 containerd[1455]: time="2025-08-13T07:16:35.337176413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 07:16:35.337234 containerd[1455]: time="2025-08-13T07:16:35.337191452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 07:16:35.337234 containerd[1455]: time="2025-08-13T07:16:35.337215066Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 07:16:35.337401 containerd[1455]: time="2025-08-13T07:16:35.337377120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 07:16:35.337476 containerd[1455]: time="2025-08-13T07:16:35.337413027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 07:16:35.337476 containerd[1455]: time="2025-08-13T07:16:35.337428867Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 07:16:35.337536 containerd[1455]: time="2025-08-13T07:16:35.337517813Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 07:16:35.337558 containerd[1455]: time="2025-08-13T07:16:35.337544153Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 07:16:35.337579 containerd[1455]: time="2025-08-13T07:16:35.337557648Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 07:16:35.337579 containerd[1455]: time="2025-08-13T07:16:35.337571164Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 07:16:35.337625 containerd[1455]: time="2025-08-13T07:16:35.337581393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 07:16:35.337625 containerd[1455]: time="2025-08-13T07:16:35.337599226Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 07:16:35.337625 containerd[1455]: time="2025-08-13T07:16:35.337616478Z" level=info msg="NRI interface is disabled by configuration." Aug 13 07:16:35.337678 containerd[1455]: time="2025-08-13T07:16:35.337628090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 07:16:35.338024 containerd[1455]: time="2025-08-13T07:16:35.337961826Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 07:16:35.338024 containerd[1455]: time="2025-08-13T07:16:35.338019574Z" level=info msg="Connect containerd service" Aug 13 07:16:35.338280 containerd[1455]: time="2025-08-13T07:16:35.338058217Z" level=info msg="using legacy CRI server" Aug 13 07:16:35.338280 containerd[1455]: time="2025-08-13T07:16:35.338065871Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 07:16:35.338280 containerd[1455]: time="2025-08-13T07:16:35.338156952Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 07:16:35.338899 containerd[1455]: time="2025-08-13T07:16:35.338868105Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:16:35.339079 
containerd[1455]: time="2025-08-13T07:16:35.339028787Z" level=info msg="Start subscribing containerd event" Aug 13 07:16:35.339132 containerd[1455]: time="2025-08-13T07:16:35.339113896Z" level=info msg="Start recovering state" Aug 13 07:16:35.339265 containerd[1455]: time="2025-08-13T07:16:35.339247647Z" level=info msg="Start event monitor" Aug 13 07:16:35.339291 containerd[1455]: time="2025-08-13T07:16:35.339246615Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 07:16:35.339291 containerd[1455]: time="2025-08-13T07:16:35.339278585Z" level=info msg="Start snapshots syncer" Aug 13 07:16:35.339326 containerd[1455]: time="2025-08-13T07:16:35.339295126Z" level=info msg="Start cni network conf syncer for default" Aug 13 07:16:35.339346 containerd[1455]: time="2025-08-13T07:16:35.339322597Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 07:16:35.339386 containerd[1455]: time="2025-08-13T07:16:35.339325212Z" level=info msg="Start streaming server" Aug 13 07:16:35.339513 containerd[1455]: time="2025-08-13T07:16:35.339497295Z" level=info msg="containerd successfully booted in 0.044315s" Aug 13 07:16:35.339624 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 07:16:36.160053 systemd-networkd[1399]: eth0: Gained IPv6LL Aug 13 07:16:36.164081 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 07:16:36.165986 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 07:16:36.177138 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 13 07:16:36.179333 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:16:36.181532 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 07:16:36.201552 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 13 07:16:36.201809 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 13 07:16:36.203737 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 07:16:36.204803 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 07:16:37.575667 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:16:37.577401 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 07:16:37.580625 systemd[1]: Startup finished in 847ms (kernel) + 5.923s (initrd) + 6.035s (userspace) = 12.805s. Aug 13 07:16:37.591762 (kubelet)[1532]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:16:38.142418 kubelet[1532]: E0813 07:16:38.142330 1532 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:16:38.147234 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:16:38.147503 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:16:38.147884 systemd[1]: kubelet.service: Consumed 1.830s CPU time. Aug 13 07:16:39.587659 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
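The kubelet failure above is expected at this stage: /var/lib/kubelet/config.yaml does not ship with the OS image and is normally written during cluster bootstrap, typically by kubeadm init or kubeadm join; naming kubeadm here is an assumption about how this node is bootstrapped later, not something the log states. A minimal check:

    # confirm the unit is failing only because the bootstrap-time config is absent
    test -f /var/lib/kubelet/config.yaml || echo "kubelet not bootstrapped yet"
    # kubeadm init (control plane) or kubeadm join (worker) would normally create it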
Aug 13 07:16:39.602056 systemd[1]: Started sshd@0-10.0.0.146:22-10.0.0.1:56836.service - OpenSSH per-connection server daemon (10.0.0.1:56836). Aug 13 07:16:39.640611 sshd[1545]: Accepted publickey for core from 10.0.0.1 port 56836 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:16:39.642502 sshd[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:39.651519 systemd-logind[1436]: New session 1 of user core. Aug 13 07:16:39.652793 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 07:16:39.665063 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 07:16:39.679950 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 07:16:39.700243 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 07:16:39.703014 (systemd)[1549]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 07:16:39.811231 systemd[1549]: Queued start job for default target default.target. Aug 13 07:16:39.823007 systemd[1549]: Created slice app.slice - User Application Slice. Aug 13 07:16:39.823032 systemd[1549]: Reached target paths.target - Paths. Aug 13 07:16:39.823045 systemd[1549]: Reached target timers.target - Timers. Aug 13 07:16:39.824613 systemd[1549]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 07:16:39.837175 systemd[1549]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 07:16:39.837360 systemd[1549]: Reached target sockets.target - Sockets. Aug 13 07:16:39.837381 systemd[1549]: Reached target basic.target - Basic System. Aug 13 07:16:39.837434 systemd[1549]: Reached target default.target - Main User Target. Aug 13 07:16:39.837471 systemd[1549]: Startup finished in 128ms. Aug 13 07:16:39.837699 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 07:16:39.839194 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 07:16:39.901376 systemd[1]: Started sshd@1-10.0.0.146:22-10.0.0.1:56842.service - OpenSSH per-connection server daemon (10.0.0.1:56842). Aug 13 07:16:39.944031 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 56842 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:16:39.946069 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:39.950585 systemd-logind[1436]: New session 2 of user core. Aug 13 07:16:39.958931 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 07:16:40.014834 sshd[1560]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:40.026272 systemd[1]: sshd@1-10.0.0.146:22-10.0.0.1:56842.service: Deactivated successfully. Aug 13 07:16:40.027770 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 07:16:40.029463 systemd-logind[1436]: Session 2 logged out. Waiting for processes to exit. Aug 13 07:16:40.030768 systemd[1]: Started sshd@2-10.0.0.146:22-10.0.0.1:34294.service - OpenSSH per-connection server daemon (10.0.0.1:34294). Aug 13 07:16:40.032152 systemd-logind[1436]: Removed session 2. Aug 13 07:16:40.074018 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 34294 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:16:40.076061 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:40.080479 systemd-logind[1436]: New session 3 of user core. 
Aug 13 07:16:40.090889 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 07:16:40.142865 sshd[1567]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:40.160584 systemd[1]: sshd@2-10.0.0.146:22-10.0.0.1:34294.service: Deactivated successfully. Aug 13 07:16:40.162284 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 07:16:40.163885 systemd-logind[1436]: Session 3 logged out. Waiting for processes to exit. Aug 13 07:16:40.165134 systemd[1]: Started sshd@3-10.0.0.146:22-10.0.0.1:34310.service - OpenSSH per-connection server daemon (10.0.0.1:34310). Aug 13 07:16:40.165898 systemd-logind[1436]: Removed session 3. Aug 13 07:16:40.203127 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 34310 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:16:40.204706 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:40.208677 systemd-logind[1436]: New session 4 of user core. Aug 13 07:16:40.218886 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 07:16:40.273024 sshd[1574]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:40.283709 systemd[1]: sshd@3-10.0.0.146:22-10.0.0.1:34310.service: Deactivated successfully. Aug 13 07:16:40.285472 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 07:16:40.287100 systemd-logind[1436]: Session 4 logged out. Waiting for processes to exit. Aug 13 07:16:40.301257 systemd[1]: Started sshd@4-10.0.0.146:22-10.0.0.1:34326.service - OpenSSH per-connection server daemon (10.0.0.1:34326). Aug 13 07:16:40.302696 systemd-logind[1436]: Removed session 4. Aug 13 07:16:40.336994 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 34326 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:16:40.338705 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:40.342919 systemd-logind[1436]: New session 5 of user core. Aug 13 07:16:40.352962 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 07:16:40.412854 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 07:16:40.413308 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:16:40.430856 sudo[1584]: pam_unix(sudo:session): session closed for user root Aug 13 07:16:40.433085 sshd[1581]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:40.451027 systemd[1]: sshd@4-10.0.0.146:22-10.0.0.1:34326.service: Deactivated successfully. Aug 13 07:16:40.453216 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 07:16:40.454942 systemd-logind[1436]: Session 5 logged out. Waiting for processes to exit. Aug 13 07:16:40.467047 systemd[1]: Started sshd@5-10.0.0.146:22-10.0.0.1:34338.service - OpenSSH per-connection server daemon (10.0.0.1:34338). Aug 13 07:16:40.468122 systemd-logind[1436]: Removed session 5. Aug 13 07:16:40.502322 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 34338 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:16:40.504092 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:40.510167 systemd-logind[1436]: New session 6 of user core. Aug 13 07:16:40.518913 systemd[1]: Started session-6.scope - Session 6 of User core. 
Aug 13 07:16:40.579925 sudo[1593]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 07:16:40.580649 sudo[1593]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:16:40.587139 sudo[1593]: pam_unix(sudo:session): session closed for user root Aug 13 07:16:40.598330 sudo[1592]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 07:16:40.598916 sudo[1592]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:16:40.619980 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 07:16:40.621770 auditctl[1596]: No rules Aug 13 07:16:40.623089 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 07:16:40.623382 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 07:16:40.625154 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:16:40.656975 augenrules[1614]: No rules Aug 13 07:16:40.658767 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:16:40.660132 sudo[1592]: pam_unix(sudo:session): session closed for user root Aug 13 07:16:40.662174 sshd[1589]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:40.677616 systemd[1]: sshd@5-10.0.0.146:22-10.0.0.1:34338.service: Deactivated successfully. Aug 13 07:16:40.679285 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 07:16:40.680554 systemd-logind[1436]: Session 6 logged out. Waiting for processes to exit. Aug 13 07:16:40.690965 systemd[1]: Started sshd@6-10.0.0.146:22-10.0.0.1:34342.service - OpenSSH per-connection server daemon (10.0.0.1:34342). Aug 13 07:16:40.691769 systemd-logind[1436]: Removed session 6. Aug 13 07:16:40.725675 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 34342 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:16:40.727178 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:40.730883 systemd-logind[1436]: New session 7 of user core. Aug 13 07:16:40.746868 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 07:16:40.799629 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 07:16:40.799985 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:16:40.819072 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 13 07:16:40.838435 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 13 07:16:40.838665 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 13 07:16:41.397047 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:16:41.397278 systemd[1]: kubelet.service: Consumed 1.830s CPU time. Aug 13 07:16:41.409991 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:16:41.433644 systemd[1]: Reloading requested from client PID 1669 ('systemctl') (unit session-7.scope)... Aug 13 07:16:41.433659 systemd[1]: Reloading... Aug 13 07:16:41.515792 zram_generator::config[1710]: No configuration found. Aug 13 07:16:41.769869 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:16:41.848932 systemd[1]: Reloading finished in 414 ms. 
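The warning emitted during the reload above, about docker.socket listening on a path below the legacy /var/run directory, is advisory. One conventional way to address it, sketched here as an assumption rather than anything install.sh is shown doing, is a drop-in override that resets ListenStream to the /run path:

    # write a drop-in for docker.socket; the empty ListenStream= clears the
    # inherited value before the /run path is added back
    mkdir -p /etc/systemd/system/docker.socket.d
    cat >/etc/systemd/system/docker.socket.d/override.conf <<'EOF'
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload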
Aug 13 07:16:41.903402 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 07:16:41.903513 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 07:16:41.903800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:16:41.907482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:16:42.094741 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:16:42.101129 (kubelet)[1756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:16:42.238185 kubelet[1756]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:16:42.238185 kubelet[1756]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 07:16:42.238185 kubelet[1756]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:16:42.238639 kubelet[1756]: I0813 07:16:42.238261 1756 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:16:42.725498 kubelet[1756]: I0813 07:16:42.725449 1756 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 07:16:42.725498 kubelet[1756]: I0813 07:16:42.725483 1756 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:16:42.725731 kubelet[1756]: I0813 07:16:42.725716 1756 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 07:16:42.754226 kubelet[1756]: I0813 07:16:42.754068 1756 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:16:42.762947 kubelet[1756]: E0813 07:16:42.762906 1756 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:16:42.762947 kubelet[1756]: I0813 07:16:42.762940 1756 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:16:42.768457 kubelet[1756]: I0813 07:16:42.768414 1756 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:16:42.768825 kubelet[1756]: I0813 07:16:42.768781 1756 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:16:42.769013 kubelet[1756]: I0813 07:16:42.768812 1756 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.146","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:16:42.769126 kubelet[1756]: I0813 07:16:42.769020 1756 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:16:42.769126 kubelet[1756]: I0813 07:16:42.769033 1756 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 07:16:42.769246 kubelet[1756]: I0813 07:16:42.769220 1756 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:16:42.770963 kubelet[1756]: I0813 07:16:42.770934 1756 kubelet.go:480] "Attempting to sync node with API server" Aug 13 07:16:42.770963 kubelet[1756]: I0813 07:16:42.770957 1756 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:16:42.771083 kubelet[1756]: I0813 07:16:42.770994 1756 kubelet.go:386] "Adding apiserver pod source" Aug 13 07:16:42.772864 kubelet[1756]: I0813 07:16:42.772779 1756 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:16:42.772864 kubelet[1756]: E0813 07:16:42.772819 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:16:42.772864 kubelet[1756]: E0813 07:16:42.772831 1756 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:16:42.778939 kubelet[1756]: I0813 07:16:42.778903 1756 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:16:42.779811 kubelet[1756]: I0813 07:16:42.779570 1756 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 07:16:42.780278 kubelet[1756]: W0813 
07:16:42.780245 1756 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 07:16:42.783278 kubelet[1756]: E0813 07:16:42.783222 1756 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"10.0.0.146\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 07:16:42.783392 kubelet[1756]: E0813 07:16:42.783303 1756 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 07:16:42.783975 kubelet[1756]: I0813 07:16:42.783948 1756 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 07:16:42.784031 kubelet[1756]: I0813 07:16:42.784024 1756 server.go:1289] "Started kubelet" Aug 13 07:16:42.784594 kubelet[1756]: I0813 07:16:42.784145 1756 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:16:42.786621 kubelet[1756]: I0813 07:16:42.786400 1756 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:16:42.786621 kubelet[1756]: I0813 07:16:42.786400 1756 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:16:42.786621 kubelet[1756]: I0813 07:16:42.786407 1756 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:16:42.787481 kubelet[1756]: I0813 07:16:42.787461 1756 server.go:317] "Adding debug handlers to kubelet server" Aug 13 07:16:42.790072 kubelet[1756]: I0813 07:16:42.790049 1756 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:16:42.792219 kubelet[1756]: I0813 07:16:42.791643 1756 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 07:16:42.792295 kubelet[1756]: I0813 07:16:42.792262 1756 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 07:16:42.792337 kubelet[1756]: I0813 07:16:42.792323 1756 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:16:42.793895 kubelet[1756]: E0813 07:16:42.792566 1756 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.146\" not found" Aug 13 07:16:42.793895 kubelet[1756]: E0813 07:16:42.793612 1756 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 07:16:42.794385 kubelet[1756]: I0813 07:16:42.794354 1756 factory.go:223] Registration of the systemd container factory successfully Aug 13 07:16:42.794503 kubelet[1756]: I0813 07:16:42.794472 1756 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:16:42.795333 kubelet[1756]: E0813 07:16:42.795256 1756 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:16:42.798605 kubelet[1756]: I0813 07:16:42.798578 1756 factory.go:223] Registration of the containerd container factory successfully Aug 13 07:16:42.808307 kubelet[1756]: I0813 07:16:42.808282 1756 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 07:16:42.808307 kubelet[1756]: I0813 07:16:42.808294 1756 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 07:16:42.808307 kubelet[1756]: I0813 07:16:42.808313 1756 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:16:42.818886 kubelet[1756]: E0813 07:16:42.818848 1756 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.146\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Aug 13 07:16:42.892955 kubelet[1756]: E0813 07:16:42.892907 1756 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.146\" not found" Aug 13 07:16:42.993307 kubelet[1756]: E0813 07:16:42.993148 1756 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.146\" not found" Aug 13 07:16:43.024131 kubelet[1756]: E0813 07:16:43.024096 1756 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.146\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Aug 13 07:16:43.093552 kubelet[1756]: E0813 07:16:43.093474 1756 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.146\" not found" Aug 13 07:16:43.194227 kubelet[1756]: E0813 07:16:43.194141 1756 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.146\" not found" Aug 13 07:16:43.295134 kubelet[1756]: E0813 07:16:43.295029 1756 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.146\" not found" Aug 13 07:16:43.395622 kubelet[1756]: E0813 07:16:43.395530 1756 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.146\" not found" Aug 13 07:16:43.429907 kubelet[1756]: E0813 07:16:43.429843 1756 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.146\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Aug 13 07:16:43.496093 kubelet[1756]: E0813 07:16:43.496012 1756 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.146\" not found" Aug 13 07:16:43.596701 kubelet[1756]: E0813 07:16:43.596541 1756 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.146\" not found" Aug 13 07:16:43.697060 kubelet[1756]: E0813 07:16:43.696987 1756 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.146\" not found" Aug 13 07:16:43.730470 kubelet[1756]: E0813 07:16:42.812997 1756 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.146.185b4252b518934b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.146,UID:10.0.0.146,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.146,},FirstTimestamp:2025-08-13 07:16:42.783978315 +0000 UTC m=+0.594669095,LastTimestamp:2025-08-13 07:16:42.783978315 +0000 UTC m=+0.594669095,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.146,}" Aug 13 07:16:43.730821 kubelet[1756]: I0813 07:16:43.730802 1756 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Aug 13 07:16:43.760918 kubelet[1756]: I0813 07:16:43.760868 1756 policy_none.go:49] "None policy: Start" Aug 13 07:16:43.760918 kubelet[1756]: I0813 07:16:43.760909 1756 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 07:16:43.760918 kubelet[1756]: I0813 07:16:43.760930 1756 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:16:43.773898 kubelet[1756]: E0813 07:16:43.773876 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:16:43.776215 kubelet[1756]: I0813 07:16:43.776158 1756 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 07:16:43.777714 kubelet[1756]: I0813 07:16:43.777685 1756 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 07:16:43.777766 kubelet[1756]: I0813 07:16:43.777720 1756 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 07:16:43.777800 kubelet[1756]: I0813 07:16:43.777780 1756 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 07:16:43.777800 kubelet[1756]: I0813 07:16:43.777792 1756 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 07:16:43.778431 kubelet[1756]: E0813 07:16:43.777918 1756 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:16:43.797904 kubelet[1756]: E0813 07:16:43.797870 1756 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.146\" not found" Aug 13 07:16:43.819844 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 07:16:43.836386 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 07:16:43.839634 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 07:16:43.849677 kubelet[1756]: E0813 07:16:43.849587 1756 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 07:16:43.850219 kubelet[1756]: I0813 07:16:43.850190 1756 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:16:43.850281 kubelet[1756]: I0813 07:16:43.850229 1756 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:16:43.850643 kubelet[1756]: I0813 07:16:43.850475 1756 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:16:43.851784 kubelet[1756]: E0813 07:16:43.851758 1756 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 07:16:43.851859 kubelet[1756]: E0813 07:16:43.851815 1756 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.146\" not found" Aug 13 07:16:43.951176 kubelet[1756]: I0813 07:16:43.951109 1756 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.146" Aug 13 07:16:43.970934 kubelet[1756]: I0813 07:16:43.970879 1756 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.146" Aug 13 07:16:43.970934 kubelet[1756]: E0813 07:16:43.970917 1756 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.146\": node \"10.0.0.146\" not found" Aug 13 07:16:43.991167 kubelet[1756]: I0813 07:16:43.991145 1756 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Aug 13 07:16:43.991553 containerd[1455]: time="2025-08-13T07:16:43.991473087Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 07:16:43.991937 kubelet[1756]: I0813 07:16:43.991728 1756 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Aug 13 07:16:44.126432 sudo[1626]: pam_unix(sudo:session): session closed for user root Aug 13 07:16:44.128203 sshd[1622]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:44.132045 systemd[1]: sshd@6-10.0.0.146:22-10.0.0.1:34342.service: Deactivated successfully. Aug 13 07:16:44.134055 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 07:16:44.134714 systemd-logind[1436]: Session 7 logged out. Waiting for processes to exit. Aug 13 07:16:44.135586 systemd-logind[1436]: Removed session 7. Aug 13 07:16:44.774259 kubelet[1756]: E0813 07:16:44.774193 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:16:44.776359 kubelet[1756]: I0813 07:16:44.776280 1756 apiserver.go:52] "Watching apiserver" Aug 13 07:16:44.786602 kubelet[1756]: E0813 07:16:44.786323 1756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f2lmn" podUID="6f3ac557-6d90-4338-ba26-8876dbe35bc7" Aug 13 07:16:44.793471 kubelet[1756]: I0813 07:16:44.793433 1756 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 07:16:44.795449 systemd[1]: Created slice kubepods-besteffort-podb360ad18_2ee4_441e_a2e1_7305ffeda962.slice - libcontainer container kubepods-besteffort-podb360ad18_2ee4_441e_a2e1_7305ffeda962.slice. 
Aug 13 07:16:44.802727 kubelet[1756]: I0813 07:16:44.802690 1756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b360ad18-2ee4-441e-a2e1-7305ffeda962-var-run-calico\") pod \"calico-node-fcpl8\" (UID: \"b360ad18-2ee4-441e-a2e1-7305ffeda962\") " pod="calico-system/calico-node-fcpl8" Aug 13 07:16:44.802812 kubelet[1756]: I0813 07:16:44.802727 1756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3734a05a-a0e7-4557-a1c5-2a9e77ffe351-xtables-lock\") pod \"kube-proxy-q48ts\" (UID: \"3734a05a-a0e7-4557-a1c5-2a9e77ffe351\") " pod="kube-system/kube-proxy-q48ts" Aug 13 07:16:44.802812 kubelet[1756]: I0813 07:16:44.802769 1756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3734a05a-a0e7-4557-a1c5-2a9e77ffe351-lib-modules\") pod \"kube-proxy-q48ts\" (UID: \"3734a05a-a0e7-4557-a1c5-2a9e77ffe351\") " pod="kube-system/kube-proxy-q48ts" Aug 13 07:16:44.802812 kubelet[1756]: I0813 07:16:44.802790 1756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6f3ac557-6d90-4338-ba26-8876dbe35bc7-socket-dir\") pod \"csi-node-driver-f2lmn\" (UID: \"6f3ac557-6d90-4338-ba26-8876dbe35bc7\") " pod="calico-system/csi-node-driver-f2lmn" Aug 13 07:16:44.802812 kubelet[1756]: I0813 07:16:44.802809 1756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpm2p\" (UniqueName: \"kubernetes.io/projected/b360ad18-2ee4-441e-a2e1-7305ffeda962-kube-api-access-wpm2p\") pod \"calico-node-fcpl8\" (UID: \"b360ad18-2ee4-441e-a2e1-7305ffeda962\") " pod="calico-system/calico-node-fcpl8" Aug 13 07:16:44.802942 kubelet[1756]: I0813 07:16:44.802829 1756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b360ad18-2ee4-441e-a2e1-7305ffeda962-cni-bin-dir\") pod \"calico-node-fcpl8\" (UID: \"b360ad18-2ee4-441e-a2e1-7305ffeda962\") " pod="calico-system/calico-node-fcpl8" Aug 13 07:16:44.802942 kubelet[1756]: I0813 07:16:44.802848 1756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b360ad18-2ee4-441e-a2e1-7305ffeda962-cni-net-dir\") pod \"calico-node-fcpl8\" (UID: \"b360ad18-2ee4-441e-a2e1-7305ffeda962\") " pod="calico-system/calico-node-fcpl8" Aug 13 07:16:44.802942 kubelet[1756]: I0813 07:16:44.802868 1756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b360ad18-2ee4-441e-a2e1-7305ffeda962-node-certs\") pod \"calico-node-fcpl8\" (UID: \"b360ad18-2ee4-441e-a2e1-7305ffeda962\") " pod="calico-system/calico-node-fcpl8" Aug 13 07:16:44.802942 kubelet[1756]: I0813 07:16:44.802908 1756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b360ad18-2ee4-441e-a2e1-7305ffeda962-policysync\") pod \"calico-node-fcpl8\" (UID: \"b360ad18-2ee4-441e-a2e1-7305ffeda962\") " pod="calico-system/calico-node-fcpl8" Aug 13 07:16:44.803066 kubelet[1756]: I0813 07:16:44.802987 1756 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b360ad18-2ee4-441e-a2e1-7305ffeda962-var-lib-calico\") pod \"calico-node-fcpl8\" (UID: \"b360ad18-2ee4-441e-a2e1-7305ffeda962\") " pod="calico-system/calico-node-fcpl8" Aug 13 07:16:44.803066 kubelet[1756]: I0813 07:16:44.803043 1756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f3ac557-6d90-4338-ba26-8876dbe35bc7-kubelet-dir\") pod \"csi-node-driver-f2lmn\" (UID: \"6f3ac557-6d90-4338-ba26-8876dbe35bc7\") " pod="calico-system/csi-node-driver-f2lmn" Aug 13 07:16:44.803066 kubelet[1756]: I0813 07:16:44.803061 1756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3734a05a-a0e7-4557-a1c5-2a9e77ffe351-kube-proxy\") pod \"kube-proxy-q48ts\" (UID: \"3734a05a-a0e7-4557-a1c5-2a9e77ffe351\") " pod="kube-system/kube-proxy-q48ts" Aug 13 07:16:44.803153 kubelet[1756]: I0813 07:16:44.803078 1756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk7d7\" (UniqueName: \"kubernetes.io/projected/3734a05a-a0e7-4557-a1c5-2a9e77ffe351-kube-api-access-xk7d7\") pod \"kube-proxy-q48ts\" (UID: \"3734a05a-a0e7-4557-a1c5-2a9e77ffe351\") " pod="kube-system/kube-proxy-q48ts" Aug 13 07:16:44.803153 kubelet[1756]: I0813 07:16:44.803105 1756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6f3ac557-6d90-4338-ba26-8876dbe35bc7-registration-dir\") pod \"csi-node-driver-f2lmn\" (UID: \"6f3ac557-6d90-4338-ba26-8876dbe35bc7\") " pod="calico-system/csi-node-driver-f2lmn" Aug 13 07:16:44.803153 kubelet[1756]: I0813 07:16:44.803123 1756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6f3ac557-6d90-4338-ba26-8876dbe35bc7-varrun\") pod \"csi-node-driver-f2lmn\" (UID: \"6f3ac557-6d90-4338-ba26-8876dbe35bc7\") " pod="calico-system/csi-node-driver-f2lmn" Aug 13 07:16:44.803153 kubelet[1756]: I0813 07:16:44.803136 1756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gskl9\" (UniqueName: \"kubernetes.io/projected/6f3ac557-6d90-4338-ba26-8876dbe35bc7-kube-api-access-gskl9\") pod \"csi-node-driver-f2lmn\" (UID: \"6f3ac557-6d90-4338-ba26-8876dbe35bc7\") " pod="calico-system/csi-node-driver-f2lmn" Aug 13 07:16:44.803153 kubelet[1756]: I0813 07:16:44.803149 1756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b360ad18-2ee4-441e-a2e1-7305ffeda962-xtables-lock\") pod \"calico-node-fcpl8\" (UID: \"b360ad18-2ee4-441e-a2e1-7305ffeda962\") " pod="calico-system/calico-node-fcpl8" Aug 13 07:16:44.803313 kubelet[1756]: I0813 07:16:44.803162 1756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b360ad18-2ee4-441e-a2e1-7305ffeda962-flexvol-driver-host\") pod \"calico-node-fcpl8\" (UID: \"b360ad18-2ee4-441e-a2e1-7305ffeda962\") " pod="calico-system/calico-node-fcpl8" Aug 13 07:16:44.803313 kubelet[1756]: I0813 07:16:44.803183 1756 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b360ad18-2ee4-441e-a2e1-7305ffeda962-lib-modules\") pod \"calico-node-fcpl8\" (UID: \"b360ad18-2ee4-441e-a2e1-7305ffeda962\") " pod="calico-system/calico-node-fcpl8" Aug 13 07:16:44.803313 kubelet[1756]: I0813 07:16:44.803201 1756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b360ad18-2ee4-441e-a2e1-7305ffeda962-tigera-ca-bundle\") pod \"calico-node-fcpl8\" (UID: \"b360ad18-2ee4-441e-a2e1-7305ffeda962\") " pod="calico-system/calico-node-fcpl8" Aug 13 07:16:44.803313 kubelet[1756]: I0813 07:16:44.803217 1756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b360ad18-2ee4-441e-a2e1-7305ffeda962-cni-log-dir\") pod \"calico-node-fcpl8\" (UID: \"b360ad18-2ee4-441e-a2e1-7305ffeda962\") " pod="calico-system/calico-node-fcpl8" Aug 13 07:16:44.821416 systemd[1]: Created slice kubepods-besteffort-pod3734a05a_a0e7_4557_a1c5_2a9e77ffe351.slice - libcontainer container kubepods-besteffort-pod3734a05a_a0e7_4557_a1c5_2a9e77ffe351.slice. Aug 13 07:16:44.905793 kubelet[1756]: E0813 07:16:44.905690 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:44.905793 kubelet[1756]: W0813 07:16:44.905713 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:44.905793 kubelet[1756]: E0813 07:16:44.905742 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:44.907696 kubelet[1756]: E0813 07:16:44.907672 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:44.907696 kubelet[1756]: W0813 07:16:44.907686 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:44.907696 kubelet[1756]: E0813 07:16:44.907697 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:44.912576 kubelet[1756]: E0813 07:16:44.912554 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:44.912576 kubelet[1756]: W0813 07:16:44.912572 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:44.912695 kubelet[1756]: E0813 07:16:44.912587 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:44.913425 kubelet[1756]: E0813 07:16:44.913363 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:44.913425 kubelet[1756]: W0813 07:16:44.913375 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:44.913425 kubelet[1756]: E0813 07:16:44.913385 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:44.914998 kubelet[1756]: E0813 07:16:44.914983 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:44.915042 kubelet[1756]: W0813 07:16:44.915010 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:44.915042 kubelet[1756]: E0813 07:16:44.915023 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:45.120985 containerd[1455]: time="2025-08-13T07:16:45.120866029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fcpl8,Uid:b360ad18-2ee4-441e-a2e1-7305ffeda962,Namespace:calico-system,Attempt:0,}" Aug 13 07:16:45.123948 kubelet[1756]: E0813 07:16:45.123916 1756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:45.124438 containerd[1455]: time="2025-08-13T07:16:45.124395106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q48ts,Uid:3734a05a-a0e7-4557-a1c5-2a9e77ffe351,Namespace:kube-system,Attempt:0,}" Aug 13 07:16:45.775182 kubelet[1756]: E0813 07:16:45.775067 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:16:45.867003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1001043987.mount: Deactivated successfully. 
Aug 13 07:16:45.875061 containerd[1455]: time="2025-08-13T07:16:45.875015529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:16:45.876012 containerd[1455]: time="2025-08-13T07:16:45.875978545Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:16:45.876721 containerd[1455]: time="2025-08-13T07:16:45.876656295Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 13 07:16:45.877613 containerd[1455]: time="2025-08-13T07:16:45.877582011Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:16:45.878470 containerd[1455]: time="2025-08-13T07:16:45.878436553Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:16:45.882080 containerd[1455]: time="2025-08-13T07:16:45.882029931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:16:45.882990 containerd[1455]: time="2025-08-13T07:16:45.882942111Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 758.446417ms" Aug 13 07:16:45.885461 containerd[1455]: time="2025-08-13T07:16:45.885412563Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 764.39459ms" Aug 13 07:16:46.030334 containerd[1455]: time="2025-08-13T07:16:46.030103168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:16:46.031625 containerd[1455]: time="2025-08-13T07:16:46.030228764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:16:46.031625 containerd[1455]: time="2025-08-13T07:16:46.031516198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:46.031820 containerd[1455]: time="2025-08-13T07:16:46.031777548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:46.034592 containerd[1455]: time="2025-08-13T07:16:46.033472195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:16:46.034592 containerd[1455]: time="2025-08-13T07:16:46.034306940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:16:46.034592 containerd[1455]: time="2025-08-13T07:16:46.034321638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:46.034592 containerd[1455]: time="2025-08-13T07:16:46.034417217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:46.202961 systemd[1]: Started cri-containerd-1e2e8716e73a94537dd92834c5ca24efe0031cd19aebce074c5ff12d3ddb339e.scope - libcontainer container 1e2e8716e73a94537dd92834c5ca24efe0031cd19aebce074c5ff12d3ddb339e. Aug 13 07:16:46.205674 systemd[1]: Started cri-containerd-696123e2a8ae735c25110770f74bfc3e8f94390b4adb69103001cde7788dd386.scope - libcontainer container 696123e2a8ae735c25110770f74bfc3e8f94390b4adb69103001cde7788dd386. Aug 13 07:16:46.238828 containerd[1455]: time="2025-08-13T07:16:46.238781520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q48ts,Uid:3734a05a-a0e7-4557-a1c5-2a9e77ffe351,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e2e8716e73a94537dd92834c5ca24efe0031cd19aebce074c5ff12d3ddb339e\"" Aug 13 07:16:46.239479 containerd[1455]: time="2025-08-13T07:16:46.239376586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fcpl8,Uid:b360ad18-2ee4-441e-a2e1-7305ffeda962,Namespace:calico-system,Attempt:0,} returns sandbox id \"696123e2a8ae735c25110770f74bfc3e8f94390b4adb69103001cde7788dd386\"" Aug 13 07:16:46.240658 kubelet[1756]: E0813 07:16:46.240613 1756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:46.242204 containerd[1455]: time="2025-08-13T07:16:46.242176385Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\"" Aug 13 07:16:46.776204 kubelet[1756]: E0813 07:16:46.776154 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:16:46.778723 kubelet[1756]: E0813 07:16:46.778652 1756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f2lmn" podUID="6f3ac557-6d90-4338-ba26-8876dbe35bc7" Aug 13 07:16:47.754951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2376208466.mount: Deactivated successfully. 
Aug 13 07:16:47.777040 kubelet[1756]: E0813 07:16:47.776987 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:16:48.272677 containerd[1455]: time="2025-08-13T07:16:48.272623704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:48.273338 containerd[1455]: time="2025-08-13T07:16:48.273277640Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.3: active requests=0, bytes read=31892666" Aug 13 07:16:48.274336 containerd[1455]: time="2025-08-13T07:16:48.274303283Z" level=info msg="ImageCreate event name:\"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:48.276282 containerd[1455]: time="2025-08-13T07:16:48.276250424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:48.276886 containerd[1455]: time="2025-08-13T07:16:48.276850198Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.3\" with image id \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\", repo tag \"registry.k8s.io/kube-proxy:v1.33.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\", size \"31891685\" in 2.034636914s" Aug 13 07:16:48.276886 containerd[1455]: time="2025-08-13T07:16:48.276884092Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\"" Aug 13 07:16:48.278320 containerd[1455]: time="2025-08-13T07:16:48.278256736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 07:16:48.282641 containerd[1455]: time="2025-08-13T07:16:48.282600690Z" level=info msg="CreateContainer within sandbox \"1e2e8716e73a94537dd92834c5ca24efe0031cd19aebce074c5ff12d3ddb339e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 07:16:48.298658 containerd[1455]: time="2025-08-13T07:16:48.298619896Z" level=info msg="CreateContainer within sandbox \"1e2e8716e73a94537dd92834c5ca24efe0031cd19aebce074c5ff12d3ddb339e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e8dffb3a6248a2540f6e61f928022d23d279bbf0af393c5efec7ab9bee20ee7f\"" Aug 13 07:16:48.299512 containerd[1455]: time="2025-08-13T07:16:48.299473437Z" level=info msg="StartContainer for \"e8dffb3a6248a2540f6e61f928022d23d279bbf0af393c5efec7ab9bee20ee7f\"" Aug 13 07:16:48.442892 systemd[1]: Started cri-containerd-e8dffb3a6248a2540f6e61f928022d23d279bbf0af393c5efec7ab9bee20ee7f.scope - libcontainer container e8dffb3a6248a2540f6e61f928022d23d279bbf0af393c5efec7ab9bee20ee7f. 
Aug 13 07:16:48.498895 containerd[1455]: time="2025-08-13T07:16:48.498845980Z" level=info msg="StartContainer for \"e8dffb3a6248a2540f6e61f928022d23d279bbf0af393c5efec7ab9bee20ee7f\" returns successfully" Aug 13 07:16:48.777274 kubelet[1756]: E0813 07:16:48.777240 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:16:48.778633 kubelet[1756]: E0813 07:16:48.778595 1756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f2lmn" podUID="6f3ac557-6d90-4338-ba26-8876dbe35bc7" Aug 13 07:16:48.788947 kubelet[1756]: E0813 07:16:48.788910 1756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:48.812248 kubelet[1756]: E0813 07:16:48.812232 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.812248 kubelet[1756]: W0813 07:16:48.812247 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.812315 kubelet[1756]: E0813 07:16:48.812263 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:48.812468 kubelet[1756]: E0813 07:16:48.812456 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.812468 kubelet[1756]: W0813 07:16:48.812466 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.812527 kubelet[1756]: E0813 07:16:48.812474 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:48.812689 kubelet[1756]: E0813 07:16:48.812668 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.812689 kubelet[1756]: W0813 07:16:48.812676 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.812689 kubelet[1756]: E0813 07:16:48.812684 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:48.812934 kubelet[1756]: E0813 07:16:48.812920 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.812934 kubelet[1756]: W0813 07:16:48.812929 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.813000 kubelet[1756]: E0813 07:16:48.812938 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:48.813127 kubelet[1756]: E0813 07:16:48.813114 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.813127 kubelet[1756]: W0813 07:16:48.813123 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.813180 kubelet[1756]: E0813 07:16:48.813130 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:48.813287 kubelet[1756]: E0813 07:16:48.813276 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.813287 kubelet[1756]: W0813 07:16:48.813284 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.813338 kubelet[1756]: E0813 07:16:48.813291 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:48.813453 kubelet[1756]: E0813 07:16:48.813441 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.813453 kubelet[1756]: W0813 07:16:48.813451 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.813509 kubelet[1756]: E0813 07:16:48.813460 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:48.813658 kubelet[1756]: E0813 07:16:48.813645 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.813658 kubelet[1756]: W0813 07:16:48.813655 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.813721 kubelet[1756]: E0813 07:16:48.813662 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:48.813845 kubelet[1756]: E0813 07:16:48.813834 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.813845 kubelet[1756]: W0813 07:16:48.813842 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.813898 kubelet[1756]: E0813 07:16:48.813849 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:48.814051 kubelet[1756]: E0813 07:16:48.814033 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.814051 kubelet[1756]: W0813 07:16:48.814046 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.814051 kubelet[1756]: E0813 07:16:48.814059 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:48.814291 kubelet[1756]: E0813 07:16:48.814275 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.814291 kubelet[1756]: W0813 07:16:48.814286 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.814370 kubelet[1756]: E0813 07:16:48.814297 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:48.814497 kubelet[1756]: E0813 07:16:48.814481 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.814497 kubelet[1756]: W0813 07:16:48.814492 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.814568 kubelet[1756]: E0813 07:16:48.814502 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:48.814700 kubelet[1756]: E0813 07:16:48.814684 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.814700 kubelet[1756]: W0813 07:16:48.814695 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.814782 kubelet[1756]: E0813 07:16:48.814704 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:48.814914 kubelet[1756]: E0813 07:16:48.814899 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.814914 kubelet[1756]: W0813 07:16:48.814909 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.814976 kubelet[1756]: E0813 07:16:48.814920 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:48.815126 kubelet[1756]: E0813 07:16:48.815110 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.815126 kubelet[1756]: W0813 07:16:48.815121 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.815195 kubelet[1756]: E0813 07:16:48.815131 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:48.815323 kubelet[1756]: E0813 07:16:48.815308 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.815323 kubelet[1756]: W0813 07:16:48.815318 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.815405 kubelet[1756]: E0813 07:16:48.815329 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:48.815519 kubelet[1756]: E0813 07:16:48.815503 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.815519 kubelet[1756]: W0813 07:16:48.815515 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.815579 kubelet[1756]: E0813 07:16:48.815525 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:48.815714 kubelet[1756]: E0813 07:16:48.815699 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.815714 kubelet[1756]: W0813 07:16:48.815709 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.815791 kubelet[1756]: E0813 07:16:48.815719 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:48.815930 kubelet[1756]: E0813 07:16:48.815914 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.815930 kubelet[1756]: W0813 07:16:48.815925 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.815997 kubelet[1756]: E0813 07:16:48.815935 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:48.816134 kubelet[1756]: E0813 07:16:48.816118 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.816134 kubelet[1756]: W0813 07:16:48.816129 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.816202 kubelet[1756]: E0813 07:16:48.816138 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:48.913355 kubelet[1756]: E0813 07:16:48.913322 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.913355 kubelet[1756]: W0813 07:16:48.913346 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.913514 kubelet[1756]: E0813 07:16:48.913375 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:48.913640 kubelet[1756]: E0813 07:16:48.913626 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.913640 kubelet[1756]: W0813 07:16:48.913637 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.913764 kubelet[1756]: E0813 07:16:48.913647 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:48.913945 kubelet[1756]: E0813 07:16:48.913930 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.913945 kubelet[1756]: W0813 07:16:48.913941 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.914032 kubelet[1756]: E0813 07:16:48.913951 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:48.914304 kubelet[1756]: E0813 07:16:48.914287 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.914304 kubelet[1756]: W0813 07:16:48.914302 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.914371 kubelet[1756]: E0813 07:16:48.914314 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:48.914607 kubelet[1756]: E0813 07:16:48.914593 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.914637 kubelet[1756]: W0813 07:16:48.914605 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.914637 kubelet[1756]: E0813 07:16:48.914618 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:48.914884 kubelet[1756]: E0813 07:16:48.914871 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.914884 kubelet[1756]: W0813 07:16:48.914883 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.914930 kubelet[1756]: E0813 07:16:48.914893 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:48.915197 kubelet[1756]: E0813 07:16:48.915115 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.915197 kubelet[1756]: W0813 07:16:48.915129 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.915197 kubelet[1756]: E0813 07:16:48.915139 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:48.915383 kubelet[1756]: E0813 07:16:48.915366 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.915383 kubelet[1756]: W0813 07:16:48.915377 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.915465 kubelet[1756]: E0813 07:16:48.915387 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:48.915596 kubelet[1756]: E0813 07:16:48.915583 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.915596 kubelet[1756]: W0813 07:16:48.915593 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.915656 kubelet[1756]: E0813 07:16:48.915602 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:48.915838 kubelet[1756]: E0813 07:16:48.915825 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.915838 kubelet[1756]: W0813 07:16:48.915834 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.915916 kubelet[1756]: E0813 07:16:48.915841 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:48.916179 kubelet[1756]: E0813 07:16:48.916159 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.916179 kubelet[1756]: W0813 07:16:48.916177 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.916258 kubelet[1756]: E0813 07:16:48.916192 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:48.916484 kubelet[1756]: E0813 07:16:48.916469 1756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:48.916484 kubelet[1756]: W0813 07:16:48.916481 1756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:48.916570 kubelet[1756]: E0813 07:16:48.916491 1756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:49.593478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1841490031.mount: Deactivated successfully. 
Aug 13 07:16:49.655417 containerd[1455]: time="2025-08-13T07:16:49.655356477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:49.656281 containerd[1455]: time="2025-08-13T07:16:49.656249301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5939797" Aug 13 07:16:49.657570 containerd[1455]: time="2025-08-13T07:16:49.657529652Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:49.659592 containerd[1455]: time="2025-08-13T07:16:49.659562624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:49.660255 containerd[1455]: time="2025-08-13T07:16:49.660203185Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.38191513s" Aug 13 07:16:49.660305 containerd[1455]: time="2025-08-13T07:16:49.660267465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 07:16:49.664767 containerd[1455]: time="2025-08-13T07:16:49.664713952Z" level=info msg="CreateContainer within sandbox \"696123e2a8ae735c25110770f74bfc3e8f94390b4adb69103001cde7788dd386\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 07:16:49.681370 containerd[1455]: time="2025-08-13T07:16:49.681320850Z" level=info msg="CreateContainer within sandbox \"696123e2a8ae735c25110770f74bfc3e8f94390b4adb69103001cde7788dd386\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1680485b7b52515bc1456c42f3521b1401d93eb33eec0ebadfc71727f843f0ad\"" Aug 13 07:16:49.682080 containerd[1455]: time="2025-08-13T07:16:49.682023117Z" level=info msg="StartContainer for \"1680485b7b52515bc1456c42f3521b1401d93eb33eec0ebadfc71727f843f0ad\"" Aug 13 07:16:49.711974 systemd[1]: Started cri-containerd-1680485b7b52515bc1456c42f3521b1401d93eb33eec0ebadfc71727f843f0ad.scope - libcontainer container 1680485b7b52515bc1456c42f3521b1401d93eb33eec0ebadfc71727f843f0ad. Aug 13 07:16:49.742172 containerd[1455]: time="2025-08-13T07:16:49.742123575Z" level=info msg="StartContainer for \"1680485b7b52515bc1456c42f3521b1401d93eb33eec0ebadfc71727f843f0ad\" returns successfully" Aug 13 07:16:49.752988 systemd[1]: cri-containerd-1680485b7b52515bc1456c42f3521b1401d93eb33eec0ebadfc71727f843f0ad.scope: Deactivated successfully. 
Aug 13 07:16:49.777452 kubelet[1756]: E0813 07:16:49.777391 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:16:49.791173 kubelet[1756]: E0813 07:16:49.791132 1756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:49.827316 kubelet[1756]: I0813 07:16:49.827230 1756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q48ts" podStartSLOduration=4.790898499 podStartE2EDuration="6.827204913s" podCreationTimestamp="2025-08-13 07:16:43 +0000 UTC" firstStartedPulling="2025-08-13 07:16:46.241713678 +0000 UTC m=+4.052404458" lastFinishedPulling="2025-08-13 07:16:48.278020092 +0000 UTC m=+6.088710872" observedRunningTime="2025-08-13 07:16:48.872831353 +0000 UTC m=+6.683522143" watchObservedRunningTime="2025-08-13 07:16:49.827204913 +0000 UTC m=+7.637895693" Aug 13 07:16:50.311793 containerd[1455]: time="2025-08-13T07:16:50.311681006Z" level=info msg="shim disconnected" id=1680485b7b52515bc1456c42f3521b1401d93eb33eec0ebadfc71727f843f0ad namespace=k8s.io Aug 13 07:16:50.311793 containerd[1455]: time="2025-08-13T07:16:50.311800920Z" level=warning msg="cleaning up after shim disconnected" id=1680485b7b52515bc1456c42f3521b1401d93eb33eec0ebadfc71727f843f0ad namespace=k8s.io Aug 13 07:16:50.311985 containerd[1455]: time="2025-08-13T07:16:50.311813264Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:16:50.573033 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1680485b7b52515bc1456c42f3521b1401d93eb33eec0ebadfc71727f843f0ad-rootfs.mount: Deactivated successfully. Aug 13 07:16:50.777621 kubelet[1756]: E0813 07:16:50.777585 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:16:50.778938 kubelet[1756]: E0813 07:16:50.778902 1756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f2lmn" podUID="6f3ac557-6d90-4338-ba26-8876dbe35bc7" Aug 13 07:16:50.793665 containerd[1455]: time="2025-08-13T07:16:50.793633306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 07:16:51.778010 kubelet[1756]: E0813 07:16:51.777912 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:16:52.778978 kubelet[1756]: E0813 07:16:52.778885 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:16:52.779840 kubelet[1756]: E0813 07:16:52.779760 1756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f2lmn" podUID="6f3ac557-6d90-4338-ba26-8876dbe35bc7" Aug 13 07:16:53.779656 kubelet[1756]: E0813 07:16:53.779590 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:16:54.778252 kubelet[1756]: E0813 07:16:54.778193 1756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f2lmn" podUID="6f3ac557-6d90-4338-ba26-8876dbe35bc7" Aug 13 07:16:54.779807 kubelet[1756]: E0813 07:16:54.779764 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:16:54.950231 containerd[1455]: time="2025-08-13T07:16:54.950137450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:54.951191 containerd[1455]: time="2025-08-13T07:16:54.951126124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 07:16:54.953287 containerd[1455]: time="2025-08-13T07:16:54.953256508Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:54.956338 containerd[1455]: time="2025-08-13T07:16:54.956293332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:54.956983 containerd[1455]: time="2025-08-13T07:16:54.956930958Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.163260392s" Aug 13 07:16:54.957027 containerd[1455]: time="2025-08-13T07:16:54.956992804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 07:16:54.962904 containerd[1455]: time="2025-08-13T07:16:54.962868411Z" level=info msg="CreateContainer within sandbox \"696123e2a8ae735c25110770f74bfc3e8f94390b4adb69103001cde7788dd386\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 07:16:54.987615 containerd[1455]: time="2025-08-13T07:16:54.987536323Z" level=info msg="CreateContainer within sandbox \"696123e2a8ae735c25110770f74bfc3e8f94390b4adb69103001cde7788dd386\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f21bd63ce9077efd82bce059233a568ef921f759c5395c5e8c2a804e959eeff9\"" Aug 13 07:16:54.988173 containerd[1455]: time="2025-08-13T07:16:54.988099048Z" level=info msg="StartContainer for \"f21bd63ce9077efd82bce059233a568ef921f759c5395c5e8c2a804e959eeff9\"" Aug 13 07:16:55.036973 systemd[1]: run-containerd-runc-k8s.io-f21bd63ce9077efd82bce059233a568ef921f759c5395c5e8c2a804e959eeff9-runc.xSuirv.mount: Deactivated successfully. Aug 13 07:16:55.051025 systemd[1]: Started cri-containerd-f21bd63ce9077efd82bce059233a568ef921f759c5395c5e8c2a804e959eeff9.scope - libcontainer container f21bd63ce9077efd82bce059233a568ef921f759c5395c5e8c2a804e959eeff9. 
Aug 13 07:16:55.094199 containerd[1455]: time="2025-08-13T07:16:55.094146267Z" level=info msg="StartContainer for \"f21bd63ce9077efd82bce059233a568ef921f759c5395c5e8c2a804e959eeff9\" returns successfully" Aug 13 07:16:55.779890 kubelet[1756]: E0813 07:16:55.779836 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:16:56.778398 kubelet[1756]: E0813 07:16:56.778308 1756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f2lmn" podUID="6f3ac557-6d90-4338-ba26-8876dbe35bc7" Aug 13 07:16:56.780281 kubelet[1756]: E0813 07:16:56.780167 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:16:57.576149 systemd[1]: cri-containerd-f21bd63ce9077efd82bce059233a568ef921f759c5395c5e8c2a804e959eeff9.scope: Deactivated successfully. Aug 13 07:16:57.576636 systemd[1]: cri-containerd-f21bd63ce9077efd82bce059233a568ef921f759c5395c5e8c2a804e959eeff9.scope: Consumed 1.703s CPU time. Aug 13 07:16:57.602302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f21bd63ce9077efd82bce059233a568ef921f759c5395c5e8c2a804e959eeff9-rootfs.mount: Deactivated successfully. Aug 13 07:16:57.636276 kubelet[1756]: I0813 07:16:57.636215 1756 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 07:16:57.780400 kubelet[1756]: E0813 07:16:57.780353 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:16:58.240411 containerd[1455]: time="2025-08-13T07:16:58.240319851Z" level=info msg="shim disconnected" id=f21bd63ce9077efd82bce059233a568ef921f759c5395c5e8c2a804e959eeff9 namespace=k8s.io Aug 13 07:16:58.240411 containerd[1455]: time="2025-08-13T07:16:58.240396796Z" level=warning msg="cleaning up after shim disconnected" id=f21bd63ce9077efd82bce059233a568ef921f759c5395c5e8c2a804e959eeff9 namespace=k8s.io Aug 13 07:16:58.240411 containerd[1455]: time="2025-08-13T07:16:58.240409840Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:16:58.781196 kubelet[1756]: E0813 07:16:58.781159 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:16:58.784520 systemd[1]: Created slice kubepods-besteffort-pod6f3ac557_6d90_4338_ba26_8876dbe35bc7.slice - libcontainer container kubepods-besteffort-pod6f3ac557_6d90_4338_ba26_8876dbe35bc7.slice. 
Aug 13 07:16:58.786995 containerd[1455]: time="2025-08-13T07:16:58.786948289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f2lmn,Uid:6f3ac557-6d90-4338-ba26-8876dbe35bc7,Namespace:calico-system,Attempt:0,}" Aug 13 07:16:58.810152 containerd[1455]: time="2025-08-13T07:16:58.810102995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 07:16:58.860265 containerd[1455]: time="2025-08-13T07:16:58.860183205Z" level=error msg="Failed to destroy network for sandbox \"930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:58.860663 containerd[1455]: time="2025-08-13T07:16:58.860626606Z" level=error msg="encountered an error cleaning up failed sandbox \"930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:58.860717 containerd[1455]: time="2025-08-13T07:16:58.860689334Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f2lmn,Uid:6f3ac557-6d90-4338-ba26-8876dbe35bc7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:58.861022 kubelet[1756]: E0813 07:16:58.860971 1756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:16:58.861092 kubelet[1756]: E0813 07:16:58.861069 1756 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f2lmn" Aug 13 07:16:58.861133 kubelet[1756]: E0813 07:16:58.861110 1756 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f2lmn" Aug 13 07:16:58.861227 kubelet[1756]: E0813 07:16:58.861193 1756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-f2lmn_calico-system(6f3ac557-6d90-4338-ba26-8876dbe35bc7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-f2lmn_calico-system(6f3ac557-6d90-4338-ba26-8876dbe35bc7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f2lmn" podUID="6f3ac557-6d90-4338-ba26-8876dbe35bc7" Aug 13 07:16:58.861792 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9-shm.mount: Deactivated successfully. Aug 13 07:16:59.781317 kubelet[1756]: E0813 07:16:59.781280 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:16:59.811776 kubelet[1756]: I0813 07:16:59.811711 1756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" Aug 13 07:16:59.812470 containerd[1455]: time="2025-08-13T07:16:59.812412865Z" level=info msg="StopPodSandbox for \"930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9\"" Aug 13 07:16:59.812947 containerd[1455]: time="2025-08-13T07:16:59.812621897Z" level=info msg="Ensure that sandbox 930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9 in task-service has been cleanup successfully" Aug 13 07:17:00.019143 containerd[1455]: time="2025-08-13T07:17:00.019052454Z" level=error msg="StopPodSandbox for \"930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9\" failed" error="failed to destroy network for sandbox \"930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:00.019415 kubelet[1756]: E0813 07:17:00.019368 1756 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" Aug 13 07:17:00.019495 kubelet[1756]: E0813 07:17:00.019443 1756 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9"} Aug 13 07:17:00.019538 kubelet[1756]: E0813 07:17:00.019516 1756 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6f3ac557-6d90-4338-ba26-8876dbe35bc7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:17:00.019603 kubelet[1756]: E0813 07:17:00.019550 1756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6f3ac557-6d90-4338-ba26-8876dbe35bc7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f2lmn" podUID="6f3ac557-6d90-4338-ba26-8876dbe35bc7" Aug 13 07:17:00.642580 systemd[1]: Created slice kubepods-besteffort-pod16528fd6_de3d_4f21_ac9c_1a1808f49a2d.slice - libcontainer container kubepods-besteffort-pod16528fd6_de3d_4f21_ac9c_1a1808f49a2d.slice. Aug 13 07:17:00.698683 kubelet[1756]: I0813 07:17:00.698621 1756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ktmd\" (UniqueName: \"kubernetes.io/projected/16528fd6-de3d-4f21-ac9c-1a1808f49a2d-kube-api-access-5ktmd\") pod \"nginx-deployment-7fcdb87857-rw5mx\" (UID: \"16528fd6-de3d-4f21-ac9c-1a1808f49a2d\") " pod="default/nginx-deployment-7fcdb87857-rw5mx" Aug 13 07:17:00.782425 kubelet[1756]: E0813 07:17:00.782364 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:00.949209 containerd[1455]: time="2025-08-13T07:17:00.949041193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rw5mx,Uid:16528fd6-de3d-4f21-ac9c-1a1808f49a2d,Namespace:default,Attempt:0,}" Aug 13 07:17:01.125014 containerd[1455]: time="2025-08-13T07:17:01.124934393Z" level=error msg="Failed to destroy network for sandbox \"77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:01.127082 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7-shm.mount: Deactivated successfully. 
Aug 13 07:17:01.130548 containerd[1455]: time="2025-08-13T07:17:01.130497564Z" level=error msg="encountered an error cleaning up failed sandbox \"77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:01.130625 containerd[1455]: time="2025-08-13T07:17:01.130558889Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rw5mx,Uid:16528fd6-de3d-4f21-ac9c-1a1808f49a2d,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:01.130914 kubelet[1756]: E0813 07:17:01.130865 1756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:01.131006 kubelet[1756]: E0813 07:17:01.130946 1756 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-rw5mx" Aug 13 07:17:01.131006 kubelet[1756]: E0813 07:17:01.130974 1756 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-rw5mx" Aug 13 07:17:01.131085 kubelet[1756]: E0813 07:17:01.131047 1756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-rw5mx_default(16528fd6-de3d-4f21-ac9c-1a1808f49a2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-rw5mx_default(16528fd6-de3d-4f21-ac9c-1a1808f49a2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-rw5mx" podUID="16528fd6-de3d-4f21-ac9c-1a1808f49a2d" Aug 13 07:17:01.783257 kubelet[1756]: E0813 07:17:01.783201 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:01.817144 kubelet[1756]: I0813 07:17:01.817089 1756 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" Aug 13 07:17:01.817827 containerd[1455]: time="2025-08-13T07:17:01.817779978Z" level=info msg="StopPodSandbox for \"77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7\"" Aug 13 07:17:01.818076 containerd[1455]: time="2025-08-13T07:17:01.818002886Z" level=info msg="Ensure that sandbox 77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7 in task-service has been cleanup successfully" Aug 13 07:17:01.851432 containerd[1455]: time="2025-08-13T07:17:01.851376632Z" level=error msg="StopPodSandbox for \"77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7\" failed" error="failed to destroy network for sandbox \"77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:01.851715 kubelet[1756]: E0813 07:17:01.851645 1756 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" Aug 13 07:17:01.851799 kubelet[1756]: E0813 07:17:01.851740 1756 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7"} Aug 13 07:17:01.851841 kubelet[1756]: E0813 07:17:01.851803 1756 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"16528fd6-de3d-4f21-ac9c-1a1808f49a2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:17:01.851906 kubelet[1756]: E0813 07:17:01.851832 1756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"16528fd6-de3d-4f21-ac9c-1a1808f49a2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-rw5mx" podUID="16528fd6-de3d-4f21-ac9c-1a1808f49a2d" Aug 13 07:17:02.772272 kubelet[1756]: E0813 07:17:02.772059 1756 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:02.783596 kubelet[1756]: E0813 07:17:02.783557 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:03.784302 kubelet[1756]: E0813 07:17:03.784250 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:04.784860 kubelet[1756]: E0813 07:17:04.784793 1756 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:05.543441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount740701659.mount: Deactivated successfully. Aug 13 07:17:05.785804 kubelet[1756]: E0813 07:17:05.785762 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:06.481556 containerd[1455]: time="2025-08-13T07:17:06.481461780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:06.483380 containerd[1455]: time="2025-08-13T07:17:06.483330143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 07:17:06.485441 containerd[1455]: time="2025-08-13T07:17:06.485376310Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:06.488159 containerd[1455]: time="2025-08-13T07:17:06.488111549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:06.488676 containerd[1455]: time="2025-08-13T07:17:06.488644889Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 7.678493133s" Aug 13 07:17:06.488710 containerd[1455]: time="2025-08-13T07:17:06.488674995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 07:17:06.499011 containerd[1455]: time="2025-08-13T07:17:06.498966191Z" level=info msg="CreateContainer within sandbox \"696123e2a8ae735c25110770f74bfc3e8f94390b4adb69103001cde7788dd386\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 07:17:06.515183 containerd[1455]: time="2025-08-13T07:17:06.515125830Z" level=info msg="CreateContainer within sandbox \"696123e2a8ae735c25110770f74bfc3e8f94390b4adb69103001cde7788dd386\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"85700110fb3fbb59b12445a5f6d714a9c2c1ec6119636b992fdc5e61539bed95\"" Aug 13 07:17:06.515827 containerd[1455]: time="2025-08-13T07:17:06.515790066Z" level=info msg="StartContainer for \"85700110fb3fbb59b12445a5f6d714a9c2c1ec6119636b992fdc5e61539bed95\"" Aug 13 07:17:06.559905 systemd[1]: Started cri-containerd-85700110fb3fbb59b12445a5f6d714a9c2c1ec6119636b992fdc5e61539bed95.scope - libcontainer container 85700110fb3fbb59b12445a5f6d714a9c2c1ec6119636b992fdc5e61539bed95. Aug 13 07:17:06.684342 containerd[1455]: time="2025-08-13T07:17:06.684267521Z" level=info msg="StartContainer for \"85700110fb3fbb59b12445a5f6d714a9c2c1ec6119636b992fdc5e61539bed95\" returns successfully" Aug 13 07:17:06.740772 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 07:17:06.740928 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 13 07:17:06.786862 kubelet[1756]: E0813 07:17:06.786805 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:07.014428 kubelet[1756]: I0813 07:17:07.014341 1756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fcpl8" podStartSLOduration=3.766817749 podStartE2EDuration="24.01430344s" podCreationTimestamp="2025-08-13 07:16:43 +0000 UTC" firstStartedPulling="2025-08-13 07:16:46.241876092 +0000 UTC m=+4.052566872" lastFinishedPulling="2025-08-13 07:17:06.489361783 +0000 UTC m=+24.300052563" observedRunningTime="2025-08-13 07:17:07.01426183 +0000 UTC m=+24.824952630" watchObservedRunningTime="2025-08-13 07:17:07.01430344 +0000 UTC m=+24.824994220" Aug 13 07:17:07.787333 kubelet[1756]: E0813 07:17:07.787292 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:08.505809 kernel: bpftool[2612]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 07:17:08.788052 kubelet[1756]: E0813 07:17:08.788002 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:08.822372 systemd-networkd[1399]: vxlan.calico: Link UP Aug 13 07:17:08.822388 systemd-networkd[1399]: vxlan.calico: Gained carrier Aug 13 07:17:09.788305 kubelet[1756]: E0813 07:17:09.788244 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:10.592064 systemd-networkd[1399]: vxlan.calico: Gained IPv6LL Aug 13 07:17:10.789289 kubelet[1756]: E0813 07:17:10.789194 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:11.789466 kubelet[1756]: E0813 07:17:11.789389 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:12.779741 containerd[1455]: time="2025-08-13T07:17:12.779682159Z" level=info msg="StopPodSandbox for \"930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9\"" Aug 13 07:17:12.790134 kubelet[1756]: E0813 07:17:12.790090 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:13.643876 containerd[1455]: 2025-08-13 07:17:13.204 [INFO][2697] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" Aug 13 07:17:13.643876 containerd[1455]: 2025-08-13 07:17:13.204 [INFO][2697] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" iface="eth0" netns="/var/run/netns/cni-e5c148c5-6112-ada3-b349-4025d1948991" Aug 13 07:17:13.643876 containerd[1455]: 2025-08-13 07:17:13.204 [INFO][2697] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" iface="eth0" netns="/var/run/netns/cni-e5c148c5-6112-ada3-b349-4025d1948991" Aug 13 07:17:13.643876 containerd[1455]: 2025-08-13 07:17:13.205 [INFO][2697] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" iface="eth0" netns="/var/run/netns/cni-e5c148c5-6112-ada3-b349-4025d1948991" Aug 13 07:17:13.643876 containerd[1455]: 2025-08-13 07:17:13.205 [INFO][2697] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" Aug 13 07:17:13.643876 containerd[1455]: 2025-08-13 07:17:13.205 [INFO][2697] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" Aug 13 07:17:13.643876 containerd[1455]: 2025-08-13 07:17:13.235 [INFO][2705] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" HandleID="k8s-pod-network.930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" Workload="10.0.0.146-k8s-csi--node--driver--f2lmn-eth0" Aug 13 07:17:13.643876 containerd[1455]: 2025-08-13 07:17:13.235 [INFO][2705] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:13.643876 containerd[1455]: 2025-08-13 07:17:13.235 [INFO][2705] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:13.643876 containerd[1455]: 2025-08-13 07:17:13.634 [WARNING][2705] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" HandleID="k8s-pod-network.930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" Workload="10.0.0.146-k8s-csi--node--driver--f2lmn-eth0" Aug 13 07:17:13.643876 containerd[1455]: 2025-08-13 07:17:13.634 [INFO][2705] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" HandleID="k8s-pod-network.930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" Workload="10.0.0.146-k8s-csi--node--driver--f2lmn-eth0" Aug 13 07:17:13.643876 containerd[1455]: 2025-08-13 07:17:13.636 [INFO][2705] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:13.643876 containerd[1455]: 2025-08-13 07:17:13.641 [INFO][2697] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" Aug 13 07:17:13.644256 containerd[1455]: time="2025-08-13T07:17:13.644067676Z" level=info msg="TearDown network for sandbox \"930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9\" successfully" Aug 13 07:17:13.644256 containerd[1455]: time="2025-08-13T07:17:13.644096932Z" level=info msg="StopPodSandbox for \"930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9\" returns successfully" Aug 13 07:17:13.644989 containerd[1455]: time="2025-08-13T07:17:13.644945834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f2lmn,Uid:6f3ac557-6d90-4338-ba26-8876dbe35bc7,Namespace:calico-system,Attempt:1,}" Aug 13 07:17:13.645873 systemd[1]: run-netns-cni\x2de5c148c5\x2d6112\x2dada3\x2db349\x2d4025d1948991.mount: Deactivated successfully. 
Aug 13 07:17:13.790997 kubelet[1756]: E0813 07:17:13.790956 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:13.834059 systemd-networkd[1399]: cali0dd1eb88340: Link UP Aug 13 07:17:13.835335 systemd-networkd[1399]: cali0dd1eb88340: Gained carrier Aug 13 07:17:13.848672 containerd[1455]: 2025-08-13 07:17:13.770 [INFO][2713] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.146-k8s-csi--node--driver--f2lmn-eth0 csi-node-driver- calico-system 6f3ac557-6d90-4338-ba26-8876dbe35bc7 1376 0 2025-08-13 07:16:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.146 csi-node-driver-f2lmn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0dd1eb88340 [] [] }} ContainerID="3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d" Namespace="calico-system" Pod="csi-node-driver-f2lmn" WorkloadEndpoint="10.0.0.146-k8s-csi--node--driver--f2lmn-" Aug 13 07:17:13.848672 containerd[1455]: 2025-08-13 07:17:13.771 [INFO][2713] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d" Namespace="calico-system" Pod="csi-node-driver-f2lmn" WorkloadEndpoint="10.0.0.146-k8s-csi--node--driver--f2lmn-eth0" Aug 13 07:17:13.848672 containerd[1455]: 2025-08-13 07:17:13.797 [INFO][2728] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d" HandleID="k8s-pod-network.3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d" Workload="10.0.0.146-k8s-csi--node--driver--f2lmn-eth0" Aug 13 07:17:13.848672 containerd[1455]: 2025-08-13 07:17:13.797 [INFO][2728] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d" HandleID="k8s-pod-network.3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d" Workload="10.0.0.146-k8s-csi--node--driver--f2lmn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138e70), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.146", "pod":"csi-node-driver-f2lmn", "timestamp":"2025-08-13 07:17:13.797801421 +0000 UTC"}, Hostname:"10.0.0.146", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:17:13.848672 containerd[1455]: 2025-08-13 07:17:13.798 [INFO][2728] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:13.848672 containerd[1455]: 2025-08-13 07:17:13.798 [INFO][2728] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:17:13.848672 containerd[1455]: 2025-08-13 07:17:13.798 [INFO][2728] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.146' Aug 13 07:17:13.848672 containerd[1455]: 2025-08-13 07:17:13.805 [INFO][2728] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d" host="10.0.0.146" Aug 13 07:17:13.848672 containerd[1455]: 2025-08-13 07:17:13.811 [INFO][2728] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.146" Aug 13 07:17:13.848672 containerd[1455]: 2025-08-13 07:17:13.814 [INFO][2728] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="10.0.0.146" Aug 13 07:17:13.848672 containerd[1455]: 2025-08-13 07:17:13.816 [INFO][2728] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="10.0.0.146" Aug 13 07:17:13.848672 containerd[1455]: 2025-08-13 07:17:13.818 [INFO][2728] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="10.0.0.146" Aug 13 07:17:13.848672 containerd[1455]: 2025-08-13 07:17:13.818 [INFO][2728] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d" host="10.0.0.146" Aug 13 07:17:13.848672 containerd[1455]: 2025-08-13 07:17:13.820 [INFO][2728] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d Aug 13 07:17:13.848672 containerd[1455]: 2025-08-13 07:17:13.823 [INFO][2728] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d" host="10.0.0.146" Aug 13 07:17:13.848672 containerd[1455]: 2025-08-13 07:17:13.828 [INFO][2728] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.129/26] block=192.168.125.128/26 handle="k8s-pod-network.3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d" host="10.0.0.146" Aug 13 07:17:13.848672 containerd[1455]: 2025-08-13 07:17:13.828 [INFO][2728] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.129/26] handle="k8s-pod-network.3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d" host="10.0.0.146" Aug 13 07:17:13.848672 containerd[1455]: 2025-08-13 07:17:13.828 [INFO][2728] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
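
The IPAM trace above shows Calico's block-affinity model at work: host 10.0.0.146 holds an affinity for the /26 block 192.168.125.128/26, and the first workload address handed out from it is 192.168.125.129. A /26 spans 64 addresses (.128 through .191), so later pods on this node keep drawing from the same block, as the nginx and nfs-provisioner assignments further down confirm. A small Go check of that arithmetic, using only values from the log:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Block claimed with affinity for host "10.0.0.146" in the IPAM entries above.
        block := netip.MustParsePrefix("192.168.125.128/26")
        assigned := netip.MustParseAddr("192.168.125.129")

        size := 1 << (32 - block.Bits()) // 2^(32-26) = 64 addresses
        fmt.Printf("%s holds %d addresses\n", block, size)
        fmt.Printf("%s within block: %v\n", assigned, block.Contains(assigned))
    }
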
Aug 13 07:17:13.848672 containerd[1455]: 2025-08-13 07:17:13.828 [INFO][2728] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.129/26] IPv6=[] ContainerID="3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d" HandleID="k8s-pod-network.3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d" Workload="10.0.0.146-k8s-csi--node--driver--f2lmn-eth0" Aug 13 07:17:13.849725 containerd[1455]: 2025-08-13 07:17:13.831 [INFO][2713] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d" Namespace="calico-system" Pod="csi-node-driver-f2lmn" WorkloadEndpoint="10.0.0.146-k8s-csi--node--driver--f2lmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.146-k8s-csi--node--driver--f2lmn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6f3ac557-6d90-4338-ba26-8876dbe35bc7", ResourceVersion:"1376", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.146", ContainerID:"", Pod:"csi-node-driver-f2lmn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0dd1eb88340", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:13.849725 containerd[1455]: 2025-08-13 07:17:13.831 [INFO][2713] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.129/32] ContainerID="3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d" Namespace="calico-system" Pod="csi-node-driver-f2lmn" WorkloadEndpoint="10.0.0.146-k8s-csi--node--driver--f2lmn-eth0" Aug 13 07:17:13.849725 containerd[1455]: 2025-08-13 07:17:13.831 [INFO][2713] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0dd1eb88340 ContainerID="3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d" Namespace="calico-system" Pod="csi-node-driver-f2lmn" WorkloadEndpoint="10.0.0.146-k8s-csi--node--driver--f2lmn-eth0" Aug 13 07:17:13.849725 containerd[1455]: 2025-08-13 07:17:13.834 [INFO][2713] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d" Namespace="calico-system" Pod="csi-node-driver-f2lmn" WorkloadEndpoint="10.0.0.146-k8s-csi--node--driver--f2lmn-eth0" Aug 13 07:17:13.849725 containerd[1455]: 2025-08-13 07:17:13.835 [INFO][2713] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d" Namespace="calico-system" Pod="csi-node-driver-f2lmn" 
WorkloadEndpoint="10.0.0.146-k8s-csi--node--driver--f2lmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.146-k8s-csi--node--driver--f2lmn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6f3ac557-6d90-4338-ba26-8876dbe35bc7", ResourceVersion:"1376", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.146", ContainerID:"3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d", Pod:"csi-node-driver-f2lmn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0dd1eb88340", MAC:"ba:2b:f2:43:40:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:13.849725 containerd[1455]: 2025-08-13 07:17:13.844 [INFO][2713] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d" Namespace="calico-system" Pod="csi-node-driver-f2lmn" WorkloadEndpoint="10.0.0.146-k8s-csi--node--driver--f2lmn-eth0" Aug 13 07:17:13.867302 containerd[1455]: time="2025-08-13T07:17:13.867152734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:13.867302 containerd[1455]: time="2025-08-13T07:17:13.867236695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:13.867302 containerd[1455]: time="2025-08-13T07:17:13.867248498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:13.867980 containerd[1455]: time="2025-08-13T07:17:13.867343098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:13.886910 systemd[1]: Started cri-containerd-3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d.scope - libcontainer container 3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d. 
Aug 13 07:17:13.897975 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:17:13.913886 containerd[1455]: time="2025-08-13T07:17:13.913834896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f2lmn,Uid:6f3ac557-6d90-4338-ba26-8876dbe35bc7,Namespace:calico-system,Attempt:1,} returns sandbox id \"3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d\"" Aug 13 07:17:13.915674 containerd[1455]: time="2025-08-13T07:17:13.915624854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 07:17:14.791394 kubelet[1756]: E0813 07:17:14.791332 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:15.456059 systemd-networkd[1399]: cali0dd1eb88340: Gained IPv6LL Aug 13 07:17:15.489216 containerd[1455]: time="2025-08-13T07:17:15.489153464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:15.489857 containerd[1455]: time="2025-08-13T07:17:15.489808222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 07:17:15.490946 containerd[1455]: time="2025-08-13T07:17:15.490917706Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:15.493160 containerd[1455]: time="2025-08-13T07:17:15.493127487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:15.493899 containerd[1455]: time="2025-08-13T07:17:15.493869231Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.578200833s" Aug 13 07:17:15.493947 containerd[1455]: time="2025-08-13T07:17:15.493904859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 07:17:15.498422 containerd[1455]: time="2025-08-13T07:17:15.498388112Z" level=info msg="CreateContainer within sandbox \"3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 07:17:15.514651 containerd[1455]: time="2025-08-13T07:17:15.514595916Z" level=info msg="CreateContainer within sandbox \"3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"980bf4c33ef20e3a2bf8f083b51f1fc9c1ef0e9a706a4f4921d53d971a87860b\"" Aug 13 07:17:15.515238 containerd[1455]: time="2025-08-13T07:17:15.515201190Z" level=info msg="StartContainer for \"980bf4c33ef20e3a2bf8f083b51f1fc9c1ef0e9a706a4f4921d53d971a87860b\"" Aug 13 07:17:15.546895 systemd[1]: Started cri-containerd-980bf4c33ef20e3a2bf8f083b51f1fc9c1ef0e9a706a4f4921d53d971a87860b.scope - libcontainer container 980bf4c33ef20e3a2bf8f083b51f1fc9c1ef0e9a706a4f4921d53d971a87860b. 
Aug 13 07:17:15.792308 kubelet[1756]: E0813 07:17:15.792263 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:15.803845 containerd[1455]: time="2025-08-13T07:17:15.803787654Z" level=info msg="StartContainer for \"980bf4c33ef20e3a2bf8f083b51f1fc9c1ef0e9a706a4f4921d53d971a87860b\" returns successfully" Aug 13 07:17:15.804817 containerd[1455]: time="2025-08-13T07:17:15.804783092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 07:17:16.779525 containerd[1455]: time="2025-08-13T07:17:16.779470400Z" level=info msg="StopPodSandbox for \"77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7\"" Aug 13 07:17:16.792609 kubelet[1756]: E0813 07:17:16.792566 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:16.969860 containerd[1455]: 2025-08-13 07:17:16.822 [INFO][2850] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" Aug 13 07:17:16.969860 containerd[1455]: 2025-08-13 07:17:16.822 [INFO][2850] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" iface="eth0" netns="/var/run/netns/cni-cb640caa-c80b-3e2f-7aac-cf3f0d31b97f" Aug 13 07:17:16.969860 containerd[1455]: 2025-08-13 07:17:16.822 [INFO][2850] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" iface="eth0" netns="/var/run/netns/cni-cb640caa-c80b-3e2f-7aac-cf3f0d31b97f" Aug 13 07:17:16.969860 containerd[1455]: 2025-08-13 07:17:16.822 [INFO][2850] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" iface="eth0" netns="/var/run/netns/cni-cb640caa-c80b-3e2f-7aac-cf3f0d31b97f" Aug 13 07:17:16.969860 containerd[1455]: 2025-08-13 07:17:16.822 [INFO][2850] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" Aug 13 07:17:16.969860 containerd[1455]: 2025-08-13 07:17:16.822 [INFO][2850] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" Aug 13 07:17:16.969860 containerd[1455]: 2025-08-13 07:17:16.841 [INFO][2859] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" HandleID="k8s-pod-network.77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" Workload="10.0.0.146-k8s-nginx--deployment--7fcdb87857--rw5mx-eth0" Aug 13 07:17:16.969860 containerd[1455]: 2025-08-13 07:17:16.841 [INFO][2859] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:16.969860 containerd[1455]: 2025-08-13 07:17:16.841 [INFO][2859] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:16.969860 containerd[1455]: 2025-08-13 07:17:16.963 [WARNING][2859] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" HandleID="k8s-pod-network.77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" Workload="10.0.0.146-k8s-nginx--deployment--7fcdb87857--rw5mx-eth0" Aug 13 07:17:16.969860 containerd[1455]: 2025-08-13 07:17:16.963 [INFO][2859] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" HandleID="k8s-pod-network.77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" Workload="10.0.0.146-k8s-nginx--deployment--7fcdb87857--rw5mx-eth0" Aug 13 07:17:16.969860 containerd[1455]: 2025-08-13 07:17:16.965 [INFO][2859] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:16.969860 containerd[1455]: 2025-08-13 07:17:16.967 [INFO][2850] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" Aug 13 07:17:16.970387 containerd[1455]: time="2025-08-13T07:17:16.970110229Z" level=info msg="TearDown network for sandbox \"77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7\" successfully" Aug 13 07:17:16.970387 containerd[1455]: time="2025-08-13T07:17:16.970141268Z" level=info msg="StopPodSandbox for \"77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7\" returns successfully" Aug 13 07:17:16.971124 containerd[1455]: time="2025-08-13T07:17:16.970897257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rw5mx,Uid:16528fd6-de3d-4f21-ac9c-1a1808f49a2d,Namespace:default,Attempt:1,}" Aug 13 07:17:16.972115 systemd[1]: run-netns-cni\x2dcb640caa\x2dc80b\x2d3e2f\x2d7aac\x2dcf3f0d31b97f.mount: Deactivated successfully. Aug 13 07:17:17.305694 systemd-networkd[1399]: cali8e04822713b: Link UP Aug 13 07:17:17.305923 systemd-networkd[1399]: cali8e04822713b: Gained carrier Aug 13 07:17:17.315235 containerd[1455]: 2025-08-13 07:17:17.244 [INFO][2868] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.146-k8s-nginx--deployment--7fcdb87857--rw5mx-eth0 nginx-deployment-7fcdb87857- default 16528fd6-de3d-4f21-ac9c-1a1808f49a2d 1396 0 2025-08-13 07:17:00 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.146 nginx-deployment-7fcdb87857-rw5mx eth0 default [] [] [kns.default ksa.default.default] cali8e04822713b [] [] }} ContainerID="f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b" Namespace="default" Pod="nginx-deployment-7fcdb87857-rw5mx" WorkloadEndpoint="10.0.0.146-k8s-nginx--deployment--7fcdb87857--rw5mx-" Aug 13 07:17:17.315235 containerd[1455]: 2025-08-13 07:17:17.244 [INFO][2868] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b" Namespace="default" Pod="nginx-deployment-7fcdb87857-rw5mx" WorkloadEndpoint="10.0.0.146-k8s-nginx--deployment--7fcdb87857--rw5mx-eth0" Aug 13 07:17:17.315235 containerd[1455]: 2025-08-13 07:17:17.269 [INFO][2882] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b" HandleID="k8s-pod-network.f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b" Workload="10.0.0.146-k8s-nginx--deployment--7fcdb87857--rw5mx-eth0" Aug 13 07:17:17.315235 containerd[1455]: 
2025-08-13 07:17:17.269 [INFO][2882] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b" HandleID="k8s-pod-network.f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b" Workload="10.0.0.146-k8s-nginx--deployment--7fcdb87857--rw5mx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001359c0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.146", "pod":"nginx-deployment-7fcdb87857-rw5mx", "timestamp":"2025-08-13 07:17:17.269606323 +0000 UTC"}, Hostname:"10.0.0.146", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:17:17.315235 containerd[1455]: 2025-08-13 07:17:17.269 [INFO][2882] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:17.315235 containerd[1455]: 2025-08-13 07:17:17.269 [INFO][2882] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:17.315235 containerd[1455]: 2025-08-13 07:17:17.269 [INFO][2882] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.146' Aug 13 07:17:17.315235 containerd[1455]: 2025-08-13 07:17:17.275 [INFO][2882] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b" host="10.0.0.146" Aug 13 07:17:17.315235 containerd[1455]: 2025-08-13 07:17:17.280 [INFO][2882] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.146" Aug 13 07:17:17.315235 containerd[1455]: 2025-08-13 07:17:17.283 [INFO][2882] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="10.0.0.146" Aug 13 07:17:17.315235 containerd[1455]: 2025-08-13 07:17:17.285 [INFO][2882] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="10.0.0.146" Aug 13 07:17:17.315235 containerd[1455]: 2025-08-13 07:17:17.287 [INFO][2882] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="10.0.0.146" Aug 13 07:17:17.315235 containerd[1455]: 2025-08-13 07:17:17.287 [INFO][2882] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b" host="10.0.0.146" Aug 13 07:17:17.315235 containerd[1455]: 2025-08-13 07:17:17.288 [INFO][2882] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b Aug 13 07:17:17.315235 containerd[1455]: 2025-08-13 07:17:17.293 [INFO][2882] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b" host="10.0.0.146" Aug 13 07:17:17.315235 containerd[1455]: 2025-08-13 07:17:17.300 [INFO][2882] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.130/26] block=192.168.125.128/26 handle="k8s-pod-network.f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b" host="10.0.0.146" Aug 13 07:17:17.315235 containerd[1455]: 2025-08-13 07:17:17.300 [INFO][2882] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.130/26] handle="k8s-pod-network.f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b" host="10.0.0.146" Aug 13 07:17:17.315235 containerd[1455]: 2025-08-13 07:17:17.300 [INFO][2882] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:17:17.315235 containerd[1455]: 2025-08-13 07:17:17.300 [INFO][2882] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.130/26] IPv6=[] ContainerID="f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b" HandleID="k8s-pod-network.f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b" Workload="10.0.0.146-k8s-nginx--deployment--7fcdb87857--rw5mx-eth0" Aug 13 07:17:17.316018 containerd[1455]: 2025-08-13 07:17:17.303 [INFO][2868] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b" Namespace="default" Pod="nginx-deployment-7fcdb87857-rw5mx" WorkloadEndpoint="10.0.0.146-k8s-nginx--deployment--7fcdb87857--rw5mx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.146-k8s-nginx--deployment--7fcdb87857--rw5mx-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"16528fd6-de3d-4f21-ac9c-1a1808f49a2d", ResourceVersion:"1396", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.146", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-rw5mx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.125.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali8e04822713b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:17.316018 containerd[1455]: 2025-08-13 07:17:17.303 [INFO][2868] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.130/32] ContainerID="f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b" Namespace="default" Pod="nginx-deployment-7fcdb87857-rw5mx" WorkloadEndpoint="10.0.0.146-k8s-nginx--deployment--7fcdb87857--rw5mx-eth0" Aug 13 07:17:17.316018 containerd[1455]: 2025-08-13 07:17:17.303 [INFO][2868] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e04822713b ContainerID="f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b" Namespace="default" Pod="nginx-deployment-7fcdb87857-rw5mx" WorkloadEndpoint="10.0.0.146-k8s-nginx--deployment--7fcdb87857--rw5mx-eth0" Aug 13 07:17:17.316018 containerd[1455]: 2025-08-13 07:17:17.305 [INFO][2868] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b" Namespace="default" Pod="nginx-deployment-7fcdb87857-rw5mx" WorkloadEndpoint="10.0.0.146-k8s-nginx--deployment--7fcdb87857--rw5mx-eth0" Aug 13 07:17:17.316018 containerd[1455]: 2025-08-13 07:17:17.306 [INFO][2868] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b" Namespace="default" Pod="nginx-deployment-7fcdb87857-rw5mx" 
WorkloadEndpoint="10.0.0.146-k8s-nginx--deployment--7fcdb87857--rw5mx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.146-k8s-nginx--deployment--7fcdb87857--rw5mx-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"16528fd6-de3d-4f21-ac9c-1a1808f49a2d", ResourceVersion:"1396", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.146", ContainerID:"f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b", Pod:"nginx-deployment-7fcdb87857-rw5mx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.125.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali8e04822713b", MAC:"ce:36:86:ee:ad:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:17.316018 containerd[1455]: 2025-08-13 07:17:17.312 [INFO][2868] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b" Namespace="default" Pod="nginx-deployment-7fcdb87857-rw5mx" WorkloadEndpoint="10.0.0.146-k8s-nginx--deployment--7fcdb87857--rw5mx-eth0" Aug 13 07:17:17.332808 containerd[1455]: time="2025-08-13T07:17:17.332699877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:17.332808 containerd[1455]: time="2025-08-13T07:17:17.332773307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:17.332808 containerd[1455]: time="2025-08-13T07:17:17.332789748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:17.332994 containerd[1455]: time="2025-08-13T07:17:17.332934403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:17.357922 systemd[1]: Started cri-containerd-f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b.scope - libcontainer container f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b. 
Aug 13 07:17:17.369569 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:17:17.398198 containerd[1455]: time="2025-08-13T07:17:17.398149292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rw5mx,Uid:16528fd6-de3d-4f21-ac9c-1a1808f49a2d,Namespace:default,Attempt:1,} returns sandbox id \"f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b\"" Aug 13 07:17:17.597259 containerd[1455]: time="2025-08-13T07:17:17.597123611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:17.597948 containerd[1455]: time="2025-08-13T07:17:17.597859981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 07:17:17.599098 containerd[1455]: time="2025-08-13T07:17:17.599065995Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:17.601154 containerd[1455]: time="2025-08-13T07:17:17.601125492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:17.601759 containerd[1455]: time="2025-08-13T07:17:17.601711767Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.796887107s" Aug 13 07:17:17.601802 containerd[1455]: time="2025-08-13T07:17:17.601763746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 07:17:17.603012 containerd[1455]: time="2025-08-13T07:17:17.602968527Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Aug 13 07:17:17.606417 containerd[1455]: time="2025-08-13T07:17:17.606380867Z" level=info msg="CreateContainer within sandbox \"3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 07:17:17.620827 containerd[1455]: time="2025-08-13T07:17:17.620784941Z" level=info msg="CreateContainer within sandbox \"3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bb54b0b47e716c2bc7007015431bbc08ff62eacf7f62fa32391f872ef940d797\"" Aug 13 07:17:17.621282 containerd[1455]: time="2025-08-13T07:17:17.621235017Z" level=info msg="StartContainer for \"bb54b0b47e716c2bc7007015431bbc08ff62eacf7f62fa32391f872ef940d797\"" Aug 13 07:17:17.654882 systemd[1]: Started cri-containerd-bb54b0b47e716c2bc7007015431bbc08ff62eacf7f62fa32391f872ef940d797.scope - libcontainer container bb54b0b47e716c2bc7007015431bbc08ff62eacf7f62fa32391f872ef940d797. 
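
The node-driver-registrar started above exists to advertise the Calico CSI driver's socket to the kubelet; the csi_plugin.go entries just below show the kubelet validating and registering csi.tigera.io at /var/lib/kubelet/plugins/csi.tigera.io/csi.sock. A minimal sketch for checking that endpoint on the node, assuming only the path from those entries:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Endpoint the kubelet reports for csi.tigera.io in the entries below.
        const sock = "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock"

        info, err := os.Stat(sock)
        if err != nil {
            fmt.Println("driver socket not present:", err)
            return
        }
        fmt.Printf("%s exists, unix socket: %v\n", sock, info.Mode()&os.ModeSocket != 0)
    }
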
Aug 13 07:17:17.716167 containerd[1455]: time="2025-08-13T07:17:17.716116156Z" level=info msg="StartContainer for \"bb54b0b47e716c2bc7007015431bbc08ff62eacf7f62fa32391f872ef940d797\" returns successfully" Aug 13 07:17:17.793172 kubelet[1756]: E0813 07:17:17.793117 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:17.973313 kubelet[1756]: I0813 07:17:17.973170 1756 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 07:17:17.973313 kubelet[1756]: I0813 07:17:17.973227 1756 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 07:17:18.093272 kubelet[1756]: I0813 07:17:18.093215 1756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-f2lmn" podStartSLOduration=30.405875562 podStartE2EDuration="34.093197717s" podCreationTimestamp="2025-08-13 07:16:44 +0000 UTC" firstStartedPulling="2025-08-13 07:17:13.915314151 +0000 UTC m=+31.726004931" lastFinishedPulling="2025-08-13 07:17:17.602636306 +0000 UTC m=+35.413327086" observedRunningTime="2025-08-13 07:17:18.093130458 +0000 UTC m=+35.903821249" watchObservedRunningTime="2025-08-13 07:17:18.093197717 +0000 UTC m=+35.903888497" Aug 13 07:17:18.794090 kubelet[1756]: E0813 07:17:18.794006 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:19.103940 systemd-networkd[1399]: cali8e04822713b: Gained IPv6LL Aug 13 07:17:19.777542 update_engine[1441]: I20250813 07:17:19.777436 1441 update_attempter.cc:509] Updating boot flags... Aug 13 07:17:19.795173 kubelet[1756]: E0813 07:17:19.795102 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:19.803962 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2996) Aug 13 07:17:19.842782 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2890) Aug 13 07:17:19.866830 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2890) Aug 13 07:17:20.796282 kubelet[1756]: E0813 07:17:20.796228 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:21.040122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4173032043.mount: Deactivated successfully. 
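
The startup-latency entry for csi-node-driver-f2lmn above is internally consistent: podStartSLOduration (30.405875562s) equals podStartE2EDuration (34.093197717s) minus the image-pull window (firstStartedPulling to lastFinishedPulling, 3.687322155s), i.e. the tracker appears to discount time spent pulling images. The same arithmetic in Go, with the values copied from that entry:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values from the csi-node-driver-f2lmn pod_startup_latency_tracker entry above.
        e2e := 34093197717 * time.Nanosecond // podStartE2EDuration
        firstPull, _ := time.Parse(time.RFC3339Nano, "2025-08-13T07:17:13.915314151Z")
        lastPull, _ := time.Parse(time.RFC3339Nano, "2025-08-13T07:17:17.602636306Z")

        pull := lastPull.Sub(firstPull)
        fmt.Println("image pull window:", pull)     // 3.687322155s
        fmt.Println("e2e minus pull:   ", e2e-pull) // 30.405875562s == podStartSLOduration
    }
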
Aug 13 07:17:21.797294 kubelet[1756]: E0813 07:17:21.797219 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:22.772171 kubelet[1756]: E0813 07:17:22.772115 1756 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:22.797935 kubelet[1756]: E0813 07:17:22.797900 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:23.094178 containerd[1455]: time="2025-08-13T07:17:23.094019606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:23.095093 containerd[1455]: time="2025-08-13T07:17:23.095019298Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73303204" Aug 13 07:17:23.096314 containerd[1455]: time="2025-08-13T07:17:23.096244928Z" level=info msg="ImageCreate event name:\"sha256:f36b8965af58ac17c6fcb27d986b37161ceb26b3d41d3cd53f232b0e16761305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:23.099020 containerd[1455]: time="2025-08-13T07:17:23.098974716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:a6969d434cb816d30787e9f7ab16b632e12dc05a2c8f4dae701d83ef2199c985\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:23.099994 containerd[1455]: time="2025-08-13T07:17:23.099949282Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:f36b8965af58ac17c6fcb27d986b37161ceb26b3d41d3cd53f232b0e16761305\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:a6969d434cb816d30787e9f7ab16b632e12dc05a2c8f4dae701d83ef2199c985\", size \"73303082\" in 5.496906203s" Aug 13 07:17:23.100047 containerd[1455]: time="2025-08-13T07:17:23.100000940Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f36b8965af58ac17c6fcb27d986b37161ceb26b3d41d3cd53f232b0e16761305\"" Aug 13 07:17:23.105083 containerd[1455]: time="2025-08-13T07:17:23.105054657Z" level=info msg="CreateContainer within sandbox \"f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Aug 13 07:17:23.118339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3640710113.mount: Deactivated successfully. Aug 13 07:17:23.119003 containerd[1455]: time="2025-08-13T07:17:23.118961286Z" level=info msg="CreateContainer within sandbox \"f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"072587db298c8e2e21edcc76504ab269136a8b39a35aee2ac259db247f9a07e3\"" Aug 13 07:17:23.119540 containerd[1455]: time="2025-08-13T07:17:23.119510696Z" level=info msg="StartContainer for \"072587db298c8e2e21edcc76504ab269136a8b39a35aee2ac259db247f9a07e3\"" Aug 13 07:17:23.199886 systemd[1]: Started cri-containerd-072587db298c8e2e21edcc76504ab269136a8b39a35aee2ac259db247f9a07e3.scope - libcontainer container 072587db298c8e2e21edcc76504ab269136a8b39a35aee2ac259db247f9a07e3. 
Aug 13 07:17:23.580623 containerd[1455]: time="2025-08-13T07:17:23.580561431Z" level=info msg="StartContainer for \"072587db298c8e2e21edcc76504ab269136a8b39a35aee2ac259db247f9a07e3\" returns successfully" Aug 13 07:17:23.798456 kubelet[1756]: E0813 07:17:23.798390 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:23.888819 kubelet[1756]: I0813 07:17:23.888596 1756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-rw5mx" podStartSLOduration=18.18895196 podStartE2EDuration="23.888573879s" podCreationTimestamp="2025-08-13 07:17:00 +0000 UTC" firstStartedPulling="2025-08-13 07:17:17.401244509 +0000 UTC m=+35.211935289" lastFinishedPulling="2025-08-13 07:17:23.100866428 +0000 UTC m=+40.911557208" observedRunningTime="2025-08-13 07:17:23.88850124 +0000 UTC m=+41.699192031" watchObservedRunningTime="2025-08-13 07:17:23.888573879 +0000 UTC m=+41.699264659" Aug 13 07:17:24.799281 kubelet[1756]: E0813 07:17:24.799234 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:25.799386 kubelet[1756]: E0813 07:17:25.799337 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:26.799821 kubelet[1756]: E0813 07:17:26.799742 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:27.800040 kubelet[1756]: E0813 07:17:27.799968 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:28.800505 kubelet[1756]: E0813 07:17:28.800457 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:29.801083 kubelet[1756]: E0813 07:17:29.801014 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:30.802154 kubelet[1756]: E0813 07:17:30.802098 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:31.040313 systemd[1]: Created slice kubepods-besteffort-poda76ac408_a9a8_4811_aec6_3ba48a1e6c6b.slice - libcontainer container kubepods-besteffort-poda76ac408_a9a8_4811_aec6_3ba48a1e6c6b.slice. 
Aug 13 07:17:31.232024 kubelet[1756]: I0813 07:17:31.231856 1756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/a76ac408-a9a8-4811-aec6-3ba48a1e6c6b-data\") pod \"nfs-server-provisioner-0\" (UID: \"a76ac408-a9a8-4811-aec6-3ba48a1e6c6b\") " pod="default/nfs-server-provisioner-0" Aug 13 07:17:31.232024 kubelet[1756]: I0813 07:17:31.231918 1756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk8hh\" (UniqueName: \"kubernetes.io/projected/a76ac408-a9a8-4811-aec6-3ba48a1e6c6b-kube-api-access-fk8hh\") pod \"nfs-server-provisioner-0\" (UID: \"a76ac408-a9a8-4811-aec6-3ba48a1e6c6b\") " pod="default/nfs-server-provisioner-0" Aug 13 07:17:31.644023 containerd[1455]: time="2025-08-13T07:17:31.643975171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a76ac408-a9a8-4811-aec6-3ba48a1e6c6b,Namespace:default,Attempt:0,}" Aug 13 07:17:31.796177 systemd-networkd[1399]: cali60e51b789ff: Link UP Aug 13 07:17:31.796625 systemd-networkd[1399]: cali60e51b789ff: Gained carrier Aug 13 07:17:31.802729 kubelet[1756]: E0813 07:17:31.802684 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:31.836237 containerd[1455]: 2025-08-13 07:17:31.693 [INFO][3110] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.146-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default a76ac408-a9a8-4811-aec6-3ba48a1e6c6b 1467 0 2025-08-13 07:17:31 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.146 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.146-k8s-nfs--server--provisioner--0-" Aug 13 07:17:31.836237 containerd[1455]: 2025-08-13 07:17:31.693 [INFO][3110] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.146-k8s-nfs--server--provisioner--0-eth0" Aug 13 07:17:31.836237 containerd[1455]: 2025-08-13 07:17:31.718 [INFO][3123] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4" HandleID="k8s-pod-network.49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4" Workload="10.0.0.146-k8s-nfs--server--provisioner--0-eth0" Aug 13 07:17:31.836237 containerd[1455]: 2025-08-13 07:17:31.718 [INFO][3123] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4" HandleID="k8s-pod-network.49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4" Workload="10.0.0.146-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7770), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.146", "pod":"nfs-server-provisioner-0", "timestamp":"2025-08-13 07:17:31.718114588 +0000 UTC"}, Hostname:"10.0.0.146", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:17:31.836237 containerd[1455]: 2025-08-13 07:17:31.718 [INFO][3123] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:31.836237 containerd[1455]: 2025-08-13 07:17:31.718 [INFO][3123] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:31.836237 containerd[1455]: 2025-08-13 07:17:31.718 [INFO][3123] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.146' Aug 13 07:17:31.836237 containerd[1455]: 2025-08-13 07:17:31.726 [INFO][3123] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4" host="10.0.0.146" Aug 13 07:17:31.836237 containerd[1455]: 2025-08-13 07:17:31.730 [INFO][3123] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.146" Aug 13 07:17:31.836237 containerd[1455]: 2025-08-13 07:17:31.734 [INFO][3123] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="10.0.0.146" Aug 13 07:17:31.836237 containerd[1455]: 2025-08-13 07:17:31.736 [INFO][3123] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="10.0.0.146" Aug 13 07:17:31.836237 containerd[1455]: 2025-08-13 07:17:31.738 [INFO][3123] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="10.0.0.146" Aug 13 07:17:31.836237 containerd[1455]: 2025-08-13 07:17:31.738 [INFO][3123] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4" host="10.0.0.146" Aug 13 07:17:31.836237 containerd[1455]: 2025-08-13 07:17:31.739 [INFO][3123] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4 Aug 13 07:17:31.836237 containerd[1455]: 2025-08-13 07:17:31.762 [INFO][3123] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4" host="10.0.0.146" Aug 13 07:17:31.836237 containerd[1455]: 2025-08-13 07:17:31.791 [INFO][3123] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.131/26] block=192.168.125.128/26 handle="k8s-pod-network.49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4" host="10.0.0.146" Aug 13 07:17:31.836237 containerd[1455]: 2025-08-13 07:17:31.791 [INFO][3123] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.131/26] handle="k8s-pod-network.49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4" host="10.0.0.146" Aug 13 07:17:31.836237 containerd[1455]: 2025-08-13 07:17:31.791 [INFO][3123] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:17:31.836237 containerd[1455]: 2025-08-13 07:17:31.791 [INFO][3123] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.131/26] IPv6=[] ContainerID="49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4" HandleID="k8s-pod-network.49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4" Workload="10.0.0.146-k8s-nfs--server--provisioner--0-eth0" Aug 13 07:17:31.836926 containerd[1455]: 2025-08-13 07:17:31.794 [INFO][3110] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.146-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.146-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"a76ac408-a9a8-4811-aec6-3ba48a1e6c6b", ResourceVersion:"1467", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.146", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.125.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:31.836926 containerd[1455]: 2025-08-13 07:17:31.794 [INFO][3110] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.131/32] ContainerID="49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.146-k8s-nfs--server--provisioner--0-eth0" Aug 13 07:17:31.836926 containerd[1455]: 2025-08-13 07:17:31.794 [INFO][3110] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.146-k8s-nfs--server--provisioner--0-eth0" Aug 13 07:17:31.836926 containerd[1455]: 2025-08-13 07:17:31.797 [INFO][3110] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.146-k8s-nfs--server--provisioner--0-eth0" Aug 13 07:17:31.837080 containerd[1455]: 2025-08-13 07:17:31.798 [INFO][3110] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.146-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.146-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"a76ac408-a9a8-4811-aec6-3ba48a1e6c6b", ResourceVersion:"1467", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.146", ContainerID:"49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.125.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"4a:2c:35:77:a3:18", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:31.837080 containerd[1455]: 2025-08-13 07:17:31.833 [INFO][3110] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.146-k8s-nfs--server--provisioner--0-eth0" Aug 13 07:17:31.883883 containerd[1455]: time="2025-08-13T07:17:31.883709280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:31.883883 containerd[1455]: time="2025-08-13T07:17:31.883832772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:31.883883 containerd[1455]: time="2025-08-13T07:17:31.883845748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:31.884087 containerd[1455]: time="2025-08-13T07:17:31.883938312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:31.905926 systemd[1]: Started cri-containerd-49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4.scope - libcontainer container 49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4. 
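[Annotation] The WorkloadEndpoint dump above prints the provisioner's ports in hexadecimal (Port:0x801 and so on). Decoding them, as in the short illustrative snippet below, gives the usual NFS service ports advertised by the nfs-server-provisioner chart: 2049 (nfs), 32803 (nlockmgr), 20048 (mountd), 875 (rquotad), 111 (rpcbind) and 662 (statd), each listed once for TCP and once for UDP.

    # Illustrative only: decode the hex port values from the endpoint dump above.
    hex_ports = {"nfs": 0x801, "nlockmgr": 0x8023, "mountd": 0x4e50,
                 "rquotad": 0x36b, "rpcbind": 0x6f, "statd": 0x296}
    for name, port in hex_ports.items():
        print(f"{name}: {port}")   # 2049, 32803, 20048, 875, 111, 662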
Aug 13 07:17:31.917641 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:17:31.941310 containerd[1455]: time="2025-08-13T07:17:31.941267698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a76ac408-a9a8-4811-aec6-3ba48a1e6c6b,Namespace:default,Attempt:0,} returns sandbox id \"49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4\"" Aug 13 07:17:31.943167 containerd[1455]: time="2025-08-13T07:17:31.943139689Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Aug 13 07:17:32.803571 kubelet[1756]: E0813 07:17:32.803498 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:33.759941 systemd-networkd[1399]: cali60e51b789ff: Gained IPv6LL Aug 13 07:17:33.804239 kubelet[1756]: E0813 07:17:33.804166 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:34.519637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1522220605.mount: Deactivated successfully. Aug 13 07:17:34.805288 kubelet[1756]: E0813 07:17:34.805159 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:35.805767 kubelet[1756]: E0813 07:17:35.805719 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:36.344131 containerd[1455]: time="2025-08-13T07:17:36.344057374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:36.344883 containerd[1455]: time="2025-08-13T07:17:36.344847612Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Aug 13 07:17:36.346136 containerd[1455]: time="2025-08-13T07:17:36.346103917Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:36.348848 containerd[1455]: time="2025-08-13T07:17:36.348806877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:36.349915 containerd[1455]: time="2025-08-13T07:17:36.349881741Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.406704161s" Aug 13 07:17:36.349969 containerd[1455]: time="2025-08-13T07:17:36.349914111Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Aug 13 07:17:36.355255 containerd[1455]: time="2025-08-13T07:17:36.355162494Z" level=info msg="CreateContainer within sandbox \"49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4\" for container 
&ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Aug 13 07:17:36.370431 containerd[1455]: time="2025-08-13T07:17:36.370388563Z" level=info msg="CreateContainer within sandbox \"49e7bdfc6659feb111060f3e2bd8d6a0b13b8f088157868aa0322e1aceab69c4\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"3eb5741126806174df01b284c8cfe6ae818f642e1c26633934c053b17dc072c7\"" Aug 13 07:17:36.371296 containerd[1455]: time="2025-08-13T07:17:36.371260665Z" level=info msg="StartContainer for \"3eb5741126806174df01b284c8cfe6ae818f642e1c26633934c053b17dc072c7\"" Aug 13 07:17:36.402909 systemd[1]: Started cri-containerd-3eb5741126806174df01b284c8cfe6ae818f642e1c26633934c053b17dc072c7.scope - libcontainer container 3eb5741126806174df01b284c8cfe6ae818f642e1c26633934c053b17dc072c7. Aug 13 07:17:36.432286 containerd[1455]: time="2025-08-13T07:17:36.432150462Z" level=info msg="StartContainer for \"3eb5741126806174df01b284c8cfe6ae818f642e1c26633934c053b17dc072c7\" returns successfully" Aug 13 07:17:36.806585 kubelet[1756]: E0813 07:17:36.806491 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:37.806709 kubelet[1756]: E0813 07:17:37.806657 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:38.035933 kubelet[1756]: I0813 07:17:38.035863 1756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.627698328 podStartE2EDuration="7.035838415s" podCreationTimestamp="2025-08-13 07:17:31 +0000 UTC" firstStartedPulling="2025-08-13 07:17:31.9427432 +0000 UTC m=+49.753433980" lastFinishedPulling="2025-08-13 07:17:36.350883287 +0000 UTC m=+54.161574067" observedRunningTime="2025-08-13 07:17:36.917504525 +0000 UTC m=+54.728195315" watchObservedRunningTime="2025-08-13 07:17:38.035838415 +0000 UTC m=+55.846529195" Aug 13 07:17:38.807826 kubelet[1756]: E0813 07:17:38.807723 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:39.808513 kubelet[1756]: E0813 07:17:39.808447 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:40.809467 kubelet[1756]: E0813 07:17:40.809393 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:41.692086 systemd[1]: Created slice kubepods-besteffort-podf0e06e52_bbb4_41b3_a53f_e387d45aebb6.slice - libcontainer container kubepods-besteffort-podf0e06e52_bbb4_41b3_a53f_e387d45aebb6.slice. 
Aug 13 07:17:41.789262 kubelet[1756]: I0813 07:17:41.789158 1756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2532ac00-6f76-4d4a-8a0c-540344f6fa4b\" (UniqueName: \"kubernetes.io/nfs/f0e06e52-bbb4-41b3-a53f-e387d45aebb6-pvc-2532ac00-6f76-4d4a-8a0c-540344f6fa4b\") pod \"test-pod-1\" (UID: \"f0e06e52-bbb4-41b3-a53f-e387d45aebb6\") " pod="default/test-pod-1" Aug 13 07:17:41.789262 kubelet[1756]: I0813 07:17:41.789238 1756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcfvh\" (UniqueName: \"kubernetes.io/projected/f0e06e52-bbb4-41b3-a53f-e387d45aebb6-kube-api-access-zcfvh\") pod \"test-pod-1\" (UID: \"f0e06e52-bbb4-41b3-a53f-e387d45aebb6\") " pod="default/test-pod-1" Aug 13 07:17:41.810570 kubelet[1756]: E0813 07:17:41.810411 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:41.918791 kernel: FS-Cache: Loaded Aug 13 07:17:41.997071 kernel: RPC: Registered named UNIX socket transport module. Aug 13 07:17:41.997269 kernel: RPC: Registered udp transport module. Aug 13 07:17:41.997316 kernel: RPC: Registered tcp transport module. Aug 13 07:17:41.997368 kernel: RPC: Registered tcp-with-tls transport module. Aug 13 07:17:41.997858 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Aug 13 07:17:42.315948 kernel: NFS: Registering the id_resolver key type Aug 13 07:17:42.316132 kernel: Key type id_resolver registered Aug 13 07:17:42.316182 kernel: Key type id_legacy registered Aug 13 07:17:42.346177 nfsidmap[3327]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Aug 13 07:17:42.351593 nfsidmap[3330]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Aug 13 07:17:42.597934 containerd[1455]: time="2025-08-13T07:17:42.597667747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f0e06e52-bbb4-41b3-a53f-e387d45aebb6,Namespace:default,Attempt:0,}" Aug 13 07:17:42.772042 kubelet[1756]: E0813 07:17:42.771970 1756 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:42.797626 containerd[1455]: time="2025-08-13T07:17:42.797567659Z" level=info msg="StopPodSandbox for \"930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9\"" Aug 13 07:17:42.812939 kubelet[1756]: E0813 07:17:42.812864 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:42.904098 systemd-networkd[1399]: cali5ec59c6bf6e: Link UP Aug 13 07:17:42.904712 systemd-networkd[1399]: cali5ec59c6bf6e: Gained carrier Aug 13 07:17:42.915059 containerd[1455]: 2025-08-13 07:17:42.836 [WARNING][3356] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.146-k8s-csi--node--driver--f2lmn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6f3ac557-6d90-4338-ba26-8876dbe35bc7", ResourceVersion:"1407", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.146", ContainerID:"3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d", Pod:"csi-node-driver-f2lmn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0dd1eb88340", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:42.915059 containerd[1455]: 2025-08-13 07:17:42.836 [INFO][3356] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" Aug 13 07:17:42.915059 containerd[1455]: 2025-08-13 07:17:42.836 [INFO][3356] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" iface="eth0" netns="" Aug 13 07:17:42.915059 containerd[1455]: 2025-08-13 07:17:42.836 [INFO][3356] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" Aug 13 07:17:42.915059 containerd[1455]: 2025-08-13 07:17:42.836 [INFO][3356] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" Aug 13 07:17:42.915059 containerd[1455]: 2025-08-13 07:17:42.861 [INFO][3369] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" HandleID="k8s-pod-network.930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" Workload="10.0.0.146-k8s-csi--node--driver--f2lmn-eth0" Aug 13 07:17:42.915059 containerd[1455]: 2025-08-13 07:17:42.862 [INFO][3369] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:42.915059 containerd[1455]: 2025-08-13 07:17:42.896 [INFO][3369] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:42.915059 containerd[1455]: 2025-08-13 07:17:42.907 [WARNING][3369] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" HandleID="k8s-pod-network.930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" Workload="10.0.0.146-k8s-csi--node--driver--f2lmn-eth0" Aug 13 07:17:42.915059 containerd[1455]: 2025-08-13 07:17:42.907 [INFO][3369] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" HandleID="k8s-pod-network.930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" Workload="10.0.0.146-k8s-csi--node--driver--f2lmn-eth0" Aug 13 07:17:42.915059 containerd[1455]: 2025-08-13 07:17:42.909 [INFO][3369] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:42.915059 containerd[1455]: 2025-08-13 07:17:42.912 [INFO][3356] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" Aug 13 07:17:42.915603 containerd[1455]: time="2025-08-13T07:17:42.915106273Z" level=info msg="TearDown network for sandbox \"930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9\" successfully" Aug 13 07:17:42.915603 containerd[1455]: time="2025-08-13T07:17:42.915143613Z" level=info msg="StopPodSandbox for \"930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9\" returns successfully" Aug 13 07:17:42.915962 containerd[1455]: time="2025-08-13T07:17:42.915926104Z" level=info msg="RemovePodSandbox for \"930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9\"" Aug 13 07:17:42.916275 containerd[1455]: time="2025-08-13T07:17:42.916251306Z" level=info msg="Forcibly stopping sandbox \"930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9\"" Aug 13 07:17:42.916441 containerd[1455]: 2025-08-13 07:17:42.822 [INFO][3334] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.146-k8s-test--pod--1-eth0 default f0e06e52-bbb4-41b3-a53f-e387d45aebb6 1527 0 2025-08-13 07:17:31 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.146 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.146-k8s-test--pod--1-" Aug 13 07:17:42.916441 containerd[1455]: 2025-08-13 07:17:42.822 [INFO][3334] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.146-k8s-test--pod--1-eth0" Aug 13 07:17:42.916441 containerd[1455]: 2025-08-13 07:17:42.861 [INFO][3365] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07" HandleID="k8s-pod-network.942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07" Workload="10.0.0.146-k8s-test--pod--1-eth0" Aug 13 07:17:42.916441 containerd[1455]: 2025-08-13 07:17:42.861 [INFO][3365] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07" HandleID="k8s-pod-network.942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07" Workload="10.0.0.146-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001334f0), 
Attrs:map[string]string{"namespace":"default", "node":"10.0.0.146", "pod":"test-pod-1", "timestamp":"2025-08-13 07:17:42.861686205 +0000 UTC"}, Hostname:"10.0.0.146", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:17:42.916441 containerd[1455]: 2025-08-13 07:17:42.861 [INFO][3365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:42.916441 containerd[1455]: 2025-08-13 07:17:42.861 [INFO][3365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:42.916441 containerd[1455]: 2025-08-13 07:17:42.862 [INFO][3365] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.146' Aug 13 07:17:42.916441 containerd[1455]: 2025-08-13 07:17:42.869 [INFO][3365] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07" host="10.0.0.146" Aug 13 07:17:42.916441 containerd[1455]: 2025-08-13 07:17:42.875 [INFO][3365] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.146" Aug 13 07:17:42.916441 containerd[1455]: 2025-08-13 07:17:42.880 [INFO][3365] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="10.0.0.146" Aug 13 07:17:42.916441 containerd[1455]: 2025-08-13 07:17:42.882 [INFO][3365] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="10.0.0.146" Aug 13 07:17:42.916441 containerd[1455]: 2025-08-13 07:17:42.884 [INFO][3365] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="10.0.0.146" Aug 13 07:17:42.916441 containerd[1455]: 2025-08-13 07:17:42.884 [INFO][3365] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07" host="10.0.0.146" Aug 13 07:17:42.916441 containerd[1455]: 2025-08-13 07:17:42.886 [INFO][3365] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07 Aug 13 07:17:42.916441 containerd[1455]: 2025-08-13 07:17:42.890 [INFO][3365] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07" host="10.0.0.146" Aug 13 07:17:42.916441 containerd[1455]: 2025-08-13 07:17:42.896 [INFO][3365] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.132/26] block=192.168.125.128/26 handle="k8s-pod-network.942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07" host="10.0.0.146" Aug 13 07:17:42.916441 containerd[1455]: 2025-08-13 07:17:42.896 [INFO][3365] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.132/26] handle="k8s-pod-network.942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07" host="10.0.0.146" Aug 13 07:17:42.916441 containerd[1455]: 2025-08-13 07:17:42.896 [INFO][3365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:17:42.916441 containerd[1455]: 2025-08-13 07:17:42.896 [INFO][3365] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.132/26] IPv6=[] ContainerID="942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07" HandleID="k8s-pod-network.942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07" Workload="10.0.0.146-k8s-test--pod--1-eth0" Aug 13 07:17:42.916441 containerd[1455]: 2025-08-13 07:17:42.899 [INFO][3334] cni-plugin/k8s.go 418: Populated endpoint ContainerID="942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.146-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.146-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"f0e06e52-bbb4-41b3-a53f-e387d45aebb6", ResourceVersion:"1527", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.146", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.125.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:42.917582 containerd[1455]: 2025-08-13 07:17:42.899 [INFO][3334] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.132/32] ContainerID="942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.146-k8s-test--pod--1-eth0" Aug 13 07:17:42.917582 containerd[1455]: 2025-08-13 07:17:42.899 [INFO][3334] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.146-k8s-test--pod--1-eth0" Aug 13 07:17:42.917582 containerd[1455]: 2025-08-13 07:17:42.905 [INFO][3334] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.146-k8s-test--pod--1-eth0" Aug 13 07:17:42.917582 containerd[1455]: 2025-08-13 07:17:42.905 [INFO][3334] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.146-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.146-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"f0e06e52-bbb4-41b3-a53f-e387d45aebb6", ResourceVersion:"1527", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 
31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.146", ContainerID:"942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.125.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"1a:b8:8f:65:aa:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:42.917582 containerd[1455]: 2025-08-13 07:17:42.913 [INFO][3334] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.146-k8s-test--pod--1-eth0" Aug 13 07:17:42.940647 containerd[1455]: time="2025-08-13T07:17:42.940296089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:42.940647 containerd[1455]: time="2025-08-13T07:17:42.940381248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:42.940647 containerd[1455]: time="2025-08-13T07:17:42.940406445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:42.940647 containerd[1455]: time="2025-08-13T07:17:42.940557590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:42.972020 systemd[1]: Started cri-containerd-942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07.scope - libcontainer container 942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07. Aug 13 07:17:42.989202 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:17:43.003933 containerd[1455]: 2025-08-13 07:17:42.963 [WARNING][3399] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.146-k8s-csi--node--driver--f2lmn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6f3ac557-6d90-4338-ba26-8876dbe35bc7", ResourceVersion:"1407", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.146", ContainerID:"3e59448d93796d7325df8226d5dd8c60c0e36b59adc541f5d116093ed0a04e8d", Pod:"csi-node-driver-f2lmn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0dd1eb88340", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:43.003933 containerd[1455]: 2025-08-13 07:17:42.963 [INFO][3399] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" Aug 13 07:17:43.003933 containerd[1455]: 2025-08-13 07:17:42.963 [INFO][3399] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" iface="eth0" netns="" Aug 13 07:17:43.003933 containerd[1455]: 2025-08-13 07:17:42.963 [INFO][3399] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" Aug 13 07:17:43.003933 containerd[1455]: 2025-08-13 07:17:42.963 [INFO][3399] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" Aug 13 07:17:43.003933 containerd[1455]: 2025-08-13 07:17:42.989 [INFO][3437] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" HandleID="k8s-pod-network.930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" Workload="10.0.0.146-k8s-csi--node--driver--f2lmn-eth0" Aug 13 07:17:43.003933 containerd[1455]: 2025-08-13 07:17:42.989 [INFO][3437] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:43.003933 containerd[1455]: 2025-08-13 07:17:42.989 [INFO][3437] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:43.003933 containerd[1455]: 2025-08-13 07:17:42.997 [WARNING][3437] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" HandleID="k8s-pod-network.930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" Workload="10.0.0.146-k8s-csi--node--driver--f2lmn-eth0" Aug 13 07:17:43.003933 containerd[1455]: 2025-08-13 07:17:42.997 [INFO][3437] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" HandleID="k8s-pod-network.930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" Workload="10.0.0.146-k8s-csi--node--driver--f2lmn-eth0" Aug 13 07:17:43.003933 containerd[1455]: 2025-08-13 07:17:42.998 [INFO][3437] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:43.003933 containerd[1455]: 2025-08-13 07:17:43.001 [INFO][3399] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9" Aug 13 07:17:43.004414 containerd[1455]: time="2025-08-13T07:17:43.003993661Z" level=info msg="TearDown network for sandbox \"930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9\" successfully" Aug 13 07:17:43.013213 containerd[1455]: time="2025-08-13T07:17:43.013135438Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:17:43.013362 containerd[1455]: time="2025-08-13T07:17:43.013225086Z" level=info msg="RemovePodSandbox \"930a9eacfc507829d8533a318d8922feebc93ccb64a9844106dc61392de4c8f9\" returns successfully" Aug 13 07:17:43.013964 containerd[1455]: time="2025-08-13T07:17:43.013925553Z" level=info msg="StopPodSandbox for \"77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7\"" Aug 13 07:17:43.020880 containerd[1455]: time="2025-08-13T07:17:43.020826457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f0e06e52-bbb4-41b3-a53f-e387d45aebb6,Namespace:default,Attempt:0,} returns sandbox id \"942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07\"" Aug 13 07:17:43.022411 containerd[1455]: time="2025-08-13T07:17:43.022386150Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Aug 13 07:17:43.084072 containerd[1455]: 2025-08-13 07:17:43.049 [WARNING][3468] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.146-k8s-nginx--deployment--7fcdb87857--rw5mx-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"16528fd6-de3d-4f21-ac9c-1a1808f49a2d", ResourceVersion:"1423", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.146", ContainerID:"f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b", Pod:"nginx-deployment-7fcdb87857-rw5mx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.125.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali8e04822713b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:43.084072 containerd[1455]: 2025-08-13 07:17:43.049 [INFO][3468] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" Aug 13 07:17:43.084072 containerd[1455]: 2025-08-13 07:17:43.049 [INFO][3468] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" iface="eth0" netns="" Aug 13 07:17:43.084072 containerd[1455]: 2025-08-13 07:17:43.049 [INFO][3468] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" Aug 13 07:17:43.084072 containerd[1455]: 2025-08-13 07:17:43.049 [INFO][3468] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" Aug 13 07:17:43.084072 containerd[1455]: 2025-08-13 07:17:43.069 [INFO][3477] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" HandleID="k8s-pod-network.77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" Workload="10.0.0.146-k8s-nginx--deployment--7fcdb87857--rw5mx-eth0" Aug 13 07:17:43.084072 containerd[1455]: 2025-08-13 07:17:43.070 [INFO][3477] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:43.084072 containerd[1455]: 2025-08-13 07:17:43.070 [INFO][3477] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:43.084072 containerd[1455]: 2025-08-13 07:17:43.076 [WARNING][3477] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" HandleID="k8s-pod-network.77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" Workload="10.0.0.146-k8s-nginx--deployment--7fcdb87857--rw5mx-eth0" Aug 13 07:17:43.084072 containerd[1455]: 2025-08-13 07:17:43.076 [INFO][3477] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" HandleID="k8s-pod-network.77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" Workload="10.0.0.146-k8s-nginx--deployment--7fcdb87857--rw5mx-eth0" Aug 13 07:17:43.084072 containerd[1455]: 2025-08-13 07:17:43.078 [INFO][3477] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:43.084072 containerd[1455]: 2025-08-13 07:17:43.081 [INFO][3468] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" Aug 13 07:17:43.084571 containerd[1455]: time="2025-08-13T07:17:43.084116873Z" level=info msg="TearDown network for sandbox \"77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7\" successfully" Aug 13 07:17:43.084571 containerd[1455]: time="2025-08-13T07:17:43.084154485Z" level=info msg="StopPodSandbox for \"77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7\" returns successfully" Aug 13 07:17:43.084841 containerd[1455]: time="2025-08-13T07:17:43.084801611Z" level=info msg="RemovePodSandbox for \"77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7\"" Aug 13 07:17:43.084887 containerd[1455]: time="2025-08-13T07:17:43.084850693Z" level=info msg="Forcibly stopping sandbox \"77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7\"" Aug 13 07:17:43.171957 containerd[1455]: 2025-08-13 07:17:43.129 [WARNING][3494] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.146-k8s-nginx--deployment--7fcdb87857--rw5mx-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"16528fd6-de3d-4f21-ac9c-1a1808f49a2d", ResourceVersion:"1423", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.146", ContainerID:"f50f1be09d3ee84ffa46c1e601bf5809754c47092b301a54d7cabd8e0543d16b", Pod:"nginx-deployment-7fcdb87857-rw5mx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.125.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali8e04822713b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:43.171957 containerd[1455]: 2025-08-13 07:17:43.129 [INFO][3494] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" Aug 13 07:17:43.171957 containerd[1455]: 2025-08-13 07:17:43.129 [INFO][3494] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" iface="eth0" netns="" Aug 13 07:17:43.171957 containerd[1455]: 2025-08-13 07:17:43.129 [INFO][3494] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" Aug 13 07:17:43.171957 containerd[1455]: 2025-08-13 07:17:43.129 [INFO][3494] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" Aug 13 07:17:43.171957 containerd[1455]: 2025-08-13 07:17:43.157 [INFO][3502] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" HandleID="k8s-pod-network.77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" Workload="10.0.0.146-k8s-nginx--deployment--7fcdb87857--rw5mx-eth0" Aug 13 07:17:43.171957 containerd[1455]: 2025-08-13 07:17:43.158 [INFO][3502] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:43.171957 containerd[1455]: 2025-08-13 07:17:43.158 [INFO][3502] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:43.171957 containerd[1455]: 2025-08-13 07:17:43.165 [WARNING][3502] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" HandleID="k8s-pod-network.77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" Workload="10.0.0.146-k8s-nginx--deployment--7fcdb87857--rw5mx-eth0" Aug 13 07:17:43.171957 containerd[1455]: 2025-08-13 07:17:43.165 [INFO][3502] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" HandleID="k8s-pod-network.77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" Workload="10.0.0.146-k8s-nginx--deployment--7fcdb87857--rw5mx-eth0" Aug 13 07:17:43.171957 containerd[1455]: 2025-08-13 07:17:43.166 [INFO][3502] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:43.171957 containerd[1455]: 2025-08-13 07:17:43.169 [INFO][3494] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7" Aug 13 07:17:43.171957 containerd[1455]: time="2025-08-13T07:17:43.171906540Z" level=info msg="TearDown network for sandbox \"77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7\" successfully" Aug 13 07:17:43.176330 containerd[1455]: time="2025-08-13T07:17:43.176270253Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:17:43.176330 containerd[1455]: time="2025-08-13T07:17:43.176328693Z" level=info msg="RemovePodSandbox \"77f8f457e75d919367462370770a0c839aa390ab526f9b5caced21289366f9b7\" returns successfully" Aug 13 07:17:43.473312 containerd[1455]: time="2025-08-13T07:17:43.473004397Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:43.473963 containerd[1455]: time="2025-08-13T07:17:43.473901685Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Aug 13 07:17:43.478341 containerd[1455]: time="2025-08-13T07:17:43.478294143Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:f36b8965af58ac17c6fcb27d986b37161ceb26b3d41d3cd53f232b0e16761305\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:a6969d434cb816d30787e9f7ab16b632e12dc05a2c8f4dae701d83ef2199c985\", size \"73303082\" in 455.866224ms" Aug 13 07:17:43.478430 containerd[1455]: time="2025-08-13T07:17:43.478340439Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f36b8965af58ac17c6fcb27d986b37161ceb26b3d41d3cd53f232b0e16761305\"" Aug 13 07:17:43.484337 containerd[1455]: time="2025-08-13T07:17:43.484298571Z" level=info msg="CreateContainer within sandbox \"942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07\" for container &ContainerMetadata{Name:test,Attempt:0,}" Aug 13 07:17:43.496606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2482208610.mount: Deactivated successfully. 
Aug 13 07:17:43.498123 containerd[1455]: time="2025-08-13T07:17:43.498070101Z" level=info msg="CreateContainer within sandbox \"942a8cac754d81fd3b73d5d3d252f498f92d699ce585a710d6819d6cdfbbba07\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"efe29e7ca139a3c66be48d2635ba67e3704b7cfa62db74957836a7da7e26e0fd\"" Aug 13 07:17:43.498981 containerd[1455]: time="2025-08-13T07:17:43.498921933Z" level=info msg="StartContainer for \"efe29e7ca139a3c66be48d2635ba67e3704b7cfa62db74957836a7da7e26e0fd\"" Aug 13 07:17:43.530004 systemd[1]: Started cri-containerd-efe29e7ca139a3c66be48d2635ba67e3704b7cfa62db74957836a7da7e26e0fd.scope - libcontainer container efe29e7ca139a3c66be48d2635ba67e3704b7cfa62db74957836a7da7e26e0fd. Aug 13 07:17:43.561717 containerd[1455]: time="2025-08-13T07:17:43.561649923Z" level=info msg="StartContainer for \"efe29e7ca139a3c66be48d2635ba67e3704b7cfa62db74957836a7da7e26e0fd\" returns successfully" Aug 13 07:17:43.813916 kubelet[1756]: E0813 07:17:43.813866 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:44.050766 kubelet[1756]: I0813 07:17:44.050686 1756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=12.593410538 podStartE2EDuration="13.050667637s" podCreationTimestamp="2025-08-13 07:17:31 +0000 UTC" firstStartedPulling="2025-08-13 07:17:43.022019851 +0000 UTC m=+60.832710631" lastFinishedPulling="2025-08-13 07:17:43.47927695 +0000 UTC m=+61.289967730" observedRunningTime="2025-08-13 07:17:44.050236577 +0000 UTC m=+61.860927367" watchObservedRunningTime="2025-08-13 07:17:44.050667637 +0000 UTC m=+61.861358418" Aug 13 07:17:44.511966 systemd-networkd[1399]: cali5ec59c6bf6e: Gained IPv6LL Aug 13 07:17:44.814632 kubelet[1756]: E0813 07:17:44.814484 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:45.815737 kubelet[1756]: E0813 07:17:45.815670 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:46.816147 kubelet[1756]: E0813 07:17:46.816069 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:47.816335 kubelet[1756]: E0813 07:17:47.816277 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:48.817230 kubelet[1756]: E0813 07:17:48.817138 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 07:17:49.817494 kubelet[1756]: E0813 07:17:49.817431 1756 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
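[Annotation] The log closes with test-pod-1 starting against the freshly provisioned NFS volume. Its ghcr.io/flatcar/nginx:latest pull at 07:17:43 reports only 61 bytes read and completes in 455.866224 ms: the digest is the same one already pulled at 07:17:23 for the nginx deployment, so only the manifest had to be re-resolved. The short illustrative check below, again using only timestamps from the log, shows the test-pod-1 latency figures are consistent with that near-instant pull.

    # Illustrative only: the second nginx pull was effectively a cache hit, and the
    # latency-tracker figures for test-pod-1 line up with it.
    pull = 43.47927695 - 43.022019851   # ~0.4573 s spent "pulling" the already-present image
    e2e  = 13.050667637                 # podStartE2EDuration reported by the kubelet
    slo  = e2e - pull                   # ~= 12.5934 s, matching podStartSLOduration
    print(round(pull, 9), round(slo, 9))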