May 16 00:09:49.060027 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 22:19:35 -00 2025 May 16 00:09:49.060057 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=3eac1ac065bd62ee8513964addbc130593421d288f32dda9b1fb7c667f95e96b May 16 00:09:49.060068 kernel: BIOS-provided physical RAM map: May 16 00:09:49.060075 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 16 00:09:49.060083 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 16 00:09:49.060090 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 16 00:09:49.060100 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable May 16 00:09:49.060107 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved May 16 00:09:49.060118 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 16 00:09:49.060125 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 16 00:09:49.060133 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 16 00:09:49.060141 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 16 00:09:49.060148 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 16 00:09:49.060155 kernel: NX (Execute Disable) protection: active May 16 00:09:49.060167 kernel: APIC: Static calls initialized May 16 00:09:49.060175 kernel: SMBIOS 3.0.0 present. 
May 16 00:09:49.060182 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 May 16 00:09:49.060190 kernel: Hypervisor detected: KVM May 16 00:09:49.060199 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 16 00:09:49.060207 kernel: kvm-clock: using sched offset of 3179671712 cycles May 16 00:09:49.060217 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 16 00:09:49.060226 kernel: tsc: Detected 2495.310 MHz processor May 16 00:09:49.060235 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 16 00:09:49.060247 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 16 00:09:49.060256 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 May 16 00:09:49.060265 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 16 00:09:49.060273 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 16 00:09:49.060281 kernel: Using GB pages for direct mapping May 16 00:09:49.060289 kernel: ACPI: Early table checksum verification disabled May 16 00:09:49.060297 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS ) May 16 00:09:49.060305 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:09:49.060315 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:09:49.060326 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:09:49.060335 kernel: ACPI: FACS 0x000000007CFE0000 000040 May 16 00:09:49.062786 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:09:49.062819 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:09:49.062829 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:09:49.062839 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:09:49.062848 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540] May 16 00:09:49.062858 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c] May 16 00:09:49.062884 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] May 16 00:09:49.062894 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0] May 16 00:09:49.062904 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8] May 16 00:09:49.062914 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634] May 16 00:09:49.062924 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c] May 16 00:09:49.062935 kernel: No NUMA configuration found May 16 00:09:49.062948 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] May 16 00:09:49.062959 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff] May 16 00:09:49.062970 kernel: Zone ranges: May 16 00:09:49.062980 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 16 00:09:49.062990 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] May 16 00:09:49.062999 kernel: Normal empty May 16 00:09:49.063009 kernel: Movable zone start for each node May 16 00:09:49.063017 kernel: Early memory node ranges May 16 00:09:49.063027 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 16 00:09:49.063059 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff] May 16 00:09:49.063073 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000007cfdbfff] May 16 00:09:49.063083 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 16 00:09:49.063093 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 16 00:09:49.063102 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges May 16 00:09:49.063111 kernel: ACPI: PM-Timer IO Port: 0x608 May 16 00:09:49.063121 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 16 00:09:49.063130 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 16 00:09:49.063139 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 16 00:09:49.063147 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 16 00:09:49.063159 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 16 00:09:49.063168 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 16 00:09:49.063177 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 16 00:09:49.063192 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 16 00:09:49.063201 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 16 00:09:49.063209 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 16 00:09:49.063217 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 16 00:09:49.063226 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 16 00:09:49.063234 kernel: Booting paravirtualized kernel on KVM May 16 00:09:49.063246 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 16 00:09:49.063256 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 16 00:09:49.063266 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 May 16 00:09:49.063277 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 16 00:09:49.063287 kernel: pcpu-alloc: [0] 0 1 May 16 00:09:49.063296 kernel: kvm-guest: PV spinlocks disabled, no host support May 16 00:09:49.063308 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=3eac1ac065bd62ee8513964addbc130593421d288f32dda9b1fb7c667f95e96b May 16 00:09:49.063319 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 16 00:09:49.063332 kernel: random: crng init done May 16 00:09:49.063367 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 16 00:09:49.063379 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 16 00:09:49.063389 kernel: Fallback order for Node 0: 0 May 16 00:09:49.063400 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 503708 May 16 00:09:49.063410 kernel: Policy zone: DMA32 May 16 00:09:49.063420 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 16 00:09:49.063431 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2295K rwdata, 22752K rodata, 42988K init, 2204K bss, 125152K reserved, 0K cma-reserved) May 16 00:09:49.063440 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 16 00:09:49.063454 kernel: ftrace: allocating 37950 entries in 149 pages May 16 00:09:49.063464 kernel: ftrace: allocated 149 pages with 4 groups May 16 00:09:49.063474 kernel: Dynamic Preempt: voluntary May 16 00:09:49.063484 kernel: rcu: Preemptible hierarchical RCU implementation. May 16 00:09:49.063495 kernel: rcu: RCU event tracing is enabled. May 16 00:09:49.063506 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 16 00:09:49.063516 kernel: Trampoline variant of Tasks RCU enabled. May 16 00:09:49.063526 kernel: Rude variant of Tasks RCU enabled. May 16 00:09:49.063536 kernel: Tracing variant of Tasks RCU enabled. May 16 00:09:49.063546 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 16 00:09:49.063560 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 16 00:09:49.063571 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 16 00:09:49.063584 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 16 00:09:49.063595 kernel: Console: colour VGA+ 80x25 May 16 00:09:49.063605 kernel: printk: console [tty0] enabled May 16 00:09:49.063614 kernel: printk: console [ttyS0] enabled May 16 00:09:49.063623 kernel: ACPI: Core revision 20230628 May 16 00:09:49.063633 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 16 00:09:49.063643 kernel: APIC: Switch to symmetric I/O mode setup May 16 00:09:49.063657 kernel: x2apic enabled May 16 00:09:49.063666 kernel: APIC: Switched APIC routing to: physical x2apic May 16 00:09:49.063676 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 16 00:09:49.063686 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 16 00:09:49.063696 kernel: Calibrating delay loop (skipped) preset value.. 
4990.62 BogoMIPS (lpj=2495310) May 16 00:09:49.063706 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 16 00:09:49.063716 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 16 00:09:49.063727 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 16 00:09:49.063748 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 16 00:09:49.063758 kernel: Spectre V2 : Mitigation: Retpolines May 16 00:09:49.063769 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 16 00:09:49.063780 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 16 00:09:49.063793 kernel: RETBleed: Mitigation: untrained return thunk May 16 00:09:49.063817 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 16 00:09:49.063827 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 16 00:09:49.063837 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 16 00:09:49.063848 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 16 00:09:49.063863 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 16 00:09:49.063873 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 16 00:09:49.063883 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 16 00:09:49.063894 kernel: Freeing SMP alternatives memory: 32K May 16 00:09:49.063904 kernel: pid_max: default: 32768 minimum: 301 May 16 00:09:49.063914 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 16 00:09:49.063924 kernel: landlock: Up and running. May 16 00:09:49.063935 kernel: SELinux: Initializing. May 16 00:09:49.063948 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 16 00:09:49.063958 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 16 00:09:49.063968 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0) May 16 00:09:49.063979 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 16 00:09:49.063989 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 16 00:09:49.064000 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 16 00:09:49.064011 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 16 00:09:49.064023 kernel: ... version: 0 May 16 00:09:49.064033 kernel: ... bit width: 48 May 16 00:09:49.064046 kernel: ... generic registers: 6 May 16 00:09:49.064057 kernel: ... value mask: 0000ffffffffffff May 16 00:09:49.064067 kernel: ... max period: 00007fffffffffff May 16 00:09:49.064077 kernel: ... fixed-purpose events: 0 May 16 00:09:49.064088 kernel: ... event mask: 000000000000003f May 16 00:09:49.064098 kernel: signal: max sigframe size: 1776 May 16 00:09:49.064108 kernel: rcu: Hierarchical SRCU implementation. May 16 00:09:49.064118 kernel: rcu: Max phase no-delay instances is 400. May 16 00:09:49.064129 kernel: smp: Bringing up secondary CPUs ... May 16 00:09:49.064143 kernel: smpboot: x86: Booting SMP configuration: May 16 00:09:49.064154 kernel: .... 
node #0, CPUs: #1 May 16 00:09:49.064164 kernel: smp: Brought up 1 node, 2 CPUs May 16 00:09:49.064175 kernel: smpboot: Max logical packages: 1 May 16 00:09:49.064186 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS) May 16 00:09:49.064197 kernel: devtmpfs: initialized May 16 00:09:49.064207 kernel: x86/mm: Memory block size: 128MB May 16 00:09:49.064218 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 16 00:09:49.064229 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 16 00:09:49.064243 kernel: pinctrl core: initialized pinctrl subsystem May 16 00:09:49.064253 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 16 00:09:49.064264 kernel: audit: initializing netlink subsys (disabled) May 16 00:09:49.064275 kernel: audit: type=2000 audit(1747354187.272:1): state=initialized audit_enabled=0 res=1 May 16 00:09:49.064285 kernel: thermal_sys: Registered thermal governor 'step_wise' May 16 00:09:49.064296 kernel: thermal_sys: Registered thermal governor 'user_space' May 16 00:09:49.064306 kernel: cpuidle: using governor menu May 16 00:09:49.064316 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 16 00:09:49.064326 kernel: dca service started, version 1.12.1 May 16 00:09:49.064340 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 16 00:09:49.065508 kernel: PCI: Using configuration type 1 for base access May 16 00:09:49.065519 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 16 00:09:49.065530 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 16 00:09:49.065541 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 16 00:09:49.065554 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 16 00:09:49.065565 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 16 00:09:49.065576 kernel: ACPI: Added _OSI(Module Device) May 16 00:09:49.065586 kernel: ACPI: Added _OSI(Processor Device) May 16 00:09:49.065600 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 16 00:09:49.065610 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 16 00:09:49.065620 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 16 00:09:49.065629 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 16 00:09:49.065639 kernel: ACPI: Interpreter enabled May 16 00:09:49.065649 kernel: ACPI: PM: (supports S0 S5) May 16 00:09:49.065658 kernel: ACPI: Using IOAPIC for interrupt routing May 16 00:09:49.065668 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 16 00:09:49.065677 kernel: PCI: Using E820 reservations for host bridge windows May 16 00:09:49.065689 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 16 00:09:49.065698 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 16 00:09:49.065935 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 16 00:09:49.066050 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 16 00:09:49.066155 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 16 00:09:49.066170 kernel: PCI host bridge to bus 0000:00 May 16 00:09:49.066281 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 16 00:09:49.068336 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 16 
00:09:49.068502 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 16 00:09:49.068605 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] May 16 00:09:49.068708 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 16 00:09:49.068828 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] May 16 00:09:49.068921 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 16 00:09:49.069049 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 16 00:09:49.069188 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 May 16 00:09:49.069299 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref] May 16 00:09:49.070064 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref] May 16 00:09:49.070175 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff] May 16 00:09:49.070282 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref] May 16 00:09:49.071327 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 16 00:09:49.071503 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 May 16 00:09:49.071623 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff] May 16 00:09:49.071747 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 May 16 00:09:49.071874 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff] May 16 00:09:49.071994 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 May 16 00:09:49.072104 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff] May 16 00:09:49.072225 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 May 16 00:09:49.073375 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff] May 16 00:09:49.073525 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 May 16 00:09:49.073643 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff] May 16 00:09:49.073762 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 May 16 00:09:49.073890 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff] May 16 00:09:49.074012 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 May 16 00:09:49.074129 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff] May 16 00:09:49.074245 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 May 16 00:09:49.076426 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff] May 16 00:09:49.076587 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 May 16 00:09:49.076705 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff] May 16 00:09:49.076840 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 16 00:09:49.076958 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 16 00:09:49.077078 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 16 00:09:49.077189 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f] May 16 00:09:49.077294 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff] May 16 00:09:49.077562 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 16 00:09:49.077671 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] May 16 00:09:49.077775 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 May 16 00:09:49.077895 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff] May 16 00:09:49.078003 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] May 16 
00:09:49.078115 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref] May 16 00:09:49.078226 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] May 16 00:09:49.078331 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] May 16 00:09:49.078460 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] May 16 00:09:49.078588 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 May 16 00:09:49.078710 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit] May 16 00:09:49.078841 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] May 16 00:09:49.078952 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] May 16 00:09:49.079062 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] May 16 00:09:49.079193 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 May 16 00:09:49.079317 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff] May 16 00:09:49.083478 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref] May 16 00:09:49.083610 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] May 16 00:09:49.083717 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] May 16 00:09:49.083832 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] May 16 00:09:49.083953 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 May 16 00:09:49.084056 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] May 16 00:09:49.084159 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] May 16 00:09:49.084277 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] May 16 00:09:49.084440 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] May 16 00:09:49.084574 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 May 16 00:09:49.084692 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff] May 16 00:09:49.084823 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref] May 16 00:09:49.084940 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] May 16 00:09:49.085051 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] May 16 00:09:49.085170 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] May 16 00:09:49.085299 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 May 16 00:09:49.085487 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff] May 16 00:09:49.085603 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref] May 16 00:09:49.085721 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] May 16 00:09:49.085851 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] May 16 00:09:49.085964 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] May 16 00:09:49.085981 kernel: acpiphp: Slot [0] registered May 16 00:09:49.086118 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 May 16 00:09:49.086243 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff] May 16 00:09:49.086385 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref] May 16 00:09:49.086508 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref] May 16 00:09:49.086632 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] May 16 00:09:49.086752 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] May 16 00:09:49.086884 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] May 16 00:09:49.086907 
kernel: acpiphp: Slot [0-2] registered May 16 00:09:49.087024 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] May 16 00:09:49.087136 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] May 16 00:09:49.087250 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] May 16 00:09:49.087266 kernel: acpiphp: Slot [0-3] registered May 16 00:09:49.089029 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] May 16 00:09:49.089175 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] May 16 00:09:49.089286 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] May 16 00:09:49.089304 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 16 00:09:49.089322 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 16 00:09:49.089331 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 16 00:09:49.089341 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 16 00:09:49.089450 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 16 00:09:49.089460 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 16 00:09:49.089471 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 16 00:09:49.089481 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 16 00:09:49.089490 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 16 00:09:49.089500 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 16 00:09:49.089514 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 16 00:09:49.089524 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 16 00:09:49.089534 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 16 00:09:49.089544 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 16 00:09:49.089554 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 16 00:09:49.089563 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 16 00:09:49.089573 kernel: iommu: Default domain type: Translated May 16 00:09:49.089583 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 16 00:09:49.089593 kernel: PCI: Using ACPI for IRQ routing May 16 00:09:49.089606 kernel: PCI: pci_cache_line_size set to 64 bytes May 16 00:09:49.089615 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 16 00:09:49.089625 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] May 16 00:09:49.089738 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 16 00:09:49.089861 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 16 00:09:49.089968 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 16 00:09:49.089984 kernel: vgaarb: loaded May 16 00:09:49.089995 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 16 00:09:49.090010 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 16 00:09:49.090021 kernel: clocksource: Switched to clocksource kvm-clock May 16 00:09:49.090031 kernel: VFS: Disk quotas dquot_6.6.0 May 16 00:09:49.090041 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 16 00:09:49.090051 kernel: pnp: PnP ACPI init May 16 00:09:49.090172 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved May 16 00:09:49.090188 kernel: pnp: PnP ACPI: found 5 devices May 16 00:09:49.090199 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 16 00:09:49.090213 kernel: NET: Registered PF_INET protocol family May 16 
00:09:49.090222 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 16 00:09:49.090232 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 16 00:09:49.090241 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 16 00:09:49.090251 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 16 00:09:49.090260 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 16 00:09:49.090270 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 16 00:09:49.090279 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 16 00:09:49.090289 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 16 00:09:49.090301 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 16 00:09:49.090310 kernel: NET: Registered PF_XDP protocol family May 16 00:09:49.090435 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 16 00:09:49.090539 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 16 00:09:49.090639 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 16 00:09:49.090733 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] May 16 00:09:49.090840 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff] May 16 00:09:49.090939 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff] May 16 00:09:49.091046 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] May 16 00:09:49.091142 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] May 16 00:09:49.091237 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] May 16 00:09:49.091334 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] May 16 00:09:49.091432 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] May 16 00:09:49.091519 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] May 16 00:09:49.091596 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] May 16 00:09:49.091690 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] May 16 00:09:49.091787 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] May 16 00:09:49.091908 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] May 16 00:09:49.092001 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] May 16 00:09:49.092097 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] May 16 00:09:49.092194 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] May 16 00:09:49.092289 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] May 16 00:09:49.092412 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] May 16 00:09:49.092534 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] May 16 00:09:49.092638 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] May 16 00:09:49.092737 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] May 16 00:09:49.092852 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] May 16 00:09:49.092955 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] May 16 00:09:49.093060 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] May 16 00:09:49.093168 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] May 16 00:09:49.093280 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] May 16 00:09:49.093401 
kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] May 16 00:09:49.093500 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] May 16 00:09:49.093608 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] May 16 00:09:49.093716 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] May 16 00:09:49.093839 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] May 16 00:09:49.093946 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] May 16 00:09:49.094055 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] May 16 00:09:49.094157 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 16 00:09:49.094247 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 16 00:09:49.094332 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 16 00:09:49.094512 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] May 16 00:09:49.094600 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 16 00:09:49.094688 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] May 16 00:09:49.094814 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] May 16 00:09:49.094913 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] May 16 00:09:49.095013 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] May 16 00:09:49.095107 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] May 16 00:09:49.095214 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] May 16 00:09:49.095306 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] May 16 00:09:49.095561 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] May 16 00:09:49.095650 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] May 16 00:09:49.095753 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] May 16 00:09:49.095859 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] May 16 00:09:49.095953 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] May 16 00:09:49.096048 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] May 16 00:09:49.096144 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] May 16 00:09:49.096232 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] May 16 00:09:49.096321 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] May 16 00:09:49.096443 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] May 16 00:09:49.096537 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] May 16 00:09:49.096626 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] May 16 00:09:49.096741 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] May 16 00:09:49.096852 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] May 16 00:09:49.096945 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] May 16 00:09:49.096962 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 16 00:09:49.096973 kernel: PCI: CLS 0 bytes, default 64 May 16 00:09:49.096983 kernel: Initialise system trusted keyrings May 16 00:09:49.096994 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 16 00:09:49.097005 kernel: Key type asymmetric registered May 16 00:09:49.097019 kernel: Asymmetric key parser 'x509' registered May 16 00:09:49.097029 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded 
(major 251) May 16 00:09:49.097040 kernel: io scheduler mq-deadline registered May 16 00:09:49.097050 kernel: io scheduler kyber registered May 16 00:09:49.097061 kernel: io scheduler bfq registered May 16 00:09:49.097172 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 May 16 00:09:49.097274 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 May 16 00:09:49.097444 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 May 16 00:09:49.097545 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 May 16 00:09:49.097630 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 May 16 00:09:49.097701 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 May 16 00:09:49.097775 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 May 16 00:09:49.097867 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 May 16 00:09:49.097942 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 May 16 00:09:49.098014 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 May 16 00:09:49.098088 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 May 16 00:09:49.098160 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 May 16 00:09:49.098239 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 May 16 00:09:49.098312 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 May 16 00:09:49.098407 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 May 16 00:09:49.098496 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 May 16 00:09:49.098508 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 16 00:09:49.098581 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 May 16 00:09:49.098658 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 May 16 00:09:49.098668 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 16 00:09:49.098679 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 May 16 00:09:49.098686 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 16 00:09:49.098694 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 16 00:09:49.098701 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 16 00:09:49.098709 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 16 00:09:49.098717 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 16 00:09:49.098812 kernel: rtc_cmos 00:03: RTC can wake from S4 May 16 00:09:49.098824 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 16 00:09:49.098892 kernel: rtc_cmos 00:03: registered as rtc0 May 16 00:09:49.098961 kernel: rtc_cmos 00:03: setting system clock to 2025-05-16T00:09:48 UTC (1747354188) May 16 00:09:49.099025 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 16 00:09:49.099035 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 16 00:09:49.099043 kernel: NET: Registered PF_INET6 protocol family May 16 00:09:49.099050 kernel: Segment Routing with IPv6 May 16 00:09:49.099058 kernel: In-situ OAM (IOAM) with IPv6 May 16 00:09:49.099065 kernel: NET: Registered PF_PACKET protocol family May 16 00:09:49.099072 kernel: Key type dns_resolver registered May 16 00:09:49.099082 kernel: IPI shorthand broadcast: enabled May 16 00:09:49.099089 kernel: sched_clock: Marking stable (1309011630, 144841713)->(1466726011, -12872668) May 16 00:09:49.099098 kernel: registered taskstats version 1 May 16 00:09:49.099105 kernel: Loading compiled-in X.509 certificates May 16 00:09:49.099112 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module 
signing key for 6.6.90-flatcar: 563478d245b598189519397611f5bddee97f3fc1' May 16 00:09:49.099120 kernel: Key type .fscrypt registered May 16 00:09:49.099127 kernel: Key type fscrypt-provisioning registered May 16 00:09:49.099134 kernel: ima: No TPM chip found, activating TPM-bypass! May 16 00:09:49.099143 kernel: ima: Allocated hash algorithm: sha1 May 16 00:09:49.099150 kernel: ima: No architecture policies found May 16 00:09:49.099157 kernel: clk: Disabling unused clocks May 16 00:09:49.099165 kernel: Freeing unused kernel image (initmem) memory: 42988K May 16 00:09:49.099172 kernel: Write protecting the kernel read-only data: 36864k May 16 00:09:49.099179 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K May 16 00:09:49.099186 kernel: Run /init as init process May 16 00:09:49.099193 kernel: with arguments: May 16 00:09:49.099200 kernel: /init May 16 00:09:49.099207 kernel: with environment: May 16 00:09:49.099216 kernel: HOME=/ May 16 00:09:49.099223 kernel: TERM=linux May 16 00:09:49.099230 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 16 00:09:49.099240 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 16 00:09:49.099250 systemd[1]: Detected virtualization kvm. May 16 00:09:49.099258 systemd[1]: Detected architecture x86-64. May 16 00:09:49.099265 systemd[1]: Running in initrd. May 16 00:09:49.099274 systemd[1]: No hostname configured, using default hostname. May 16 00:09:49.099282 systemd[1]: Hostname set to . May 16 00:09:49.099290 systemd[1]: Initializing machine ID from VM UUID. May 16 00:09:49.099297 systemd[1]: Queued start job for default target initrd.target. May 16 00:09:49.099305 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 00:09:49.099312 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 00:09:49.099321 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 16 00:09:49.099329 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 00:09:49.099338 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 16 00:09:49.099423 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 16 00:09:49.099435 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 16 00:09:49.099446 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 16 00:09:49.099457 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 00:09:49.099468 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 00:09:49.099479 systemd[1]: Reached target paths.target - Path Units. May 16 00:09:49.099493 systemd[1]: Reached target slices.target - Slice Units. May 16 00:09:49.099502 systemd[1]: Reached target swap.target - Swaps. May 16 00:09:49.099509 systemd[1]: Reached target timers.target - Timer Units. May 16 00:09:49.099517 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
May 16 00:09:49.099524 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 00:09:49.099532 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 16 00:09:49.099540 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 16 00:09:49.099547 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 00:09:49.099555 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 00:09:49.099564 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 16 00:09:49.099572 systemd[1]: Reached target sockets.target - Socket Units. May 16 00:09:49.099579 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 16 00:09:49.099587 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 00:09:49.099595 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 16 00:09:49.099603 systemd[1]: Starting systemd-fsck-usr.service... May 16 00:09:49.099611 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 00:09:49.099618 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 00:09:49.099627 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:09:49.099635 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 16 00:09:49.099666 systemd-journald[188]: Collecting audit messages is disabled. May 16 00:09:49.099688 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 00:09:49.099697 systemd[1]: Finished systemd-fsck-usr.service. May 16 00:09:49.099706 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 16 00:09:49.099714 systemd-journald[188]: Journal started May 16 00:09:49.099734 systemd-journald[188]: Runtime Journal (/run/log/journal/48bbcab454384b5f8e3810587bd08ffc) is 4.8M, max 38.4M, 33.6M free. May 16 00:09:49.060027 systemd-modules-load[189]: Inserted module 'overlay' May 16 00:09:49.135822 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 16 00:09:49.135850 kernel: Bridge firewalling registered May 16 00:09:49.108597 systemd-modules-load[189]: Inserted module 'br_netfilter' May 16 00:09:49.145382 systemd[1]: Started systemd-journald.service - Journal Service. May 16 00:09:49.145838 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 00:09:49.147262 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:09:49.154634 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:09:49.157702 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 00:09:49.160862 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 00:09:49.163517 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 00:09:49.174029 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 16 00:09:49.186417 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 00:09:49.189682 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
May 16 00:09:49.192709 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:09:49.193783 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 00:09:49.199542 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 16 00:09:49.203494 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 00:09:49.215045 dracut-cmdline[222]: dracut-dracut-053 May 16 00:09:49.218380 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=3eac1ac065bd62ee8513964addbc130593421d288f32dda9b1fb7c667f95e96b May 16 00:09:49.232067 systemd-resolved[224]: Positive Trust Anchors: May 16 00:09:49.232088 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 00:09:49.232127 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 00:09:49.236340 systemd-resolved[224]: Defaulting to hostname 'linux'. May 16 00:09:49.243612 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 00:09:49.244583 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 00:09:49.285413 kernel: SCSI subsystem initialized May 16 00:09:49.296389 kernel: Loading iSCSI transport class v2.0-870. May 16 00:09:49.309399 kernel: iscsi: registered transport (tcp) May 16 00:09:49.330644 kernel: iscsi: registered transport (qla4xxx) May 16 00:09:49.330729 kernel: QLogic iSCSI HBA Driver May 16 00:09:49.379604 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 16 00:09:49.386550 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 16 00:09:49.433498 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 16 00:09:49.433599 kernel: device-mapper: uevent: version 1.0.3 May 16 00:09:49.433621 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 16 00:09:49.475410 kernel: raid6: avx2x4 gen() 29910 MB/s May 16 00:09:49.492397 kernel: raid6: avx2x2 gen() 31252 MB/s May 16 00:09:49.509635 kernel: raid6: avx2x1 gen() 26296 MB/s May 16 00:09:49.509688 kernel: raid6: using algorithm avx2x2 gen() 31252 MB/s May 16 00:09:49.529392 kernel: raid6: .... xor() 18590 MB/s, rmw enabled May 16 00:09:49.529460 kernel: raid6: using avx2x2 recovery algorithm May 16 00:09:49.550408 kernel: xor: automatically using best checksumming function avx May 16 00:09:49.740378 kernel: Btrfs loaded, zoned=no, fsverity=no May 16 00:09:49.754832 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
May 16 00:09:49.762679 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 00:09:49.774828 systemd-udevd[407]: Using default interface naming scheme 'v255'. May 16 00:09:49.778612 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 00:09:49.787614 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 16 00:09:49.807124 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation May 16 00:09:49.846685 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 16 00:09:49.854593 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 00:09:49.908340 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 00:09:49.918712 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 16 00:09:49.936946 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 16 00:09:49.939423 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 16 00:09:49.940068 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 00:09:49.943604 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 00:09:49.950783 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 16 00:09:49.972830 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 16 00:09:50.044461 kernel: cryptd: max_cpu_qlen set to 1000 May 16 00:09:50.049365 kernel: scsi host0: Virtio SCSI HBA May 16 00:09:50.065396 kernel: AVX2 version of gcm_enc/dec engaged. May 16 00:09:50.068241 kernel: AES CTR mode by8 optimization enabled May 16 00:09:50.068291 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 May 16 00:09:50.069359 kernel: libata version 3.00 loaded. May 16 00:09:50.082629 kernel: ACPI: bus type USB registered May 16 00:09:50.082668 kernel: usbcore: registered new interface driver usbfs May 16 00:09:50.087962 kernel: usbcore: registered new interface driver hub May 16 00:09:50.088007 kernel: usbcore: registered new device driver usb May 16 00:09:50.091867 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 00:09:50.091984 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:09:50.095187 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:09:50.095710 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:09:50.096475 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:09:50.096974 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:09:50.110619 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 16 00:09:50.117384 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller May 16 00:09:50.122592 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 May 16 00:09:50.122790 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 May 16 00:09:50.130316 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller May 16 00:09:50.130502 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 May 16 00:09:50.130599 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed May 16 00:09:50.130687 kernel: ahci 0000:00:1f.2: version 3.0 May 16 00:09:50.130848 kernel: hub 1-0:1.0: USB hub found May 16 00:09:50.130968 kernel: hub 1-0:1.0: 4 ports detected May 16 00:09:50.131065 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. May 16 00:09:50.131163 kernel: hub 2-0:1.0: USB hub found May 16 00:09:50.131262 kernel: hub 2-0:1.0: 4 ports detected May 16 00:09:50.138391 kernel: sd 0:0:0:0: Power-on or device reset occurred May 16 00:09:50.138955 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) May 16 00:09:50.139151 kernel: sd 0:0:0:0: [sda] Write Protect is off May 16 00:09:50.139457 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 May 16 00:09:50.140047 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 16 00:09:50.140060 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 16 00:09:50.140156 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 16 00:09:50.140551 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 16 00:09:50.142513 kernel: scsi host1: ahci May 16 00:09:50.142639 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 16 00:09:50.142655 kernel: GPT:17805311 != 80003071 May 16 00:09:50.142663 kernel: GPT:Alternate GPT header not at the end of the disk. May 16 00:09:50.142672 kernel: GPT:17805311 != 80003071 May 16 00:09:50.142680 kernel: GPT: Use GNU Parted to correct GPT errors. May 16 00:09:50.142690 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 16 00:09:50.142698 kernel: scsi host2: ahci May 16 00:09:50.142793 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 16 00:09:50.143447 kernel: scsi host3: ahci May 16 00:09:50.144363 kernel: scsi host4: ahci May 16 00:09:50.144532 kernel: scsi host5: ahci May 16 00:09:50.144623 kernel: scsi host6: ahci May 16 00:09:50.144707 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 51 May 16 00:09:50.144717 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 51 May 16 00:09:50.144725 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 51 May 16 00:09:50.144734 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 51 May 16 00:09:50.144746 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 51 May 16 00:09:50.144754 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 51 May 16 00:09:50.204661 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. May 16 00:09:50.232162 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (466) May 16 00:09:50.236354 kernel: BTRFS: device fsid da1480a3-a7d8-4e12-bbe1-1257540eb9ae devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (461) May 16 00:09:50.237939 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
May 16 00:09:50.239513 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:09:50.246150 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 16 00:09:50.251643 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. May 16 00:09:50.252363 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. May 16 00:09:50.266573 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 16 00:09:50.270427 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:09:50.275463 disk-uuid[571]: Primary Header is updated. May 16 00:09:50.275463 disk-uuid[571]: Secondary Entries is updated. May 16 00:09:50.275463 disk-uuid[571]: Secondary Header is updated. May 16 00:09:50.283364 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 16 00:09:50.295071 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:09:50.375447 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd May 16 00:09:50.459943 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 16 00:09:50.460072 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 16 00:09:50.466342 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 16 00:09:50.466454 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 16 00:09:50.469434 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 16 00:09:50.469494 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 16 00:09:50.473388 kernel: ata1.00: applying bridge limits May 16 00:09:50.476894 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 16 00:09:50.477411 kernel: ata1.00: configured for UDMA/100 May 16 00:09:50.483432 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 16 00:09:50.524447 kernel: hid: raw HID events driver (C) Jiri Kosina May 16 00:09:50.533728 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 16 00:09:50.534415 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 16 00:09:50.536640 kernel: usbcore: registered new interface driver usbhid May 16 00:09:50.536686 kernel: usbhid: USB HID core driver May 16 00:09:50.544402 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 May 16 00:09:50.544486 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 May 16 00:09:50.549457 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 May 16 00:09:51.298621 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 16 00:09:51.301893 disk-uuid[572]: The operation has completed successfully. May 16 00:09:51.378572 systemd[1]: disk-uuid.service: Deactivated successfully. May 16 00:09:51.378699 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 16 00:09:51.401687 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 16 00:09:51.405657 sh[601]: Success May 16 00:09:51.426165 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 16 00:09:51.490657 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 16 00:09:51.501517 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 16 00:09:51.503004 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 16 00:09:51.536917 kernel: BTRFS info (device dm-0): first mount of filesystem da1480a3-a7d8-4e12-bbe1-1257540eb9ae May 16 00:09:51.536997 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 16 00:09:51.540392 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 16 00:09:51.543923 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 16 00:09:51.546621 kernel: BTRFS info (device dm-0): using free space tree May 16 00:09:51.561432 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 16 00:09:51.565467 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 16 00:09:51.567331 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 16 00:09:51.573618 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 16 00:09:51.579572 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 16 00:09:51.601508 kernel: BTRFS info (device sda6): first mount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 16 00:09:51.601591 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 16 00:09:51.601613 kernel: BTRFS info (device sda6): using free space tree May 16 00:09:51.611379 kernel: BTRFS info (device sda6): enabling ssd optimizations May 16 00:09:51.611463 kernel: BTRFS info (device sda6): auto enabling async discard May 16 00:09:51.625303 systemd[1]: mnt-oem.mount: Deactivated successfully. May 16 00:09:51.628166 kernel: BTRFS info (device sda6): last unmount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 16 00:09:51.633689 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 16 00:09:51.641846 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 16 00:09:51.693320 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 00:09:51.710491 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 00:09:51.725734 ignition[729]: Ignition 2.20.0 May 16 00:09:51.727353 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 16 00:09:51.725744 ignition[729]: Stage: fetch-offline May 16 00:09:51.725776 ignition[729]: no configs at "/usr/lib/ignition/base.d" May 16 00:09:51.725782 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 16 00:09:51.725868 ignition[729]: parsed url from cmdline: "" May 16 00:09:51.725871 ignition[729]: no config URL provided May 16 00:09:51.725875 ignition[729]: reading system config file "/usr/lib/ignition/user.ign" May 16 00:09:51.725880 ignition[729]: no config at "/usr/lib/ignition/user.ign" May 16 00:09:51.725885 ignition[729]: failed to fetch config: resource requires networking May 16 00:09:51.726031 ignition[729]: Ignition finished successfully May 16 00:09:51.735558 systemd-networkd[785]: lo: Link UP May 16 00:09:51.735569 systemd-networkd[785]: lo: Gained carrier May 16 00:09:51.737225 systemd-networkd[785]: Enumeration completed May 16 00:09:51.737417 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 00:09:51.737938 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
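Note: the BTRFS warning above points out that the bare 'nologreplay' mount option is deprecated in favour of 'rescue=nologreplay'. Purely as an illustration of the replacement syntax (not a command taken from this boot), a read-only mount of the verity-backed device using the newer option could look like this; the device and mount point are the ones named in the log:

# Illustration of the non-deprecated option named in the warning above.
import subprocess

subprocess.run(
    ["mount", "-o", "ro,rescue=nologreplay", "/dev/mapper/usr", "/sysusr/usr"],
    check=True,
)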
May 16 00:09:51.737941 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 00:09:51.738973 systemd[1]: Reached target network.target - Network. May 16 00:09:51.740535 systemd-networkd[785]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 00:09:51.740539 systemd-networkd[785]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 00:09:51.741152 systemd-networkd[785]: eth0: Link UP May 16 00:09:51.741155 systemd-networkd[785]: eth0: Gained carrier May 16 00:09:51.741160 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 00:09:51.746781 systemd-networkd[785]: eth1: Link UP May 16 00:09:51.746785 systemd-networkd[785]: eth1: Gained carrier May 16 00:09:51.746798 systemd-networkd[785]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 00:09:51.747136 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 16 00:09:51.759001 ignition[791]: Ignition 2.20.0 May 16 00:09:51.759014 ignition[791]: Stage: fetch May 16 00:09:51.759171 ignition[791]: no configs at "/usr/lib/ignition/base.d" May 16 00:09:51.759181 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 16 00:09:51.759258 ignition[791]: parsed url from cmdline: "" May 16 00:09:51.759261 ignition[791]: no config URL provided May 16 00:09:51.759266 ignition[791]: reading system config file "/usr/lib/ignition/user.ign" May 16 00:09:51.759272 ignition[791]: no config at "/usr/lib/ignition/user.ign" May 16 00:09:51.759292 ignition[791]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 May 16 00:09:51.759442 ignition[791]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable May 16 00:09:51.780443 systemd-networkd[785]: eth1: DHCPv4 address 10.0.0.4/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 00:09:51.815428 systemd-networkd[785]: eth0: DHCPv4 address 65.108.51.217/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 16 00:09:51.960216 ignition[791]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 May 16 00:09:51.965780 ignition[791]: GET result: OK May 16 00:09:51.965946 ignition[791]: parsing config with SHA512: 471d20997a7745a60b53b02400e80552413af89cbd0e03163b9ee7dd4c907ea2d4060c92c0b79ccf345134a62043f1971e5813bc959c10ee886a2fd4cd0ab081 May 16 00:09:51.973400 unknown[791]: fetched base config from "system" May 16 00:09:51.973428 unknown[791]: fetched base config from "system" May 16 00:09:51.974194 ignition[791]: fetch: fetch complete May 16 00:09:51.973440 unknown[791]: fetched user config from "hetzner" May 16 00:09:51.974207 ignition[791]: fetch: fetch passed May 16 00:09:51.974300 ignition[791]: Ignition finished successfully May 16 00:09:51.979082 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 16 00:09:51.986679 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 16 00:09:52.012519 ignition[798]: Ignition 2.20.0 May 16 00:09:52.012539 ignition[798]: Stage: kargs May 16 00:09:52.012863 ignition[798]: no configs at "/usr/lib/ignition/base.d" May 16 00:09:52.016074 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
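Note: the fetch stage first fails with "network is unreachable", succeeds on attempt #2 once DHCP has configured the interfaces, and then logs the SHA512 of the retrieved config. A small sketch of the same fetch-and-hash pattern against the Hetzner userdata endpoint shown in the log; the retry count and delay are arbitrary choices for the example, not Ignition's actual backoff:

# Fetch the userdata endpoint seen above and report its SHA512, retrying while
# the network is still coming up.
import hashlib
import time
import urllib.error
import urllib.request

URL = "http://169.254.169.254/hetzner/v1/userdata"

def fetch_userdata(retries: int = 5, delay: float = 2.0) -> bytes:
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(URL, timeout=5) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            print(f"GET {URL}: attempt #{attempt} failed: {err}")
            time.sleep(delay)
    raise RuntimeError("userdata not reachable")

data = fetch_userdata()
print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())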
May 16 00:09:52.012880 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 16 00:09:52.014379 ignition[798]: kargs: kargs passed May 16 00:09:52.014449 ignition[798]: Ignition finished successfully May 16 00:09:52.027670 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 16 00:09:52.045699 ignition[805]: Ignition 2.20.0 May 16 00:09:52.045719 ignition[805]: Stage: disks May 16 00:09:52.046025 ignition[805]: no configs at "/usr/lib/ignition/base.d" May 16 00:09:52.048997 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 16 00:09:52.046040 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 16 00:09:52.057439 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 16 00:09:52.047464 ignition[805]: disks: disks passed May 16 00:09:52.058609 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 16 00:09:52.047534 ignition[805]: Ignition finished successfully May 16 00:09:52.059766 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 00:09:52.061988 systemd[1]: Reached target sysinit.target - System Initialization. May 16 00:09:52.064219 systemd[1]: Reached target basic.target - Basic System. May 16 00:09:52.073636 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 16 00:09:52.094954 systemd-fsck[813]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 16 00:09:52.098007 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 16 00:09:52.105475 systemd[1]: Mounting sysroot.mount - /sysroot... May 16 00:09:52.203430 kernel: EXT4-fs (sda9): mounted filesystem 13a141f5-2ff0-46d9-bee3-974c86536128 r/w with ordered data mode. Quota mode: none. May 16 00:09:52.204427 systemd[1]: Mounted sysroot.mount - /sysroot. May 16 00:09:52.205287 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 16 00:09:52.215467 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 00:09:52.218432 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 16 00:09:52.223501 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 16 00:09:52.225607 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 16 00:09:52.225632 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 16 00:09:52.229076 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 16 00:09:52.234381 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (821) May 16 00:09:52.238902 kernel: BTRFS info (device sda6): first mount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 16 00:09:52.238924 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 16 00:09:52.238933 kernel: BTRFS info (device sda6): using free space tree May 16 00:09:52.239727 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 16 00:09:52.243377 kernel: BTRFS info (device sda6): enabling ssd optimizations May 16 00:09:52.243403 kernel: BTRFS info (device sda6): auto enabling async discard May 16 00:09:52.251911 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 16 00:09:52.308083 initrd-setup-root[848]: cut: /sysroot/etc/passwd: No such file or directory May 16 00:09:52.312613 initrd-setup-root[855]: cut: /sysroot/etc/group: No such file or directory May 16 00:09:52.317190 initrd-setup-root[862]: cut: /sysroot/etc/shadow: No such file or directory May 16 00:09:52.319766 coreos-metadata[823]: May 16 00:09:52.319 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 May 16 00:09:52.322424 coreos-metadata[823]: May 16 00:09:52.321 INFO Fetch successful May 16 00:09:52.323408 initrd-setup-root[869]: cut: /sysroot/etc/gshadow: No such file or directory May 16 00:09:52.324192 coreos-metadata[823]: May 16 00:09:52.322 INFO wrote hostname ci-4152-2-3-n-e053cdada0 to /sysroot/etc/hostname May 16 00:09:52.323893 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 16 00:09:52.421447 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 16 00:09:52.427538 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 16 00:09:52.430536 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 16 00:09:52.443384 kernel: BTRFS info (device sda6): last unmount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 16 00:09:52.471084 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 16 00:09:52.473566 ignition[941]: INFO : Ignition 2.20.0 May 16 00:09:52.473566 ignition[941]: INFO : Stage: mount May 16 00:09:52.473566 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:09:52.473566 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 16 00:09:52.473566 ignition[941]: INFO : mount: mount passed May 16 00:09:52.473566 ignition[941]: INFO : Ignition finished successfully May 16 00:09:52.473246 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 16 00:09:52.480582 systemd[1]: Starting ignition-files.service - Ignition (files)... May 16 00:09:52.532492 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 16 00:09:52.538712 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 00:09:52.570403 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (954) May 16 00:09:52.576223 kernel: BTRFS info (device sda6): first mount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 16 00:09:52.576309 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 16 00:09:52.580948 kernel: BTRFS info (device sda6): using free space tree May 16 00:09:52.593438 kernel: BTRFS info (device sda6): enabling ssd optimizations May 16 00:09:52.593540 kernel: BTRFS info (device sda6): auto enabling async discard May 16 00:09:52.599714 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
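Note: flatcar-metadata-hostname.service (coreos-metadata) fetches the hostname from the Hetzner metadata service and writes it into the new root at /sysroot/etc/hostname, as the log shows. A minimal sketch of that step; error handling and write semantics are simplified for the example:

# Fetch the instance hostname from the metadata endpoint seen above and write
# it to /sysroot/etc/hostname, roughly what coreos-metadata reports doing here.
import urllib.request

URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"

with urllib.request.urlopen(URL, timeout=5) as resp:
    hostname = resp.read().decode().strip()

with open("/sysroot/etc/hostname", "w") as f:
    f.write(hostname + "\n")

print(f"wrote hostname {hostname} to /sysroot/etc/hostname")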
May 16 00:09:52.636372 ignition[971]: INFO : Ignition 2.20.0 May 16 00:09:52.636372 ignition[971]: INFO : Stage: files May 16 00:09:52.637478 ignition[971]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:09:52.637478 ignition[971]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 16 00:09:52.639016 ignition[971]: DEBUG : files: compiled without relabeling support, skipping May 16 00:09:52.639663 ignition[971]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 16 00:09:52.639663 ignition[971]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 16 00:09:52.643052 ignition[971]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 16 00:09:52.644749 ignition[971]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 16 00:09:52.644749 ignition[971]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 16 00:09:52.643673 unknown[971]: wrote ssh authorized keys file for user: core May 16 00:09:52.649205 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 16 00:09:52.649205 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 16 00:09:52.649205 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 16 00:09:52.649205 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 16 00:09:52.649205 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:09:52.649205 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:09:52.649205 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 16 00:09:52.649205 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 16 00:09:52.649205 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 16 00:09:52.649205 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 May 16 00:09:52.912614 systemd-networkd[785]: eth1: Gained IPv6LL May 16 00:09:53.204320 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK May 16 00:09:53.416339 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 16 00:09:53.416339 ignition[971]: INFO : files: op(8): [started] processing unit "containerd.service" May 16 00:09:53.420536 ignition[971]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 16 00:09:53.420536 ignition[971]: INFO : files: op(8): op(9): [finished] 
writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 16 00:09:53.420536 ignition[971]: INFO : files: op(8): [finished] processing unit "containerd.service" May 16 00:09:53.420536 ignition[971]: INFO : files: op(a): [started] processing unit "coreos-metadata.service" May 16 00:09:53.420536 ignition[971]: INFO : files: op(a): op(b): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 16 00:09:53.420536 ignition[971]: INFO : files: op(a): op(b): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 16 00:09:53.420536 ignition[971]: INFO : files: op(a): [finished] processing unit "coreos-metadata.service" May 16 00:09:53.420536 ignition[971]: INFO : files: createResultFile: createFiles: op(c): [started] writing file "/sysroot/etc/.ignition-result.json" May 16 00:09:53.420536 ignition[971]: INFO : files: createResultFile: createFiles: op(c): [finished] writing file "/sysroot/etc/.ignition-result.json" May 16 00:09:53.420536 ignition[971]: INFO : files: files passed May 16 00:09:53.420536 ignition[971]: INFO : Ignition finished successfully May 16 00:09:53.419182 systemd[1]: Finished ignition-files.service - Ignition (files). May 16 00:09:53.428461 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 16 00:09:53.444558 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 16 00:09:53.447140 systemd[1]: ignition-quench.service: Deactivated successfully. May 16 00:09:53.458876 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 00:09:53.458876 initrd-setup-root-after-ignition[999]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 16 00:09:53.448609 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 16 00:09:53.463076 initrd-setup-root-after-ignition[1004]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 00:09:53.461080 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 00:09:53.462868 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 16 00:09:53.471605 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 16 00:09:53.494563 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 16 00:09:53.494692 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 16 00:09:53.497034 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 16 00:09:53.498394 systemd[1]: Reached target initrd.target - Initrd Default Target. May 16 00:09:53.500039 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 16 00:09:53.506582 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 16 00:09:53.520149 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 00:09:53.526515 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 16 00:09:53.540515 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
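Note: the files stage above writes SSH keys for "core", a few files, a symlink that enables the kubernetes sysext, and systemd drop-ins for containerd.service and coreos-metadata.service. For orientation only, a sketch of the general shape of an Ignition v3 config that would produce entries like these; all contents are abbreviated placeholders, not the config actually served to this machine:

# Illustrative Ignition v3 config skeleton producing log entries similar to the
# files stage above. Every value here is a placeholder or example.
import json

config = {
    "ignition": {"version": "3.3.0"},
    "storage": {
        "links": [
            {
                # mirrors the link written above to enable the kubernetes sysext
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw",
            }
        ],
        "files": [
            {
                "path": "/etc/flatcar/update.conf",
                "contents": {"source": "data:,GROUP%3Dstable%0A"},  # placeholder body
            }
        ],
    },
    "systemd": {
        "units": [
            {
                "name": "containerd.service",
                "dropins": [
                    {
                        "name": "10-use-cgroupfs.conf",
                        "contents": "[Service]\n# placeholder drop-in body\n",
                    }
                ],
            }
        ],
    },
}

print(json.dumps(config, indent=2))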
May 16 00:09:53.542446 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 00:09:53.544432 systemd[1]: Stopped target timers.target - Timer Units. May 16 00:09:53.545221 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 16 00:09:53.545377 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 00:09:53.547182 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 16 00:09:53.548251 systemd[1]: Stopped target basic.target - Basic System. May 16 00:09:53.549855 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 16 00:09:53.551290 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 16 00:09:53.552739 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 16 00:09:53.554424 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 16 00:09:53.556080 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 16 00:09:53.557768 systemd[1]: Stopped target sysinit.target - System Initialization. May 16 00:09:53.559376 systemd[1]: Stopped target local-fs.target - Local File Systems. May 16 00:09:53.560996 systemd[1]: Stopped target swap.target - Swaps. May 16 00:09:53.562475 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 16 00:09:53.562600 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 16 00:09:53.564381 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 16 00:09:53.565477 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 00:09:53.566912 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 16 00:09:53.567398 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 00:09:53.568600 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 16 00:09:53.568725 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 16 00:09:53.571091 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 16 00:09:53.571230 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 00:09:53.572258 systemd[1]: ignition-files.service: Deactivated successfully. May 16 00:09:53.572443 systemd[1]: Stopped ignition-files.service - Ignition (files). May 16 00:09:53.573634 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 16 00:09:53.573755 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 16 00:09:53.585841 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 16 00:09:53.587262 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 16 00:09:53.587474 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 16 00:09:53.592779 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 16 00:09:53.594227 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 16 00:09:53.595479 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 16 00:09:53.597321 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 16 00:09:53.598235 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 16 00:09:53.603110 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
May 16 00:09:53.605726 ignition[1024]: INFO : Ignition 2.20.0 May 16 00:09:53.605726 ignition[1024]: INFO : Stage: umount May 16 00:09:53.615133 ignition[1024]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:09:53.615133 ignition[1024]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 16 00:09:53.615133 ignition[1024]: INFO : umount: umount passed May 16 00:09:53.615133 ignition[1024]: INFO : Ignition finished successfully May 16 00:09:53.613190 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 16 00:09:53.614879 systemd[1]: ignition-mount.service: Deactivated successfully. May 16 00:09:53.614970 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 16 00:09:53.622637 systemd[1]: ignition-disks.service: Deactivated successfully. May 16 00:09:53.622707 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 16 00:09:53.623588 systemd[1]: ignition-kargs.service: Deactivated successfully. May 16 00:09:53.623647 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 16 00:09:53.626457 systemd[1]: ignition-fetch.service: Deactivated successfully. May 16 00:09:53.626509 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 16 00:09:53.629559 systemd[1]: Stopped target network.target - Network. May 16 00:09:53.630761 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 16 00:09:53.630829 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 16 00:09:53.631589 systemd[1]: Stopped target paths.target - Path Units. May 16 00:09:53.632202 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 16 00:09:53.634042 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 00:09:53.636494 systemd[1]: Stopped target slices.target - Slice Units. May 16 00:09:53.637689 systemd[1]: Stopped target sockets.target - Socket Units. May 16 00:09:53.641773 systemd[1]: iscsid.socket: Deactivated successfully. May 16 00:09:53.641824 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 16 00:09:53.642382 systemd[1]: iscsiuio.socket: Deactivated successfully. May 16 00:09:53.642424 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 00:09:53.655188 systemd[1]: ignition-setup.service: Deactivated successfully. May 16 00:09:53.655255 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 16 00:09:53.656590 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 16 00:09:53.656636 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 16 00:09:53.661188 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 16 00:09:53.665546 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 16 00:09:53.667104 systemd-networkd[785]: eth0: DHCPv6 lease lost May 16 00:09:53.668723 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 16 00:09:53.669888 systemd[1]: sysroot-boot.service: Deactivated successfully. May 16 00:09:53.670014 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 16 00:09:53.671443 systemd-networkd[785]: eth1: DHCPv6 lease lost May 16 00:09:53.672072 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 16 00:09:53.672146 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 16 00:09:53.675050 systemd[1]: systemd-networkd.service: Deactivated successfully. 
May 16 00:09:53.675201 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 16 00:09:53.676601 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 16 00:09:53.676659 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 16 00:09:53.685513 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 16 00:09:53.686606 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 16 00:09:53.686668 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 00:09:53.687520 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 00:09:53.689588 systemd[1]: systemd-resolved.service: Deactivated successfully. May 16 00:09:53.690082 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 16 00:09:53.699969 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:09:53.700027 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 00:09:53.701664 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 16 00:09:53.701700 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 16 00:09:53.703740 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 16 00:09:53.703799 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 00:09:53.705141 systemd[1]: systemd-udevd.service: Deactivated successfully. May 16 00:09:53.705265 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 00:09:53.706298 systemd[1]: network-cleanup.service: Deactivated successfully. May 16 00:09:53.706387 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 16 00:09:53.707757 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 16 00:09:53.707815 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 16 00:09:53.708777 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 16 00:09:53.708801 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 16 00:09:53.709886 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 16 00:09:53.709919 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 16 00:09:53.711288 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 16 00:09:53.711322 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 16 00:09:53.712252 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 00:09:53.712285 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:09:53.720617 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 16 00:09:53.721107 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 16 00:09:53.721143 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 00:09:53.721645 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:09:53.721675 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:09:53.725578 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 16 00:09:53.725688 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 16 00:09:53.727272 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
May 16 00:09:53.732457 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 16 00:09:53.738489 systemd[1]: Switching root. May 16 00:09:53.786539 systemd-journald[188]: Journal stopped May 16 00:09:55.013292 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). May 16 00:09:55.013366 kernel: SELinux: policy capability network_peer_controls=1 May 16 00:09:55.013379 kernel: SELinux: policy capability open_perms=1 May 16 00:09:55.013389 kernel: SELinux: policy capability extended_socket_class=1 May 16 00:09:55.013398 kernel: SELinux: policy capability always_check_network=0 May 16 00:09:55.013409 kernel: SELinux: policy capability cgroup_seclabel=1 May 16 00:09:55.013418 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 16 00:09:55.013427 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 16 00:09:55.013437 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 16 00:09:55.013446 kernel: audit: type=1403 audit(1747354194.031:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 16 00:09:55.013456 systemd[1]: Successfully loaded SELinux policy in 68.329ms. May 16 00:09:55.013472 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.358ms. May 16 00:09:55.013483 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 16 00:09:55.013493 systemd[1]: Detected virtualization kvm. May 16 00:09:55.013503 systemd[1]: Detected architecture x86-64. May 16 00:09:55.013513 systemd[1]: Detected first boot. May 16 00:09:55.013523 systemd[1]: Hostname set to . May 16 00:09:55.013533 systemd[1]: Initializing machine ID from VM UUID. May 16 00:09:55.013543 zram_generator::config[1084]: No configuration found. May 16 00:09:55.013553 systemd[1]: Populated /etc with preset unit settings. May 16 00:09:55.013563 systemd[1]: Queued start job for default target multi-user.target. May 16 00:09:55.013573 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 16 00:09:55.013583 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 16 00:09:55.013592 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 16 00:09:55.013606 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 16 00:09:55.013615 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 16 00:09:55.013625 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 16 00:09:55.013635 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 16 00:09:55.013645 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 16 00:09:55.013655 systemd[1]: Created slice user.slice - User and Session Slice. May 16 00:09:55.013665 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 00:09:55.013675 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 00:09:55.013684 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 16 00:09:55.013696 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
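Note: "Initializing machine ID from VM UUID" means systemd seeded /etc/machine-id from the hypervisor-provided product UUID on this first boot. As a rough approximation only (systemd's actual logic covers more virtualization types and does additional normalization), reading the DMI product UUID looks like this; it usually requires root:

# Rough approximation: read the SMBIOS/DMI product UUID that systemd can use as
# the machine-ID seed in a KVM guest. systemd's real code path is more involved.
from pathlib import Path

product_uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
print("DMI product UUID:", product_uuid)
print("candidate machine-id form:", product_uuid.replace("-", "").lower())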
May 16 00:09:55.013707 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 16 00:09:55.013720 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 00:09:55.013730 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 16 00:09:55.013740 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 00:09:55.013750 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 16 00:09:55.013761 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 00:09:55.013773 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 00:09:55.013787 systemd[1]: Reached target slices.target - Slice Units. May 16 00:09:55.013797 systemd[1]: Reached target swap.target - Swaps. May 16 00:09:55.013817 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 16 00:09:55.013827 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 16 00:09:55.013837 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 16 00:09:55.013847 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 16 00:09:55.013856 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 00:09:55.013866 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 00:09:55.013878 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 16 00:09:55.013888 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 16 00:09:55.013898 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 16 00:09:55.013907 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 16 00:09:55.013917 systemd[1]: Mounting media.mount - External Media Directory... May 16 00:09:55.013927 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:09:55.013938 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 16 00:09:55.013948 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 16 00:09:55.013957 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 16 00:09:55.013968 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 16 00:09:55.013978 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 00:09:55.013988 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 00:09:55.013999 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 16 00:09:55.014012 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 00:09:55.014022 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 00:09:55.014032 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 00:09:55.014042 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 16 00:09:55.014056 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 00:09:55.014065 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
May 16 00:09:55.014075 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 16 00:09:55.014086 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) May 16 00:09:55.014096 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 00:09:55.014107 kernel: fuse: init (API version 7.39) May 16 00:09:55.014118 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 00:09:55.014129 kernel: loop: module loaded May 16 00:09:55.014138 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 16 00:09:55.014161 systemd-journald[1186]: Collecting audit messages is disabled. May 16 00:09:55.014182 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 16 00:09:55.014192 systemd-journald[1186]: Journal started May 16 00:09:55.014214 systemd-journald[1186]: Runtime Journal (/run/log/journal/48bbcab454384b5f8e3810587bd08ffc) is 4.8M, max 38.4M, 33.6M free. May 16 00:09:55.030070 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 00:09:55.030147 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:09:55.041203 systemd[1]: Started systemd-journald.service - Journal Service. May 16 00:09:55.038884 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 16 00:09:55.039400 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 16 00:09:55.039943 systemd[1]: Mounted media.mount - External Media Directory. May 16 00:09:55.040748 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 16 00:09:55.041592 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 16 00:09:55.042442 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 16 00:09:55.044612 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 16 00:09:55.045285 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 00:09:55.051155 kernel: ACPI: bus type drm_connector registered May 16 00:09:55.046554 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 16 00:09:55.046678 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 16 00:09:55.048557 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:09:55.048673 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 00:09:55.049566 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:09:55.049681 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 00:09:55.053607 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 00:09:55.053722 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 00:09:55.054389 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 16 00:09:55.054507 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 16 00:09:55.055205 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:09:55.055518 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 00:09:55.056200 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
May 16 00:09:55.056930 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 16 00:09:55.057646 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 16 00:09:55.067001 systemd[1]: Reached target network-pre.target - Preparation for Network. May 16 00:09:55.076415 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 16 00:09:55.078160 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 16 00:09:55.078735 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 16 00:09:55.088595 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 16 00:09:55.093399 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 16 00:09:55.095523 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:09:55.103202 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 16 00:09:55.106322 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 16 00:09:55.113462 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 00:09:55.115732 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 16 00:09:55.121109 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 16 00:09:55.123513 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 16 00:09:55.137473 systemd-journald[1186]: Time spent on flushing to /var/log/journal/48bbcab454384b5f8e3810587bd08ffc is 36.821ms for 1101 entries. May 16 00:09:55.137473 systemd-journald[1186]: System Journal (/var/log/journal/48bbcab454384b5f8e3810587bd08ffc) is 8.0M, max 584.8M, 576.8M free. May 16 00:09:55.191544 systemd-journald[1186]: Received client request to flush runtime journal. May 16 00:09:55.145606 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 16 00:09:55.146323 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 16 00:09:55.156675 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. May 16 00:09:55.156687 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. May 16 00:09:55.162027 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 00:09:55.172589 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 16 00:09:55.174205 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 00:09:55.177612 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 00:09:55.189507 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 16 00:09:55.198492 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 16 00:09:55.201498 udevadm[1239]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 16 00:09:55.223018 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
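Note: journald reports the runtime journal size, the persistent System Journal limits, and how long the flush to /var/log/journal took. To check the same things on a running system, a small sketch using journalctl (assuming it is on PATH and the caller has sufficient privileges):

# Query journal disk usage and trigger a runtime-to-persistent flush, the
# operation whose timing journald reports above.
import subprocess

# Total disk usage of all journal files.
subprocess.run(["journalctl", "--disk-usage"], check=True)

# Ask journald to flush /run/log/journal into /var/log/journal.
subprocess.run(["journalctl", "--flush"], check=True)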
May 16 00:09:55.239516 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 16 00:09:55.250918 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. May 16 00:09:55.251156 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. May 16 00:09:55.255555 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 00:09:55.748765 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 16 00:09:55.766539 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 00:09:55.785070 systemd-udevd[1255]: Using default interface naming scheme 'v255'. May 16 00:09:55.811846 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 00:09:55.826688 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 00:09:55.855481 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 16 00:09:55.874895 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. May 16 00:09:55.923285 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 16 00:09:55.946391 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 16 00:09:55.982879 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. May 16 00:09:55.982897 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped. May 16 00:09:55.986844 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:09:55.987672 kernel: mousedev: PS/2 mouse device common for all mice May 16 00:09:55.987321 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 00:09:55.993995 kernel: ACPI: button: Power Button [PWRF] May 16 00:09:55.993524 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 00:09:55.994413 systemd-networkd[1265]: lo: Link UP May 16 00:09:55.995480 systemd-networkd[1265]: lo: Gained carrier May 16 00:09:55.998467 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 00:09:56.002085 systemd-networkd[1265]: Enumeration completed May 16 00:09:56.007382 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1260) May 16 00:09:56.005271 systemd-networkd[1265]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 00:09:56.007628 systemd-networkd[1265]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 00:09:56.010658 systemd-networkd[1265]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 00:09:56.010747 systemd-networkd[1265]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 00:09:56.011806 systemd-networkd[1265]: eth0: Link UP May 16 00:09:56.012214 systemd-networkd[1265]: eth0: Gained carrier May 16 00:09:56.012268 systemd-networkd[1265]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 00:09:56.013053 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
May 16 00:09:56.013592 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 16 00:09:56.013628 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 16 00:09:56.013675 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:09:56.013862 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 00:09:56.014743 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:09:56.015446 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 00:09:56.020156 systemd-networkd[1265]: eth1: Link UP May 16 00:09:56.020160 systemd-networkd[1265]: eth1: Gained carrier May 16 00:09:56.020178 systemd-networkd[1265]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 00:09:56.034587 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 16 00:09:56.035660 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:09:56.035832 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 00:09:56.039851 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:09:56.042822 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 00:09:56.046445 systemd-networkd[1265]: eth1: DHCPv4 address 10.0.0.4/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 00:09:56.069445 systemd-networkd[1265]: eth0: DHCPv4 address 65.108.51.217/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 16 00:09:56.076152 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 16 00:09:56.076400 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 16 00:09:56.076518 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 16 00:09:56.080386 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 May 16 00:09:56.084314 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console May 16 00:09:56.086655 kernel: Console: switching to colour dummy device 80x25 May 16 00:09:56.086679 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 16 00:09:56.086690 kernel: [drm] features: -context_init May 16 00:09:56.087702 kernel: [drm] number of scanouts: 1 May 16 00:09:56.088376 kernel: [drm] number of cap sets: 0 May 16 00:09:56.096357 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 May 16 00:09:56.099379 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 May 16 00:09:56.108895 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device May 16 00:09:56.108941 kernel: Console: switching to colour frame buffer device 160x50 May 16 00:09:56.116426 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 16 00:09:56.130360 kernel: EDAC MC: Ver: 3.0.0 May 16 00:09:56.141651 systemd-networkd[1265]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 00:09:56.145972 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
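Note: networkd again reports that eth0 and eth1 were matched by the catch-all /usr/lib/systemd/network/zz-default.network "based on potentially unpredictable interface name", and both then obtain DHCPv4 addresses. One common way to avoid relying on kernel-assigned names is to match on a stable attribute such as the MAC address; purely as an illustration, the sketch below writes such a .network unit (the filename and MAC are placeholders for the example):

# Illustration: write a .network unit that matches on a stable attribute (MAC)
# rather than an interface name. The MAC address and filename are placeholders.
from pathlib import Path
import textwrap

unit = textwrap.dedent("""\
    [Match]
    MACAddress=aa:bb:cc:dd:ee:ff

    [Network]
    DHCP=yes
""")

Path("/etc/systemd/network/10-uplink.network").write_text(unit)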
May 16 00:09:56.151149 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:09:56.151311 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 16 00:09:56.157650 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:09:56.172677 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:09:56.172952 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:09:56.178611 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:09:56.241012 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:09:56.296973 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 16 00:09:56.309723 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 16 00:09:56.326394 lvm[1323]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 00:09:56.369181 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 16 00:09:56.371128 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 00:09:56.378615 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 16 00:09:56.387760 lvm[1326]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 00:09:56.420177 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 16 00:09:56.423151 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 16 00:09:56.424800 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 16 00:09:56.424866 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 00:09:56.424985 systemd[1]: Reached target machines.target - Containers. May 16 00:09:56.427441 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 16 00:09:56.433538 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 16 00:09:56.436592 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 16 00:09:56.438462 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 00:09:56.441561 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 16 00:09:56.455005 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 16 00:09:56.461560 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 16 00:09:56.464702 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 16 00:09:56.481442 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 16 00:09:56.491599 kernel: loop0: detected capacity change from 0 to 138184 May 16 00:09:56.505760 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
May 16 00:09:56.510155 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 16 00:09:56.540473 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 16 00:09:56.561429 kernel: loop1: detected capacity change from 0 to 8 May 16 00:09:56.581484 kernel: loop2: detected capacity change from 0 to 140992 May 16 00:09:56.624647 kernel: loop3: detected capacity change from 0 to 221472 May 16 00:09:56.681402 kernel: loop4: detected capacity change from 0 to 138184 May 16 00:09:56.706711 kernel: loop5: detected capacity change from 0 to 8 May 16 00:09:56.712278 kernel: loop6: detected capacity change from 0 to 140992 May 16 00:09:56.738107 kernel: loop7: detected capacity change from 0 to 221472 May 16 00:09:56.757699 (sd-merge)[1348]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. May 16 00:09:56.758265 (sd-merge)[1348]: Merged extensions into '/usr'. May 16 00:09:56.766757 systemd[1]: Reloading requested from client PID 1334 ('systemd-sysext') (unit systemd-sysext.service)... May 16 00:09:56.766777 systemd[1]: Reloading... May 16 00:09:56.840430 zram_generator::config[1376]: No configuration found. May 16 00:09:56.910449 ldconfig[1330]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 16 00:09:56.966176 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:09:57.030965 systemd[1]: Reloading finished in 263 ms. May 16 00:09:57.047963 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 16 00:09:57.052799 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 16 00:09:57.067580 systemd[1]: Starting ensure-sysext.service... May 16 00:09:57.072432 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 00:09:57.082502 systemd[1]: Reloading requested from client PID 1426 ('systemctl') (unit ensure-sysext.service)... May 16 00:09:57.082889 systemd[1]: Reloading... May 16 00:09:57.098709 systemd-tmpfiles[1427]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 16 00:09:57.099010 systemd-tmpfiles[1427]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 16 00:09:57.100277 systemd-tmpfiles[1427]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 16 00:09:57.100708 systemd-tmpfiles[1427]: ACLs are not supported, ignoring. May 16 00:09:57.100860 systemd-tmpfiles[1427]: ACLs are not supported, ignoring. May 16 00:09:57.104097 systemd-tmpfiles[1427]: Detected autofs mount point /boot during canonicalization of boot. May 16 00:09:57.104196 systemd-tmpfiles[1427]: Skipping /boot May 16 00:09:57.111310 systemd-tmpfiles[1427]: Detected autofs mount point /boot during canonicalization of boot. May 16 00:09:57.111434 systemd-tmpfiles[1427]: Skipping /boot May 16 00:09:57.136576 systemd-networkd[1265]: eth1: Gained IPv6LL May 16 00:09:57.160553 zram_generator::config[1457]: No configuration found. May 16 00:09:57.272899 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
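Note: sd-merge lists the sysext images merged into /usr (containerd-flatcar, docker-flatcar, kubernetes, oem-hetzner), which is likely what the preceding loop-device capacity changes correspond to. To inspect the merge state on a running system, assuming the systemd-sysext tool that ships with this systemd version:

# Inspect merged system extensions: "list" shows the discovered images and
# "status" shows which hierarchies currently have extensions merged.
import subprocess

subprocess.run(["systemd-sysext", "list"], check=True)
subprocess.run(["systemd-sysext", "status"], check=True)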
May 16 00:09:57.345734 systemd[1]: Reloading finished in 262 ms. May 16 00:09:57.360679 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 16 00:09:57.361872 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 00:09:57.376513 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 00:09:57.389462 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 16 00:09:57.397219 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 16 00:09:57.411487 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 00:09:57.418621 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 16 00:09:57.429201 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:09:57.430783 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 00:09:57.436780 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 00:09:57.447441 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 00:09:57.450779 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 00:09:57.451415 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 00:09:57.453662 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:09:57.457610 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:09:57.457762 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 00:09:57.460712 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:09:57.460865 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 00:09:57.464010 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 16 00:09:57.469556 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:09:57.469775 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 00:09:57.480621 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 00:09:57.489448 augenrules[1544]: No rules May 16 00:09:57.495094 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 00:09:57.497964 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 00:09:57.498163 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:09:57.508680 systemd[1]: audit-rules.service: Deactivated successfully. May 16 00:09:57.508947 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 00:09:57.514672 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 16 00:09:57.516896 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
May 16 00:09:57.519880 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 00:09:57.520962 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:09:57.521116 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 00:09:57.524590 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:09:57.524753 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 00:09:57.538943 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 16 00:09:57.546122 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 16 00:09:57.548168 systemd-resolved[1517]: Positive Trust Anchors: May 16 00:09:57.548186 systemd-resolved[1517]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 00:09:57.548217 systemd-resolved[1517]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 00:09:57.554681 systemd[1]: Finished ensure-sysext.service. May 16 00:09:57.559017 systemd-resolved[1517]: Using system hostname 'ci-4152-2-3-n-e053cdada0'. May 16 00:09:57.560965 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:09:57.572496 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 00:09:57.573172 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 00:09:57.578527 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 00:09:57.587484 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 00:09:57.592539 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 00:09:57.601840 augenrules[1564]: /sbin/augenrules: No change May 16 00:09:57.605507 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 00:09:57.606313 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 00:09:57.613508 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 16 00:09:57.618577 augenrules[1587]: No rules May 16 00:09:57.625585 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 16 00:09:57.626237 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 16 00:09:57.626275 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:09:57.626526 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 00:09:57.629502 systemd[1]: audit-rules.service: Deactivated successfully. 
May 16 00:09:57.629746 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 00:09:57.635429 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:09:57.635670 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 00:09:57.636962 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 00:09:57.637089 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 00:09:57.639756 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:09:57.639967 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 00:09:57.641804 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:09:57.641976 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 00:09:57.644929 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 16 00:09:57.649718 systemd-networkd[1265]: eth0: Gained IPv6LL May 16 00:09:57.654506 systemd[1]: Reached target network.target - Network. May 16 00:09:57.656169 systemd[1]: Reached target network-online.target - Network is Online. May 16 00:09:57.659275 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 00:09:57.659915 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:09:57.659968 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 16 00:09:57.703889 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 16 00:09:57.705804 systemd[1]: Reached target sysinit.target - System Initialization. May 16 00:09:57.706273 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 16 00:09:57.706659 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 16 00:09:57.707908 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 16 00:09:57.710504 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 16 00:09:57.710643 systemd[1]: Reached target paths.target - Path Units. May 16 00:09:57.712462 systemd[1]: Reached target time-set.target - System Time Set. May 16 00:09:57.714496 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 16 00:09:57.716399 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 16 00:09:57.718174 systemd[1]: Reached target timers.target - Timer Units. May 16 00:09:57.721391 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 16 00:09:57.725510 systemd[1]: Starting docker.socket - Docker Socket for the API... May 16 00:09:57.732884 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 16 00:09:57.734232 systemd-timesyncd[1589]: Contacted time server 167.235.69.67:123 (0.flatcar.pool.ntp.org). May 16 00:09:57.734295 systemd-timesyncd[1589]: Initial clock synchronization to Fri 2025-05-16 00:09:57.732150 UTC. May 16 00:09:57.736485 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 16 00:09:57.738871 systemd[1]: Reached target sockets.target - Socket Units. May 16 00:09:57.739308 systemd[1]: Reached target basic.target - Basic System. 
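systemd-timesyncd reaches 0.flatcar.pool.ntp.org (167.235.69.67) above and steps the clock by about two milliseconds. For illustration only, a bare-bones SNTP query of the same general kind can be written as below; this is a simplified client-mode exchange with no error handling, and the server name is simply taken from the log line:

import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def sntp_time(server="0.flatcar.pool.ntp.org", port=123, timeout=5.0):
    # 48-byte request; 0x1B = LI 0, version 3, mode 3 (client)
    packet = b"\x1b" + 47 * b"\0"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, port))
        data, _ = sock.recvfrom(48)
    # Transmit Timestamp: seconds field occupies bytes 40..43 of the reply
    secs = struct.unpack("!I", data[40:44])[0]
    return secs - NTP_EPOCH_OFFSET

if __name__ == "__main__":
    print(time.ctime(sntp_time()))
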
May 16 00:09:57.739881 systemd[1]: System is tainted: cgroupsv1 May 16 00:09:57.739918 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 16 00:09:57.739937 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 16 00:09:57.742863 systemd[1]: Starting containerd.service - containerd container runtime... May 16 00:09:57.750754 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 16 00:09:57.758486 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 16 00:09:57.771413 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 16 00:09:57.776725 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 16 00:09:57.779709 jq[1614]: false May 16 00:09:57.783276 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 16 00:09:57.789199 coreos-metadata[1610]: May 16 00:09:57.789 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 May 16 00:09:57.792163 coreos-metadata[1610]: May 16 00:09:57.790 INFO Fetch successful May 16 00:09:57.792163 coreos-metadata[1610]: May 16 00:09:57.790 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 May 16 00:09:57.792163 coreos-metadata[1610]: May 16 00:09:57.791 INFO Fetch successful May 16 00:09:57.790326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:09:57.799664 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 16 00:09:57.812467 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 16 00:09:57.817687 dbus-daemon[1611]: [system] SELinux support is enabled May 16 00:09:57.822850 extend-filesystems[1615]: Found loop4 May 16 00:09:57.827944 extend-filesystems[1615]: Found loop5 May 16 00:09:57.827944 extend-filesystems[1615]: Found loop6 May 16 00:09:57.827944 extend-filesystems[1615]: Found loop7 May 16 00:09:57.827944 extend-filesystems[1615]: Found sda May 16 00:09:57.827944 extend-filesystems[1615]: Found sda1 May 16 00:09:57.827944 extend-filesystems[1615]: Found sda2 May 16 00:09:57.827944 extend-filesystems[1615]: Found sda3 May 16 00:09:57.827944 extend-filesystems[1615]: Found usr May 16 00:09:57.827944 extend-filesystems[1615]: Found sda4 May 16 00:09:57.827944 extend-filesystems[1615]: Found sda6 May 16 00:09:57.827944 extend-filesystems[1615]: Found sda7 May 16 00:09:57.827944 extend-filesystems[1615]: Found sda9 May 16 00:09:57.827944 extend-filesystems[1615]: Checking size of /dev/sda9 May 16 00:09:57.824467 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. May 16 00:09:57.831175 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 16 00:09:57.856620 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 16 00:09:57.865427 extend-filesystems[1615]: Resized partition /dev/sda9 May 16 00:09:57.874552 systemd[1]: Starting systemd-logind.service - User Login Management... May 16 00:09:57.875933 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 16 00:09:57.887367 extend-filesystems[1647]: resize2fs 1.47.1 (20-May-2024) May 16 00:09:57.885590 systemd[1]: Starting update-engine.service - Update Engine... 
May 16 00:09:57.900378 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks May 16 00:09:57.905567 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 16 00:09:57.908331 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1264) May 16 00:09:57.914157 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 16 00:09:57.931954 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 16 00:09:57.936656 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 16 00:09:57.937557 systemd[1]: motdgen.service: Deactivated successfully. May 16 00:09:57.937776 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 16 00:09:57.948310 jq[1652]: true May 16 00:09:57.946444 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 16 00:09:57.947279 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 16 00:09:57.954572 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 16 00:09:57.974928 jq[1660]: true May 16 00:09:57.981846 update_engine[1650]: I20250516 00:09:57.979905 1650 main.cc:92] Flatcar Update Engine starting May 16 00:09:57.995807 (ntainerd)[1661]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 16 00:09:58.008612 update_engine[1650]: I20250516 00:09:58.006649 1650 update_check_scheduler.cc:74] Next update check in 3m35s May 16 00:09:58.024177 systemd[1]: Started update-engine.service - Update Engine. May 16 00:09:58.042972 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 16 00:09:58.043256 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 16 00:09:58.044871 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 16 00:09:58.044885 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 16 00:09:58.049740 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 16 00:09:58.050762 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 16 00:09:58.066057 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 16 00:09:58.069171 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 16 00:09:58.086541 systemd-logind[1645]: New seat seat0. May 16 00:09:58.097037 systemd-logind[1645]: Watching system buttons on /dev/input/event2 (Power Button) May 16 00:09:58.097055 systemd-logind[1645]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 16 00:09:58.097273 systemd[1]: Started systemd-logind.service - User Login Management. 
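The EXT4-fs line above announces an on-line resize of the root filesystem from 1617920 to 9393147 blocks of 4 KiB, i.e. from roughly 6.2 GiB to roughly 35.8 GiB; the resize2fs completion messages follow below. A quick check of that arithmetic, using only the numbers from the log:

BLOCK_SIZE = 4096          # ext4 block size, reported as "(4k)"
OLD_BLOCKS = 1_617_920     # size before the resize (EXT4-fs kernel line)
NEW_BLOCKS = 9_393_147     # size after resize2fs finishes

GIB = 1024 ** 3
print(f"before: {OLD_BLOCKS * BLOCK_SIZE / GIB:.2f} GiB")  # ~6.17 GiB
print(f"after:  {NEW_BLOCKS * BLOCK_SIZE / GIB:.2f} GiB")  # ~35.83 GiB
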
May 16 00:09:58.183972 sshd_keygen[1653]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 16 00:09:58.185642 bash[1699]: Updated "/home/core/.ssh/authorized_keys" May 16 00:09:58.186286 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 16 00:09:58.196807 systemd[1]: Starting sshkeys.service... May 16 00:09:58.219238 locksmithd[1690]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 16 00:09:58.220940 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 16 00:09:58.231538 kernel: EXT4-fs (sda9): resized filesystem to 9393147 May 16 00:09:58.232678 systemd[1]: Starting issuegen.service - Generate /run/issue... May 16 00:09:58.245020 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 16 00:09:58.255733 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 16 00:09:58.279769 coreos-metadata[1726]: May 16 00:09:58.276 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 May 16 00:09:58.279769 coreos-metadata[1726]: May 16 00:09:58.279 INFO Fetch successful May 16 00:09:58.257339 systemd[1]: issuegen.service: Deactivated successfully. May 16 00:09:58.286107 extend-filesystems[1647]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 16 00:09:58.286107 extend-filesystems[1647]: old_desc_blocks = 1, new_desc_blocks = 5 May 16 00:09:58.286107 extend-filesystems[1647]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. May 16 00:09:58.260139 systemd[1]: Finished issuegen.service - Generate /run/issue. May 16 00:09:58.297802 extend-filesystems[1615]: Resized filesystem in /dev/sda9 May 16 00:09:58.297802 extend-filesystems[1615]: Found sr0 May 16 00:09:58.270099 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 16 00:09:58.290229 unknown[1726]: wrote ssh authorized keys file for user: core May 16 00:09:58.291561 systemd[1]: extend-filesystems.service: Deactivated successfully. May 16 00:09:58.291840 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 16 00:09:58.326744 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 16 00:09:58.340746 systemd[1]: Started getty@tty1.service - Getty on tty1. May 16 00:09:58.346452 update-ssh-keys[1735]: Updated "/home/core/.ssh/authorized_keys" May 16 00:09:58.355872 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 16 00:09:58.357980 systemd[1]: Reached target getty.target - Login Prompts. May 16 00:09:58.359133 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 16 00:09:58.367033 systemd[1]: Finished sshkeys.service. May 16 00:09:58.378362 containerd[1661]: time="2025-05-16T00:09:58.378128118Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 16 00:09:58.402737 containerd[1661]: time="2025-05-16T00:09:58.402508312Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 16 00:09:58.404051 containerd[1661]: time="2025-05-16T00:09:58.404028146Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 16 00:09:58.404125 containerd[1661]: time="2025-05-16T00:09:58.404114167Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 16 00:09:58.404169 containerd[1661]: time="2025-05-16T00:09:58.404160709Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 16 00:09:58.404327 containerd[1661]: time="2025-05-16T00:09:58.404315079Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 16 00:09:58.405153 containerd[1661]: time="2025-05-16T00:09:58.404387557Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 16 00:09:58.405153 containerd[1661]: time="2025-05-16T00:09:58.404447692Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:09:58.405153 containerd[1661]: time="2025-05-16T00:09:58.404459442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 16 00:09:58.405153 containerd[1661]: time="2025-05-16T00:09:58.404643735Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:09:58.405153 containerd[1661]: time="2025-05-16T00:09:58.404655346Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 16 00:09:58.405153 containerd[1661]: time="2025-05-16T00:09:58.404666666Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:09:58.405153 containerd[1661]: time="2025-05-16T00:09:58.404675191Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 16 00:09:58.405153 containerd[1661]: time="2025-05-16T00:09:58.404734294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 16 00:09:58.405153 containerd[1661]: time="2025-05-16T00:09:58.404904743Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 16 00:09:58.405153 containerd[1661]: time="2025-05-16T00:09:58.405023972Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:09:58.405153 containerd[1661]: time="2025-05-16T00:09:58.405035421Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 16 00:09:58.405336 containerd[1661]: time="2025-05-16T00:09:58.405099614Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 16 00:09:58.405336 containerd[1661]: time="2025-05-16T00:09:58.405136810Z" level=info msg="metadata content store policy set" policy=shared May 16 00:09:58.410442 containerd[1661]: time="2025-05-16T00:09:58.410422335Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 16 00:09:58.410541 containerd[1661]: time="2025-05-16T00:09:58.410530363Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 16 00:09:58.410590 containerd[1661]: time="2025-05-16T00:09:58.410580091Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 16 00:09:58.410633 containerd[1661]: time="2025-05-16T00:09:58.410625050Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 16 00:09:58.410671 containerd[1661]: time="2025-05-16T00:09:58.410663787Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 16 00:09:58.410822 containerd[1661]: time="2025-05-16T00:09:58.410809102Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 16 00:09:58.412844 containerd[1661]: time="2025-05-16T00:09:58.412825827Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 16 00:09:58.412998 containerd[1661]: time="2025-05-16T00:09:58.412984855Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 16 00:09:58.413052 containerd[1661]: time="2025-05-16T00:09:58.413043228Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 16 00:09:58.413094 containerd[1661]: time="2025-05-16T00:09:58.413086173Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 16 00:09:58.413137 containerd[1661]: time="2025-05-16T00:09:58.413128137Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 16 00:09:58.413176 containerd[1661]: time="2025-05-16T00:09:58.413168468Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 16 00:09:58.413217 containerd[1661]: time="2025-05-16T00:09:58.413209280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 16 00:09:58.413265 containerd[1661]: time="2025-05-16T00:09:58.413255060Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 16 00:09:58.413307 containerd[1661]: time="2025-05-16T00:09:58.413299097Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 16 00:09:58.413365 containerd[1661]: time="2025-05-16T00:09:58.413336512Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 16 00:09:58.413418 containerd[1661]: time="2025-05-16T00:09:58.413409039Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 16 00:09:58.413461 containerd[1661]: time="2025-05-16T00:09:58.413452576Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 May 16 00:09:58.413507 containerd[1661]: time="2025-05-16T00:09:58.413498997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 16 00:09:58.413556 containerd[1661]: time="2025-05-16T00:09:58.413546861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 16 00:09:58.413601 containerd[1661]: time="2025-05-16T00:09:58.413593483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 16 00:09:58.413647 containerd[1661]: time="2025-05-16T00:09:58.413638933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 16 00:09:58.413686 containerd[1661]: time="2025-05-16T00:09:58.413678722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 16 00:09:58.413733 containerd[1661]: time="2025-05-16T00:09:58.413724923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 16 00:09:58.413783 containerd[1661]: time="2025-05-16T00:09:58.413773678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 16 00:09:58.413827 containerd[1661]: time="2025-05-16T00:09:58.413818818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 16 00:09:58.413866 containerd[1661]: time="2025-05-16T00:09:58.413858687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 16 00:09:58.413906 containerd[1661]: time="2025-05-16T00:09:58.413898397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 16 00:09:58.414465 containerd[1661]: time="2025-05-16T00:09:58.413935752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 16 00:09:58.414465 containerd[1661]: time="2025-05-16T00:09:58.413947603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 16 00:09:58.414465 containerd[1661]: time="2025-05-16T00:09:58.413959785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 16 00:09:58.414465 containerd[1661]: time="2025-05-16T00:09:58.413972036Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 16 00:09:58.414465 containerd[1661]: time="2025-05-16T00:09:58.413989998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 16 00:09:58.414465 containerd[1661]: time="2025-05-16T00:09:58.414002550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 16 00:09:58.414465 containerd[1661]: time="2025-05-16T00:09:58.414011836Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 16 00:09:58.414465 containerd[1661]: time="2025-05-16T00:09:58.414051024Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 16 00:09:58.414465 containerd[1661]: time="2025-05-16T00:09:58.414066431Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 16 00:09:58.414465 containerd[1661]: time="2025-05-16T00:09:58.414076399Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 16 00:09:58.414465 containerd[1661]: time="2025-05-16T00:09:58.414088170Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 16 00:09:58.414465 containerd[1661]: time="2025-05-16T00:09:58.414097096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 16 00:09:58.414465 containerd[1661]: time="2025-05-16T00:09:58.414109978Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 16 00:09:58.414465 containerd[1661]: time="2025-05-16T00:09:58.414123090Z" level=info msg="NRI interface is disabled by configuration." May 16 00:09:58.414841 containerd[1661]: time="2025-05-16T00:09:58.414134972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 16 00:09:58.414862 containerd[1661]: time="2025-05-16T00:09:58.414414322Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 16 00:09:58.414862 containerd[1661]: time="2025-05-16T00:09:58.414454873Z" level=info msg="Connect containerd service" May 16 00:09:58.414862 containerd[1661]: time="2025-05-16T00:09:58.414484404Z" level=info msg="using legacy CRI server" May 16 00:09:58.414862 containerd[1661]: time="2025-05-16T00:09:58.414490915Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 16 00:09:58.414862 containerd[1661]: time="2025-05-16T00:09:58.414590701Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 16 00:09:58.415797 containerd[1661]: time="2025-05-16T00:09:58.415489405Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:09:58.416572 containerd[1661]: time="2025-05-16T00:09:58.416291362Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 16 00:09:58.416572 containerd[1661]: time="2025-05-16T00:09:58.416335188Z" level=info msg=serving... address=/run/containerd/containerd.sock May 16 00:09:58.416572 containerd[1661]: time="2025-05-16T00:09:58.416413005Z" level=info msg="Start subscribing containerd event" May 16 00:09:58.416639 containerd[1661]: time="2025-05-16T00:09:58.416617183Z" level=info msg="Start recovering state" May 16 00:09:58.416693 containerd[1661]: time="2025-05-16T00:09:58.416671378Z" level=info msg="Start event monitor" May 16 00:09:58.416720 containerd[1661]: time="2025-05-16T00:09:58.416700288Z" level=info msg="Start snapshots syncer" May 16 00:09:58.416720 containerd[1661]: time="2025-05-16T00:09:58.416709454Z" level=info msg="Start cni network conf syncer for default" May 16 00:09:58.416720 containerd[1661]: time="2025-05-16T00:09:58.416717869Z" level=info msg="Start streaming server" May 16 00:09:58.416792 containerd[1661]: time="2025-05-16T00:09:58.416775831Z" level=info msg="containerd successfully booted in 0.039641s" May 16 00:09:58.418560 systemd[1]: Started containerd.service - containerd container runtime. May 16 00:09:59.413538 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:09:59.418140 systemd[1]: Reached target multi-user.target - Multi-User System. May 16 00:09:59.422461 systemd[1]: Startup finished in 6.739s (kernel) + 5.458s (userspace) = 12.197s. May 16 00:09:59.427147 (kubelet)[1758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:10:00.167960 kubelet[1758]: E0516 00:10:00.167826 1758 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:10:00.171731 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:10:00.172720 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:10:10.274190 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
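From here the kubelet enters a restart loop: it exits immediately because /var/lib/kubelet/config.yaml does not exist yet, and systemd reschedules it roughly every ten seconds (the restart counter climbs to 11 before the node is provisioned further below). On kubeadm-provisioned nodes that file is normally written by kubeadm init or kubeadm join, so these failures are expected until provisioning runs; the earlier containerd warning about an empty /etc/cni/net.d is typically resolved at the same stage, once a network plugin installs its CNI config. A small sketch of the check the kubelet is effectively failing on (path taken from the error message; purely illustrative):

from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path from the kubelet error above

if KUBELET_CONFIG.is_file():
    print(f"{KUBELET_CONFIG} present ({KUBELET_CONFIG.stat().st_size} bytes); kubelet can load it")
else:
    # Matches the failure mode in the log: the kubelet exits with status 1 and
    # systemd keeps restarting it until provisioning creates the file.
    print(f"{KUBELET_CONFIG} missing; expect kubelet.service to keep restarting")
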
May 16 00:10:10.280978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:10:10.426573 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:10:10.440866 (kubelet)[1782]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:10:10.505477 kubelet[1782]: E0516 00:10:10.505386 1782 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:10:10.510925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:10:10.511221 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:10:20.523934 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 16 00:10:20.530584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:10:20.686530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:10:20.689117 (kubelet)[1803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:10:20.757503 kubelet[1803]: E0516 00:10:20.757446 1803 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:10:20.761010 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:10:20.761924 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:10:30.773971 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 16 00:10:30.781677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:10:30.906642 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:10:30.910171 (kubelet)[1823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:10:30.970128 kubelet[1823]: E0516 00:10:30.970053 1823 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:10:30.973866 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:10:30.974156 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:10:41.024093 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 16 00:10:41.030580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:10:41.157476 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 16 00:10:41.161529 (kubelet)[1842]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:10:41.192370 kubelet[1842]: E0516 00:10:41.192283 1842 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:10:41.195403 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:10:41.195544 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:10:43.271318 update_engine[1650]: I20250516 00:10:43.271159 1650 update_attempter.cc:509] Updating boot flags... May 16 00:10:43.338388 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1860) May 16 00:10:43.406988 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1859) May 16 00:10:43.453447 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1859) May 16 00:10:51.273682 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 16 00:10:51.282469 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:10:51.424890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:10:51.439907 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:10:51.507206 kubelet[1884]: E0516 00:10:51.507116 1884 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:10:51.511199 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:10:51.511514 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:11:01.523671 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. May 16 00:11:01.536619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:11:01.641527 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:11:01.644603 (kubelet)[1904]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:11:01.696338 kubelet[1904]: E0516 00:11:01.696210 1904 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:11:01.699275 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:11:01.699642 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:11:11.774024 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. May 16 00:11:11.780736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 16 00:11:11.944531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:11:11.947656 (kubelet)[1925]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:11:11.993889 kubelet[1925]: E0516 00:11:11.993791 1925 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:11:11.995221 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:11:11.995555 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:11:22.023903 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. May 16 00:11:22.030572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:11:22.161491 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:11:22.164320 (kubelet)[1945]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:11:22.202622 kubelet[1945]: E0516 00:11:22.202514 1945 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:11:22.205411 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:11:22.205584 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:11:32.273984 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. May 16 00:11:32.280999 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:11:32.429462 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:11:32.432573 (kubelet)[1962]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:11:32.461436 kubelet[1962]: E0516 00:11:32.461374 1962 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:11:32.464927 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:11:32.465075 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:11:42.524046 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. May 16 00:11:42.531075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:11:42.695429 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 16 00:11:42.709954 (kubelet)[1984]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:11:42.778205 kubelet[1984]: E0516 00:11:42.778056 1984 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:11:42.781875 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:11:42.782885 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:11:48.491039 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 16 00:11:48.496741 systemd[1]: Started sshd@0-65.108.51.217:22-139.178.68.195:47380.service - OpenSSH per-connection server daemon (139.178.68.195:47380). May 16 00:11:49.507593 sshd[1993]: Accepted publickey for core from 139.178.68.195 port 47380 ssh2: RSA SHA256:rqQn8f4zWKKVvbvoRTiGoZH0AJsG/+I2ZavTEdSdQRw May 16 00:11:49.509772 sshd-session[1993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:11:49.528206 systemd-logind[1645]: New session 1 of user core. May 16 00:11:49.528969 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 16 00:11:49.536778 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 16 00:11:49.566564 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 16 00:11:49.579866 systemd[1]: Starting user@500.service - User Manager for UID 500... May 16 00:11:49.588904 (systemd)[1999]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 16 00:11:49.730198 systemd[1999]: Queued start job for default target default.target. May 16 00:11:49.730605 systemd[1999]: Created slice app.slice - User Application Slice. May 16 00:11:49.730631 systemd[1999]: Reached target paths.target - Paths. May 16 00:11:49.730645 systemd[1999]: Reached target timers.target - Timers. May 16 00:11:49.737452 systemd[1999]: Starting dbus.socket - D-Bus User Message Bus Socket... May 16 00:11:49.742582 systemd[1999]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 16 00:11:49.742641 systemd[1999]: Reached target sockets.target - Sockets. May 16 00:11:49.742657 systemd[1999]: Reached target basic.target - Basic System. May 16 00:11:49.742694 systemd[1999]: Reached target default.target - Main User Target. May 16 00:11:49.742725 systemd[1999]: Startup finished in 144ms. May 16 00:11:49.743012 systemd[1]: Started user@500.service - User Manager for UID 500. May 16 00:11:49.745667 systemd[1]: Started session-1.scope - Session 1 of User core. May 16 00:11:50.431636 systemd[1]: Started sshd@1-65.108.51.217:22-139.178.68.195:47386.service - OpenSSH per-connection server daemon (139.178.68.195:47386). May 16 00:11:51.412923 sshd[2011]: Accepted publickey for core from 139.178.68.195 port 47386 ssh2: RSA SHA256:rqQn8f4zWKKVvbvoRTiGoZH0AJsG/+I2ZavTEdSdQRw May 16 00:11:51.414966 sshd-session[2011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:11:51.422844 systemd-logind[1645]: New session 2 of user core. May 16 00:11:51.432904 systemd[1]: Started session-2.scope - Session 2 of User core. 
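sshd identifies the accepted key above only by its fingerprint (RSA SHA256:rqQn8f4...). That value is the unpadded base64 encoding of the SHA-256 digest of the raw public-key blob, the same string ssh-keygen -lf prints. A minimal reproduction, purely illustrative; it assumes an OpenSSH public-key line without an options prefix, for example the first line of /home/core/.ssh/authorized_keys updated earlier in this log:

import base64
import hashlib
import sys

def ssh_fingerprint(pubkey_line: str) -> str:
    # An OpenSSH public key line looks like: "<type> <base64-blob> [comment]"
    blob_b64 = pubkey_line.split()[1]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    # OpenSSH prints the digest base64-encoded with the trailing '=' padding stripped
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    with open(sys.argv[1]) as f:   # e.g. /home/core/.ssh/authorized_keys
        print(ssh_fingerprint(f.readline()))
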
May 16 00:11:52.087329 sshd[2014]: Connection closed by 139.178.68.195 port 47386 May 16 00:11:52.088182 sshd-session[2011]: pam_unix(sshd:session): session closed for user core May 16 00:11:52.093523 systemd[1]: sshd@1-65.108.51.217:22-139.178.68.195:47386.service: Deactivated successfully. May 16 00:11:52.094446 systemd-logind[1645]: Session 2 logged out. Waiting for processes to exit. May 16 00:11:52.096838 systemd[1]: session-2.scope: Deactivated successfully. May 16 00:11:52.098821 systemd-logind[1645]: Removed session 2. May 16 00:11:52.254780 systemd[1]: Started sshd@2-65.108.51.217:22-139.178.68.195:47394.service - OpenSSH per-connection server daemon (139.178.68.195:47394). May 16 00:11:53.023900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. May 16 00:11:53.034924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:11:53.189476 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:11:53.192218 (kubelet)[2033]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:11:53.234488 kubelet[2033]: E0516 00:11:53.234408 2033 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:11:53.237265 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:11:53.237508 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:11:53.250273 sshd[2019]: Accepted publickey for core from 139.178.68.195 port 47394 ssh2: RSA SHA256:rqQn8f4zWKKVvbvoRTiGoZH0AJsG/+I2ZavTEdSdQRw May 16 00:11:53.251863 sshd-session[2019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:11:53.257801 systemd-logind[1645]: New session 3 of user core. May 16 00:11:53.264706 systemd[1]: Started session-3.scope - Session 3 of User core. May 16 00:11:53.918942 sshd[2042]: Connection closed by 139.178.68.195 port 47394 May 16 00:11:53.919782 sshd-session[2019]: pam_unix(sshd:session): session closed for user core May 16 00:11:53.924168 systemd[1]: sshd@2-65.108.51.217:22-139.178.68.195:47394.service: Deactivated successfully. May 16 00:11:53.925760 systemd-logind[1645]: Session 3 logged out. Waiting for processes to exit. May 16 00:11:53.926416 systemd[1]: session-3.scope: Deactivated successfully. May 16 00:11:53.927441 systemd-logind[1645]: Removed session 3. May 16 00:11:54.084070 systemd[1]: Started sshd@3-65.108.51.217:22-139.178.68.195:56242.service - OpenSSH per-connection server daemon (139.178.68.195:56242). May 16 00:11:55.066099 sshd[2047]: Accepted publickey for core from 139.178.68.195 port 56242 ssh2: RSA SHA256:rqQn8f4zWKKVvbvoRTiGoZH0AJsG/+I2ZavTEdSdQRw May 16 00:11:55.068451 sshd-session[2047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:11:55.076563 systemd-logind[1645]: New session 4 of user core. May 16 00:11:55.085051 systemd[1]: Started session-4.scope - Session 4 of User core. 
May 16 00:11:55.746690 sshd[2050]: Connection closed by 139.178.68.195 port 56242 May 16 00:11:55.747935 sshd-session[2047]: pam_unix(sshd:session): session closed for user core May 16 00:11:55.753303 systemd[1]: sshd@3-65.108.51.217:22-139.178.68.195:56242.service: Deactivated successfully. May 16 00:11:55.758805 systemd[1]: session-4.scope: Deactivated successfully. May 16 00:11:55.760140 systemd-logind[1645]: Session 4 logged out. Waiting for processes to exit. May 16 00:11:55.761888 systemd-logind[1645]: Removed session 4. May 16 00:11:55.913918 systemd[1]: Started sshd@4-65.108.51.217:22-139.178.68.195:56252.service - OpenSSH per-connection server daemon (139.178.68.195:56252). May 16 00:11:56.922703 sshd[2055]: Accepted publickey for core from 139.178.68.195 port 56252 ssh2: RSA SHA256:rqQn8f4zWKKVvbvoRTiGoZH0AJsG/+I2ZavTEdSdQRw May 16 00:11:56.924689 sshd-session[2055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:11:56.932481 systemd-logind[1645]: New session 5 of user core. May 16 00:11:56.946838 systemd[1]: Started session-5.scope - Session 5 of User core. May 16 00:11:57.455830 sudo[2059]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 16 00:11:57.456335 sudo[2059]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 00:11:57.473850 sudo[2059]: pam_unix(sudo:session): session closed for user root May 16 00:11:57.632224 sshd[2058]: Connection closed by 139.178.68.195 port 56252 May 16 00:11:57.633372 sshd-session[2055]: pam_unix(sshd:session): session closed for user core May 16 00:11:57.638381 systemd[1]: sshd@4-65.108.51.217:22-139.178.68.195:56252.service: Deactivated successfully. May 16 00:11:57.643568 systemd-logind[1645]: Session 5 logged out. Waiting for processes to exit. May 16 00:11:57.643899 systemd[1]: session-5.scope: Deactivated successfully. May 16 00:11:57.646219 systemd-logind[1645]: Removed session 5. May 16 00:11:57.799441 systemd[1]: Started sshd@5-65.108.51.217:22-139.178.68.195:56268.service - OpenSSH per-connection server daemon (139.178.68.195:56268). May 16 00:11:58.794438 sshd[2064]: Accepted publickey for core from 139.178.68.195 port 56268 ssh2: RSA SHA256:rqQn8f4zWKKVvbvoRTiGoZH0AJsG/+I2ZavTEdSdQRw May 16 00:11:58.796627 sshd-session[2064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:11:58.804879 systemd-logind[1645]: New session 6 of user core. May 16 00:11:58.816039 systemd[1]: Started session-6.scope - Session 6 of User core. May 16 00:11:59.319036 sudo[2069]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 16 00:11:59.319562 sudo[2069]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 00:11:59.325222 sudo[2069]: pam_unix(sudo:session): session closed for user root May 16 00:11:59.336135 sudo[2068]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 16 00:11:59.336677 sudo[2068]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 00:11:59.358795 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 00:11:59.418462 augenrules[2091]: No rules May 16 00:11:59.419675 systemd[1]: audit-rules.service: Deactivated successfully. May 16 00:11:59.420048 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
May 16 00:11:59.423835 sudo[2068]: pam_unix(sudo:session): session closed for user root May 16 00:11:59.583603 sshd[2067]: Connection closed by 139.178.68.195 port 56268 May 16 00:11:59.585500 sshd-session[2064]: pam_unix(sshd:session): session closed for user core May 16 00:11:59.591494 systemd-logind[1645]: Session 6 logged out. Waiting for processes to exit. May 16 00:11:59.593130 systemd[1]: sshd@5-65.108.51.217:22-139.178.68.195:56268.service: Deactivated successfully. May 16 00:11:59.597711 systemd[1]: session-6.scope: Deactivated successfully. May 16 00:11:59.599250 systemd-logind[1645]: Removed session 6. May 16 00:11:59.749993 systemd[1]: Started sshd@6-65.108.51.217:22-139.178.68.195:56280.service - OpenSSH per-connection server daemon (139.178.68.195:56280). May 16 00:12:00.745222 sshd[2100]: Accepted publickey for core from 139.178.68.195 port 56280 ssh2: RSA SHA256:rqQn8f4zWKKVvbvoRTiGoZH0AJsG/+I2ZavTEdSdQRw May 16 00:12:00.747180 sshd-session[2100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:12:00.754693 systemd-logind[1645]: New session 7 of user core. May 16 00:12:00.765858 systemd[1]: Started session-7.scope - Session 7 of User core. May 16 00:12:01.266104 sudo[2104]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 16 00:12:01.266627 sudo[2104]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 00:12:02.099889 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:12:02.109028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:12:02.154016 systemd[1]: Reloading requested from client PID 2138 ('systemctl') (unit session-7.scope)... May 16 00:12:02.154184 systemd[1]: Reloading... May 16 00:12:02.254528 zram_generator::config[2177]: No configuration found. May 16 00:12:02.375323 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:12:02.449778 systemd[1]: Reloading finished in 295 ms. May 16 00:12:02.498441 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 16 00:12:02.498506 systemd[1]: kubelet.service: Failed with result 'signal'. May 16 00:12:02.498772 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:12:02.504938 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:12:02.602473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:12:02.615778 (kubelet)[2242]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 00:12:02.651853 kubelet[2242]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:12:02.651853 kubelet[2242]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 16 00:12:02.651853 kubelet[2242]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
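
The freshly restarted kubelet immediately warns that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir are deprecated and should live in the config file instead. Purely as an illustrative aside (nothing like this runs on the host), such warnings can be filtered out of a journal capture to see which flags still need migrating; the regex is keyed to the exact wording in the log:

```python
# Illustrative: list the deprecated kubelet flags named in captured journal text.
import re

FLAG_RE = re.compile(r"Flag (--[\w-]+) has been deprecated")

def deprecated_flags(journal_text: str) -> list[str]:
    """Return the distinct deprecated flags mentioned in the capture, sorted."""
    return sorted(set(FLAG_RE.findall(journal_text)))

sample = ("kubelet[2242]: Flag --container-runtime-endpoint has been deprecated, "
          "This parameter should be set via the config file ...")
print(deprecated_flags(sample))   # ['--container-runtime-endpoint']
```
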
May 16 00:12:02.651853 kubelet[2242]: I0516 00:12:02.649677 2242 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:12:02.858244 kubelet[2242]: I0516 00:12:02.858185 2242 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 16 00:12:02.858244 kubelet[2242]: I0516 00:12:02.858214 2242 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:12:02.858511 kubelet[2242]: I0516 00:12:02.858487 2242 server.go:934] "Client rotation is on, will bootstrap in background" May 16 00:12:02.888020 kubelet[2242]: I0516 00:12:02.887750 2242 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:12:02.894250 kubelet[2242]: E0516 00:12:02.894207 2242 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 16 00:12:02.894250 kubelet[2242]: I0516 00:12:02.894247 2242 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 16 00:12:02.902700 kubelet[2242]: I0516 00:12:02.901770 2242 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 16 00:12:02.902700 kubelet[2242]: I0516 00:12:02.902121 2242 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 16 00:12:02.902700 kubelet[2242]: I0516 00:12:02.902227 2242 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:12:02.902700 kubelet[2242]: I0516 00:12:02.902253 2242 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} May 16 00:12:02.903133 kubelet[2242]: I0516 00:12:02.902465 2242 topology_manager.go:138] "Creating 
topology manager with none policy" May 16 00:12:02.903133 kubelet[2242]: I0516 00:12:02.902473 2242 container_manager_linux.go:300] "Creating device plugin manager" May 16 00:12:02.903133 kubelet[2242]: I0516 00:12:02.902581 2242 state_mem.go:36] "Initialized new in-memory state store" May 16 00:12:02.905484 kubelet[2242]: I0516 00:12:02.905445 2242 kubelet.go:408] "Attempting to sync node with API server" May 16 00:12:02.905484 kubelet[2242]: I0516 00:12:02.905473 2242 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:12:02.905602 kubelet[2242]: I0516 00:12:02.905501 2242 kubelet.go:314] "Adding apiserver pod source" May 16 00:12:02.905602 kubelet[2242]: I0516 00:12:02.905522 2242 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:12:02.909857 kubelet[2242]: E0516 00:12:02.908838 2242 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:02.909857 kubelet[2242]: E0516 00:12:02.908878 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:02.910043 kubelet[2242]: I0516 00:12:02.909942 2242 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 16 00:12:02.910408 kubelet[2242]: I0516 00:12:02.910386 2242 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 00:12:02.910507 kubelet[2242]: W0516 00:12:02.910432 2242 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 16 00:12:02.912603 kubelet[2242]: I0516 00:12:02.912473 2242 server.go:1274] "Started kubelet" May 16 00:12:02.917645 kubelet[2242]: I0516 00:12:02.917632 2242 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:12:02.923085 kubelet[2242]: I0516 00:12:02.922987 2242 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:12:02.926711 kubelet[2242]: I0516 00:12:02.926664 2242 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:12:02.929366 kubelet[2242]: W0516 00:12:02.927053 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.4" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 16 00:12:02.929366 kubelet[2242]: E0516 00:12:02.927093 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.4\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" May 16 00:12:02.929366 kubelet[2242]: I0516 00:12:02.927450 2242 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:12:02.929366 kubelet[2242]: I0516 00:12:02.928418 2242 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:12:02.930215 kubelet[2242]: E0516 00:12:02.927277 2242 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.183fd97fb1f0a359 default 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2025-05-16 00:12:02.912453465 +0000 UTC m=+0.293678469,LastTimestamp:2025-05-16 00:12:02.912453465 +0000 UTC m=+0.293678469,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}" May 16 00:12:02.933353 kubelet[2242]: W0516 00:12:02.931760 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 16 00:12:02.935395 kubelet[2242]: E0516 00:12:02.935375 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" May 16 00:12:02.936254 kubelet[2242]: E0516 00:12:02.936198 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" May 16 00:12:02.937224 kubelet[2242]: I0516 00:12:02.937188 2242 server.go:449] "Adding debug handlers to kubelet server" May 16 00:12:02.938684 kubelet[2242]: I0516 00:12:02.934749 2242 volume_manager.go:289] "Starting Kubelet Volume Manager" May 16 00:12:02.938778 kubelet[2242]: I0516 00:12:02.934782 2242 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 16 00:12:02.938861 kubelet[2242]: I0516 00:12:02.938834 2242 reconciler.go:26] "Reconciler: start to sync state" May 16 00:12:02.942358 kubelet[2242]: E0516 00:12:02.941707 2242 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.183fd97fb240ef8c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2025-05-16 00:12:02.917715852 +0000 UTC m=+0.298940867,LastTimestamp:2025-05-16 00:12:02.917715852 +0000 UTC m=+0.298940867,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}" May 16 00:12:02.945619 kubelet[2242]: I0516 00:12:02.942976 2242 factory.go:221] Registration of the systemd container factory successfully May 16 00:12:02.945619 kubelet[2242]: I0516 00:12:02.943077 2242 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:12:02.945619 kubelet[2242]: E0516 00:12:02.943452 2242 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 00:12:02.948012 kubelet[2242]: I0516 00:12:02.946457 2242 factory.go:221] Registration of the containerd container factory successfully May 16 00:12:02.977073 kubelet[2242]: E0516 00:12:02.977047 2242 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.4\" not found" node="10.0.0.4" May 16 00:12:02.981712 kubelet[2242]: I0516 00:12:02.981278 2242 cpu_manager.go:214] "Starting CPU manager" policy="none" May 16 00:12:02.981712 kubelet[2242]: I0516 00:12:02.981308 2242 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 16 00:12:02.981712 kubelet[2242]: I0516 00:12:02.981327 2242 state_mem.go:36] "Initialized new in-memory state store" May 16 00:12:02.985559 kubelet[2242]: I0516 00:12:02.985370 2242 policy_none.go:49] "None policy: Start" May 16 00:12:02.987312 kubelet[2242]: I0516 00:12:02.987012 2242 memory_manager.go:170] "Starting memorymanager" policy="None" May 16 00:12:02.987312 kubelet[2242]: I0516 00:12:02.987034 2242 state_mem.go:35] "Initializing new in-memory state store" May 16 00:12:02.988846 kubelet[2242]: I0516 00:12:02.988766 2242 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 00:12:02.989583 kubelet[2242]: I0516 00:12:02.989563 2242 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 00:12:02.989583 kubelet[2242]: I0516 00:12:02.989583 2242 status_manager.go:217] "Starting to sync pod status with apiserver" May 16 00:12:02.989669 kubelet[2242]: I0516 00:12:02.989601 2242 kubelet.go:2321] "Starting kubelet main sync loop" May 16 00:12:02.989713 kubelet[2242]: E0516 00:12:02.989688 2242 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 00:12:02.998562 kubelet[2242]: I0516 00:12:02.997746 2242 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 00:12:02.998562 kubelet[2242]: I0516 00:12:02.997909 2242 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:12:02.998562 kubelet[2242]: I0516 00:12:02.997919 2242 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:12:02.999395 kubelet[2242]: I0516 00:12:02.999374 2242 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:12:03.000757 kubelet[2242]: E0516 00:12:03.000740 2242 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.4\" not found" May 16 00:12:03.100090 kubelet[2242]: I0516 00:12:03.100051 2242 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.4" May 16 00:12:03.112043 kubelet[2242]: I0516 00:12:03.111987 2242 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.4" May 16 00:12:03.112043 kubelet[2242]: E0516 00:12:03.112034 2242 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.4\": node \"10.0.0.4\" not found" May 16 00:12:03.124830 kubelet[2242]: E0516 00:12:03.124767 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" May 16 00:12:03.148654 sudo[2104]: pam_unix(sudo:session): session closed for user root May 16 00:12:03.225710 kubelet[2242]: E0516 00:12:03.225529 2242 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" May 16 00:12:03.306733 sshd[2103]: Connection closed by 139.178.68.195 port 56280 May 16 00:12:03.307774 sshd-session[2100]: pam_unix(sshd:session): session closed for user core May 16 00:12:03.315254 systemd[1]: sshd@6-65.108.51.217:22-139.178.68.195:56280.service: Deactivated successfully. May 16 00:12:03.316639 systemd-logind[1645]: Session 7 logged out. Waiting for processes to exit. May 16 00:12:03.322925 systemd[1]: session-7.scope: Deactivated successfully. May 16 00:12:03.325400 systemd-logind[1645]: Removed session 7. May 16 00:12:03.326092 kubelet[2242]: E0516 00:12:03.326040 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" May 16 00:12:03.426831 kubelet[2242]: E0516 00:12:03.426732 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" May 16 00:12:03.527549 kubelet[2242]: E0516 00:12:03.527474 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" May 16 00:12:03.628675 kubelet[2242]: E0516 00:12:03.628581 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" May 16 00:12:03.729674 kubelet[2242]: E0516 00:12:03.729581 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" May 16 00:12:03.830875 kubelet[2242]: E0516 00:12:03.830679 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" May 16 00:12:03.864637 kubelet[2242]: I0516 00:12:03.864549 2242 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 16 00:12:03.864856 kubelet[2242]: W0516 00:12:03.864815 2242 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 16 00:12:03.864925 kubelet[2242]: W0516 00:12:03.864864 2242 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 16 00:12:03.909601 kubelet[2242]: E0516 00:12:03.909486 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:03.930920 kubelet[2242]: E0516 00:12:03.930862 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" May 16 00:12:04.031955 kubelet[2242]: E0516 00:12:04.031891 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" May 16 00:12:04.133684 kubelet[2242]: I0516 00:12:04.133535 2242 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 16 00:12:04.134406 containerd[1661]: time="2025-05-16T00:12:04.134284120Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
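
The containerd message closing this block ("No cni config template is specified, wait for other system components to drop the config.") means the CRI plugin has no pod network configuration yet; it keeps waiting until calico's install-cni container, started further down, writes one. The sketch below only illustrates that waiting behaviour; /etc/cni/net.d is an assumption (containerd's conventional default), the log itself never names the directory.

```python
# Illustrative sketch of "wait for other system components to drop the config":
# poll the CNI config directory until a network config file appears.
import glob
import time

CNI_CONF_DIR = "/etc/cni/net.d"   # assumption, not taken from the log

def wait_for_cni_config(conf_dir: str = CNI_CONF_DIR, interval: float = 5.0) -> str:
    """Block until a *.conf/*.conflist file exists and return the first one in
    lexical order (the order CNI config loading uses)."""
    while True:
        confs = sorted(glob.glob(f"{conf_dir}/*.conflist") + glob.glob(f"{conf_dir}/*.conf"))
        if confs:
            return confs[0]
        time.sleep(interval)

if __name__ == "__main__":
    print("network config present:", wait_for_cni_config())
```
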
May 16 00:12:04.135052 kubelet[2242]: I0516 00:12:04.134769 2242 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 16 00:12:04.910510 kubelet[2242]: I0516 00:12:04.910423 2242 apiserver.go:52] "Watching apiserver" May 16 00:12:04.910510 kubelet[2242]: E0516 00:12:04.910448 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:04.917502 kubelet[2242]: E0516 00:12:04.917285 2242 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kvvxm" podUID="0fb2e616-5516-4e0a-a947-fe8eddf1e618" May 16 00:12:04.940331 kubelet[2242]: I0516 00:12:04.940268 2242 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 16 00:12:04.951089 kubelet[2242]: I0516 00:12:04.950928 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4489e766-665c-43de-9fa1-37b4866cf374-cni-net-dir\") pod \"calico-node-8rdzq\" (UID: \"4489e766-665c-43de-9fa1-37b4866cf374\") " pod="calico-system/calico-node-8rdzq" May 16 00:12:04.951089 kubelet[2242]: I0516 00:12:04.950974 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4489e766-665c-43de-9fa1-37b4866cf374-node-certs\") pod \"calico-node-8rdzq\" (UID: \"4489e766-665c-43de-9fa1-37b4866cf374\") " pod="calico-system/calico-node-8rdzq" May 16 00:12:04.951089 kubelet[2242]: I0516 00:12:04.951000 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4489e766-665c-43de-9fa1-37b4866cf374-tigera-ca-bundle\") pod \"calico-node-8rdzq\" (UID: \"4489e766-665c-43de-9fa1-37b4866cf374\") " pod="calico-system/calico-node-8rdzq" May 16 00:12:04.951089 kubelet[2242]: I0516 00:12:04.951021 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4489e766-665c-43de-9fa1-37b4866cf374-var-lib-calico\") pod \"calico-node-8rdzq\" (UID: \"4489e766-665c-43de-9fa1-37b4866cf374\") " pod="calico-system/calico-node-8rdzq" May 16 00:12:04.951089 kubelet[2242]: I0516 00:12:04.951041 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0fb2e616-5516-4e0a-a947-fe8eddf1e618-registration-dir\") pod \"csi-node-driver-kvvxm\" (UID: \"0fb2e616-5516-4e0a-a947-fe8eddf1e618\") " pod="calico-system/csi-node-driver-kvvxm" May 16 00:12:04.951330 kubelet[2242]: I0516 00:12:04.951062 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/647041ca-a937-429b-9f7f-e5069be7e1a1-xtables-lock\") pod \"kube-proxy-hmvft\" (UID: \"647041ca-a937-429b-9f7f-e5069be7e1a1\") " pod="kube-system/kube-proxy-hmvft" May 16 00:12:04.951330 kubelet[2242]: I0516 00:12:04.951083 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/4489e766-665c-43de-9fa1-37b4866cf374-cni-log-dir\") pod \"calico-node-8rdzq\" (UID: \"4489e766-665c-43de-9fa1-37b4866cf374\") " pod="calico-system/calico-node-8rdzq" May 16 00:12:04.951330 kubelet[2242]: I0516 00:12:04.951145 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4489e766-665c-43de-9fa1-37b4866cf374-lib-modules\") pod \"calico-node-8rdzq\" (UID: \"4489e766-665c-43de-9fa1-37b4866cf374\") " pod="calico-system/calico-node-8rdzq" May 16 00:12:04.951330 kubelet[2242]: I0516 00:12:04.951167 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4489e766-665c-43de-9fa1-37b4866cf374-policysync\") pod \"calico-node-8rdzq\" (UID: \"4489e766-665c-43de-9fa1-37b4866cf374\") " pod="calico-system/calico-node-8rdzq" May 16 00:12:04.951330 kubelet[2242]: I0516 00:12:04.951198 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4489e766-665c-43de-9fa1-37b4866cf374-var-run-calico\") pod \"calico-node-8rdzq\" (UID: \"4489e766-665c-43de-9fa1-37b4866cf374\") " pod="calico-system/calico-node-8rdzq" May 16 00:12:04.951434 kubelet[2242]: I0516 00:12:04.951223 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0fb2e616-5516-4e0a-a947-fe8eddf1e618-varrun\") pod \"csi-node-driver-kvvxm\" (UID: \"0fb2e616-5516-4e0a-a947-fe8eddf1e618\") " pod="calico-system/csi-node-driver-kvvxm" May 16 00:12:04.951434 kubelet[2242]: I0516 00:12:04.951244 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9th95\" (UniqueName: \"kubernetes.io/projected/647041ca-a937-429b-9f7f-e5069be7e1a1-kube-api-access-9th95\") pod \"kube-proxy-hmvft\" (UID: \"647041ca-a937-429b-9f7f-e5069be7e1a1\") " pod="kube-system/kube-proxy-hmvft" May 16 00:12:04.951434 kubelet[2242]: I0516 00:12:04.951283 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4489e766-665c-43de-9fa1-37b4866cf374-cni-bin-dir\") pod \"calico-node-8rdzq\" (UID: \"4489e766-665c-43de-9fa1-37b4866cf374\") " pod="calico-system/calico-node-8rdzq" May 16 00:12:04.951434 kubelet[2242]: I0516 00:12:04.951316 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fb2e616-5516-4e0a-a947-fe8eddf1e618-kubelet-dir\") pod \"csi-node-driver-kvvxm\" (UID: \"0fb2e616-5516-4e0a-a947-fe8eddf1e618\") " pod="calico-system/csi-node-driver-kvvxm" May 16 00:12:04.951434 kubelet[2242]: I0516 00:12:04.951341 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t4wg\" (UniqueName: \"kubernetes.io/projected/0fb2e616-5516-4e0a-a947-fe8eddf1e618-kube-api-access-4t4wg\") pod \"csi-node-driver-kvvxm\" (UID: \"0fb2e616-5516-4e0a-a947-fe8eddf1e618\") " pod="calico-system/csi-node-driver-kvvxm" May 16 00:12:04.951536 kubelet[2242]: I0516 00:12:04.951378 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/647041ca-a937-429b-9f7f-e5069be7e1a1-kube-proxy\") pod \"kube-proxy-hmvft\" (UID: \"647041ca-a937-429b-9f7f-e5069be7e1a1\") " pod="kube-system/kube-proxy-hmvft" May 16 00:12:04.951536 kubelet[2242]: I0516 00:12:04.951398 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/647041ca-a937-429b-9f7f-e5069be7e1a1-lib-modules\") pod \"kube-proxy-hmvft\" (UID: \"647041ca-a937-429b-9f7f-e5069be7e1a1\") " pod="kube-system/kube-proxy-hmvft" May 16 00:12:04.951536 kubelet[2242]: I0516 00:12:04.951419 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4489e766-665c-43de-9fa1-37b4866cf374-xtables-lock\") pod \"calico-node-8rdzq\" (UID: \"4489e766-665c-43de-9fa1-37b4866cf374\") " pod="calico-system/calico-node-8rdzq" May 16 00:12:04.951536 kubelet[2242]: I0516 00:12:04.951464 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfmjn\" (UniqueName: \"kubernetes.io/projected/4489e766-665c-43de-9fa1-37b4866cf374-kube-api-access-pfmjn\") pod \"calico-node-8rdzq\" (UID: \"4489e766-665c-43de-9fa1-37b4866cf374\") " pod="calico-system/calico-node-8rdzq" May 16 00:12:04.951536 kubelet[2242]: I0516 00:12:04.951483 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0fb2e616-5516-4e0a-a947-fe8eddf1e618-socket-dir\") pod \"csi-node-driver-kvvxm\" (UID: \"0fb2e616-5516-4e0a-a947-fe8eddf1e618\") " pod="calico-system/csi-node-driver-kvvxm" May 16 00:12:04.951654 kubelet[2242]: I0516 00:12:04.951513 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4489e766-665c-43de-9fa1-37b4866cf374-flexvol-driver-host\") pod \"calico-node-8rdzq\" (UID: \"4489e766-665c-43de-9fa1-37b4866cf374\") " pod="calico-system/calico-node-8rdzq" May 16 00:12:05.054964 kubelet[2242]: E0516 00:12:05.054936 2242 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:12:05.055197 kubelet[2242]: W0516 00:12:05.055008 2242 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:12:05.055197 kubelet[2242]: E0516 00:12:05.055034 2242 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:12:05.055535 kubelet[2242]: E0516 00:12:05.055460 2242 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:12:05.055535 kubelet[2242]: W0516 00:12:05.055470 2242 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:12:05.055535 kubelet[2242]: E0516 00:12:05.055484 2242 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:12:05.056805 kubelet[2242]: E0516 00:12:05.056772 2242 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:12:05.057381 kubelet[2242]: W0516 00:12:05.056886 2242 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:12:05.057381 kubelet[2242]: E0516 00:12:05.056909 2242 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:12:05.057693 kubelet[2242]: E0516 00:12:05.057598 2242 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:12:05.057693 kubelet[2242]: W0516 00:12:05.057621 2242 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:12:05.057693 kubelet[2242]: E0516 00:12:05.057630 2242 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:12:05.060453 kubelet[2242]: E0516 00:12:05.058798 2242 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:12:05.060453 kubelet[2242]: W0516 00:12:05.060377 2242 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:12:05.060453 kubelet[2242]: E0516 00:12:05.060405 2242 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:12:05.061064 kubelet[2242]: E0516 00:12:05.060850 2242 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:12:05.061064 kubelet[2242]: W0516 00:12:05.060878 2242 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:12:05.061064 kubelet[2242]: E0516 00:12:05.060962 2242 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:12:05.062556 kubelet[2242]: E0516 00:12:05.062418 2242 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:12:05.062556 kubelet[2242]: W0516 00:12:05.062436 2242 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:12:05.062867 kubelet[2242]: E0516 00:12:05.062744 2242 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:12:05.063200 kubelet[2242]: E0516 00:12:05.063142 2242 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:12:05.063200 kubelet[2242]: W0516 00:12:05.063154 2242 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:12:05.063429 kubelet[2242]: E0516 00:12:05.063287 2242 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:12:05.064578 kubelet[2242]: E0516 00:12:05.064416 2242 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:12:05.064578 kubelet[2242]: W0516 00:12:05.064433 2242 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:12:05.064578 kubelet[2242]: E0516 00:12:05.064452 2242 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:12:05.064801 kubelet[2242]: E0516 00:12:05.064791 2242 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:12:05.065309 kubelet[2242]: W0516 00:12:05.064858 2242 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:12:05.065309 kubelet[2242]: E0516 00:12:05.064873 2242 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:12:05.068539 kubelet[2242]: E0516 00:12:05.068512 2242 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:12:05.068678 kubelet[2242]: W0516 00:12:05.068662 2242 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:12:05.069594 kubelet[2242]: E0516 00:12:05.068733 2242 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:12:05.072487 kubelet[2242]: E0516 00:12:05.072458 2242 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:12:05.072487 kubelet[2242]: W0516 00:12:05.072483 2242 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:12:05.072639 kubelet[2242]: E0516 00:12:05.072506 2242 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:12:05.072815 kubelet[2242]: E0516 00:12:05.072799 2242 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:12:05.072815 kubelet[2242]: W0516 00:12:05.072813 2242 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:12:05.073488 kubelet[2242]: E0516 00:12:05.073471 2242 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:12:05.073488 kubelet[2242]: W0516 00:12:05.073487 2242 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:12:05.073624 kubelet[2242]: E0516 00:12:05.073586 2242 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:12:05.073660 kubelet[2242]: E0516 00:12:05.073630 2242 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:12:05.074566 kubelet[2242]: E0516 00:12:05.074545 2242 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:12:05.074566 kubelet[2242]: W0516 00:12:05.074561 2242 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:12:05.074636 kubelet[2242]: E0516 00:12:05.074597 2242 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:12:05.076128 kubelet[2242]: E0516 00:12:05.076070 2242 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:12:05.076128 kubelet[2242]: W0516 00:12:05.076084 2242 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:12:05.076128 kubelet[2242]: E0516 00:12:05.076099 2242 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:12:05.080704 kubelet[2242]: E0516 00:12:05.080597 2242 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:12:05.080704 kubelet[2242]: W0516 00:12:05.080624 2242 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:12:05.080704 kubelet[2242]: E0516 00:12:05.080650 2242 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:12:05.220257 containerd[1661]: time="2025-05-16T00:12:05.220081659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hmvft,Uid:647041ca-a937-429b-9f7f-e5069be7e1a1,Namespace:kube-system,Attempt:0,}" May 16 00:12:05.222653 containerd[1661]: time="2025-05-16T00:12:05.222067858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8rdzq,Uid:4489e766-665c-43de-9fa1-37b4866cf374,Namespace:calico-system,Attempt:0,}" May 16 00:12:05.781489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount811411626.mount: Deactivated successfully. May 16 00:12:05.792605 containerd[1661]: time="2025-05-16T00:12:05.792518882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:12:05.795221 containerd[1661]: time="2025-05-16T00:12:05.795134040Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:12:05.796672 containerd[1661]: time="2025-05-16T00:12:05.796583030Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" May 16 00:12:05.797963 containerd[1661]: time="2025-05-16T00:12:05.797890765Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 16 00:12:05.799258 containerd[1661]: time="2025-05-16T00:12:05.799174415Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:12:05.802150 containerd[1661]: time="2025-05-16T00:12:05.802046465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:12:05.804011 containerd[1661]: time="2025-05-16T00:12:05.803604219Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 583.383969ms" May 16 00:12:05.807170 containerd[1661]: time="2025-05-16T00:12:05.807006765Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 584.803544ms" May 16 00:12:05.912999 kubelet[2242]: E0516 00:12:05.912918 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:05.929975 containerd[1661]: time="2025-05-16T00:12:05.929857413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:12:05.929975 containerd[1661]: time="2025-05-16T00:12:05.929965225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:12:05.930163 containerd[1661]: time="2025-05-16T00:12:05.929993118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:12:05.930225 containerd[1661]: time="2025-05-16T00:12:05.930154942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:12:05.932760 containerd[1661]: time="2025-05-16T00:12:05.929538394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:12:05.933034 containerd[1661]: time="2025-05-16T00:12:05.932986707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:12:05.933941 containerd[1661]: time="2025-05-16T00:12:05.933880525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:12:05.934272 containerd[1661]: time="2025-05-16T00:12:05.934200415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:12:06.064400 containerd[1661]: time="2025-05-16T00:12:06.062176262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8rdzq,Uid:4489e766-665c-43de-9fa1-37b4866cf374,Namespace:calico-system,Attempt:0,} returns sandbox id \"a796221560ca0af15ab00617847484c1ffc9ec812fbc2a3b04525afa7ce195e7\"" May 16 00:12:06.067729 containerd[1661]: time="2025-05-16T00:12:06.067682687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 16 00:12:06.077330 containerd[1661]: time="2025-05-16T00:12:06.076914185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hmvft,Uid:647041ca-a937-429b-9f7f-e5069be7e1a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"331b68d0519c05ab1ec0b5294f589beaae7006480a9ed9cd104bf2acd6c7bdfb\"" May 16 00:12:06.913197 kubelet[2242]: E0516 00:12:06.913109 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:06.990814 kubelet[2242]: E0516 00:12:06.990292 2242 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kvvxm" podUID="0fb2e616-5516-4e0a-a947-fe8eddf1e618" May 16 00:12:07.913986 kubelet[2242]: E0516 00:12:07.913864 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:08.267941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2447486348.mount: Deactivated successfully. 
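
The containerd entries around this point report how long each image pull took ("in 583.383969ms", "in 2.300709416s", and so on for pause, pod2daemon-flexvol, kube-proxy and the calico CNI image). As a small illustration only, those figures can be tallied from a journal capture with a short parser; the regex follows the "Pulled image ... in <duration>" message format visible in this log:

```python
# Illustrative: summarise containerd "Pulled image ... in <duration>" messages
# from a journal capture. The ms/s suffixes match the Go duration formatting
# used in these log lines.
import re

PULL_RE = re.compile(r'Pulled image \\?"([^"\\]+)\\?".* in ([0-9.]+)(ms|s)"')

def pull_times(journal_text: str) -> dict[str, float]:
    """Map image reference -> pull time in seconds."""
    times: dict[str, float] = {}
    for image, value, unit in PULL_RE.findall(journal_text):
        times[image] = float(value) / 1000.0 if unit == "ms" else float(value)
    return times

sample = 'msg="Pulled image \\"registry.k8s.io/pause:3.8\\" with image id ... in 583.383969ms"'
print(pull_times(sample))   # pull time in seconds, keyed by image reference
```
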
May 16 00:12:08.362839 containerd[1661]: time="2025-05-16T00:12:08.362773186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:12:08.364172 containerd[1661]: time="2025-05-16T00:12:08.364129501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=5934460" May 16 00:12:08.365243 containerd[1661]: time="2025-05-16T00:12:08.365190783Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:12:08.367947 containerd[1661]: time="2025-05-16T00:12:08.367898235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:12:08.368558 containerd[1661]: time="2025-05-16T00:12:08.368523288Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 2.300709416s" May 16 00:12:08.368605 containerd[1661]: time="2025-05-16T00:12:08.368561680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 16 00:12:08.370388 containerd[1661]: time="2025-05-16T00:12:08.370331983Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 16 00:12:08.371340 containerd[1661]: time="2025-05-16T00:12:08.371281424Z" level=info msg="CreateContainer within sandbox \"a796221560ca0af15ab00617847484c1ffc9ec812fbc2a3b04525afa7ce195e7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 16 00:12:08.390555 containerd[1661]: time="2025-05-16T00:12:08.390501710Z" level=info msg="CreateContainer within sandbox \"a796221560ca0af15ab00617847484c1ffc9ec812fbc2a3b04525afa7ce195e7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"78ca836236b5c6f7d4e07ae7422fe68f46b8f3eab876e9a688bd838f27f3846b\"" May 16 00:12:08.391288 containerd[1661]: time="2025-05-16T00:12:08.391257037Z" level=info msg="StartContainer for \"78ca836236b5c6f7d4e07ae7422fe68f46b8f3eab876e9a688bd838f27f3846b\"" May 16 00:12:08.453141 containerd[1661]: time="2025-05-16T00:12:08.453087678Z" level=info msg="StartContainer for \"78ca836236b5c6f7d4e07ae7422fe68f46b8f3eab876e9a688bd838f27f3846b\" returns successfully" May 16 00:12:08.502653 containerd[1661]: time="2025-05-16T00:12:08.502582795Z" level=info msg="shim disconnected" id=78ca836236b5c6f7d4e07ae7422fe68f46b8f3eab876e9a688bd838f27f3846b namespace=k8s.io May 16 00:12:08.502653 containerd[1661]: time="2025-05-16T00:12:08.502644492Z" level=warning msg="cleaning up after shim disconnected" id=78ca836236b5c6f7d4e07ae7422fe68f46b8f3eab876e9a688bd838f27f3846b namespace=k8s.io May 16 00:12:08.502653 containerd[1661]: time="2025-05-16T00:12:08.502654781Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:12:08.914282 kubelet[2242]: E0516 00:12:08.914219 2242 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:08.991104 kubelet[2242]: E0516 00:12:08.991032 2242 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kvvxm" podUID="0fb2e616-5516-4e0a-a947-fe8eddf1e618" May 16 00:12:09.214163 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78ca836236b5c6f7d4e07ae7422fe68f46b8f3eab876e9a688bd838f27f3846b-rootfs.mount: Deactivated successfully. May 16 00:12:09.392987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3250881277.mount: Deactivated successfully. May 16 00:12:09.721989 containerd[1661]: time="2025-05-16T00:12:09.721930629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:12:09.723008 containerd[1661]: time="2025-05-16T00:12:09.722961935Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355651" May 16 00:12:09.724111 containerd[1661]: time="2025-05-16T00:12:09.724071327Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:12:09.725909 containerd[1661]: time="2025-05-16T00:12:09.725873779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:12:09.726573 containerd[1661]: time="2025-05-16T00:12:09.726427699Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 1.356046745s" May 16 00:12:09.726573 containerd[1661]: time="2025-05-16T00:12:09.726466102Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 16 00:12:09.727971 containerd[1661]: time="2025-05-16T00:12:09.727922976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 16 00:12:09.728855 containerd[1661]: time="2025-05-16T00:12:09.728826973Z" level=info msg="CreateContainer within sandbox \"331b68d0519c05ab1ec0b5294f589beaae7006480a9ed9cd104bf2acd6c7bdfb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 00:12:09.752512 containerd[1661]: time="2025-05-16T00:12:09.752443018Z" level=info msg="CreateContainer within sandbox \"331b68d0519c05ab1ec0b5294f589beaae7006480a9ed9cd104bf2acd6c7bdfb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"143f6a700139c1b5251504da8ee64746d9728098f62b763791fe587f62053e5f\"" May 16 00:12:09.753200 containerd[1661]: time="2025-05-16T00:12:09.753149874Z" level=info msg="StartContainer for \"143f6a700139c1b5251504da8ee64746d9728098f62b763791fe587f62053e5f\"" May 16 00:12:09.814880 containerd[1661]: time="2025-05-16T00:12:09.814713592Z" level=info msg="StartContainer for \"143f6a700139c1b5251504da8ee64746d9728098f62b763791fe587f62053e5f\" returns successfully" May 16 00:12:09.915300 kubelet[2242]: E0516 
00:12:09.915152 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:10.032472 kubelet[2242]: I0516 00:12:10.032307 2242 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hmvft" podStartSLOduration=3.383213145 podStartE2EDuration="7.032278479s" podCreationTimestamp="2025-05-16 00:12:03 +0000 UTC" firstStartedPulling="2025-05-16 00:12:06.078234512 +0000 UTC m=+3.459459507" lastFinishedPulling="2025-05-16 00:12:09.727299836 +0000 UTC m=+7.108524841" observedRunningTime="2025-05-16 00:12:10.032084213 +0000 UTC m=+7.413309248" watchObservedRunningTime="2025-05-16 00:12:10.032278479 +0000 UTC m=+7.413503514" May 16 00:12:10.916387 kubelet[2242]: E0516 00:12:10.916326 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:10.990476 kubelet[2242]: E0516 00:12:10.990031 2242 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kvvxm" podUID="0fb2e616-5516-4e0a-a947-fe8eddf1e618" May 16 00:12:11.916547 kubelet[2242]: E0516 00:12:11.916450 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:12.917686 kubelet[2242]: E0516 00:12:12.917576 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:12.990753 kubelet[2242]: E0516 00:12:12.990506 2242 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kvvxm" podUID="0fb2e616-5516-4e0a-a947-fe8eddf1e618" May 16 00:12:13.917987 kubelet[2242]: E0516 00:12:13.917907 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:13.923990 containerd[1661]: time="2025-05-16T00:12:13.923909126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:12:13.925435 containerd[1661]: time="2025-05-16T00:12:13.925378605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 16 00:12:13.926958 containerd[1661]: time="2025-05-16T00:12:13.926887135Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:12:13.930557 containerd[1661]: time="2025-05-16T00:12:13.930451936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:12:13.931368 containerd[1661]: time="2025-05-16T00:12:13.931143444Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 4.203196774s" May 16 00:12:13.931368 containerd[1661]: time="2025-05-16T00:12:13.931178159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 16 00:12:13.933832 containerd[1661]: time="2025-05-16T00:12:13.933779501Z" level=info msg="CreateContainer within sandbox \"a796221560ca0af15ab00617847484c1ffc9ec812fbc2a3b04525afa7ce195e7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 16 00:12:13.954169 containerd[1661]: time="2025-05-16T00:12:13.954075492Z" level=info msg="CreateContainer within sandbox \"a796221560ca0af15ab00617847484c1ffc9ec812fbc2a3b04525afa7ce195e7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6f7093ca91a9b3d0c5a099996e2ef4d0cbcd6361d37d6d7a1deb43db442a7151\"" May 16 00:12:13.954895 containerd[1661]: time="2025-05-16T00:12:13.954804210Z" level=info msg="StartContainer for \"6f7093ca91a9b3d0c5a099996e2ef4d0cbcd6361d37d6d7a1deb43db442a7151\"" May 16 00:12:14.015433 containerd[1661]: time="2025-05-16T00:12:14.015288348Z" level=info msg="StartContainer for \"6f7093ca91a9b3d0c5a099996e2ef4d0cbcd6361d37d6d7a1deb43db442a7151\" returns successfully" May 16 00:12:14.632130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f7093ca91a9b3d0c5a099996e2ef4d0cbcd6361d37d6d7a1deb43db442a7151-rootfs.mount: Deactivated successfully. May 16 00:12:14.645983 containerd[1661]: time="2025-05-16T00:12:14.645906722Z" level=info msg="shim disconnected" id=6f7093ca91a9b3d0c5a099996e2ef4d0cbcd6361d37d6d7a1deb43db442a7151 namespace=k8s.io May 16 00:12:14.645983 containerd[1661]: time="2025-05-16T00:12:14.645969119Z" level=warning msg="cleaning up after shim disconnected" id=6f7093ca91a9b3d0c5a099996e2ef4d0cbcd6361d37d6d7a1deb43db442a7151 namespace=k8s.io May 16 00:12:14.645983 containerd[1661]: time="2025-05-16T00:12:14.645976684Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:12:14.704441 kubelet[2242]: I0516 00:12:14.704387 2242 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 16 00:12:14.919291 kubelet[2242]: E0516 00:12:14.919080 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:14.997712 containerd[1661]: time="2025-05-16T00:12:14.997618351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvvxm,Uid:0fb2e616-5516-4e0a-a947-fe8eddf1e618,Namespace:calico-system,Attempt:0,}" May 16 00:12:15.037485 containerd[1661]: time="2025-05-16T00:12:15.037429764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 16 00:12:15.086511 containerd[1661]: time="2025-05-16T00:12:15.086439272Z" level=error msg="Failed to destroy network for sandbox \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:15.089964 containerd[1661]: time="2025-05-16T00:12:15.087133555Z" level=error msg="encountered an error cleaning up failed sandbox \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:15.089964 containerd[1661]: time="2025-05-16T00:12:15.087248551Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvvxm,Uid:0fb2e616-5516-4e0a-a947-fe8eddf1e618,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:15.090074 kubelet[2242]: E0516 00:12:15.089510 2242 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:15.090074 kubelet[2242]: E0516 00:12:15.089574 2242 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kvvxm" May 16 00:12:15.090074 kubelet[2242]: E0516 00:12:15.089592 2242 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kvvxm" May 16 00:12:15.089245 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab-shm.mount: Deactivated successfully. 
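Every RunPodSandbox attempt above fails on the same precondition: the Calico CNI plugin expects /var/lib/calico/nodename, a file the calico/node container writes once it is running, and at this point only the install-cni step has completed. A minimal Python sketch of that precondition check, as an illustration only (not Calico's actual code):

```python
# Illustration only (not Calico's code): sandbox setup above fails because
# /var/lib/calico/nodename does not exist yet. calico/node writes this file when
# it starts, so the CNI plugin cannot configure pod networking until then.
from pathlib import Path

NODENAME_FILE = Path("/var/lib/calico/nodename")

def calico_node_ready() -> bool:
    """True once calico/node has written its node name under /var/lib/calico/."""
    return NODENAME_FILE.is_file() and NODENAME_FILE.read_text().strip() != ""

if not calico_node_ready():
    # Mirrors the error string seen in the containerd/kubelet messages above.
    raise FileNotFoundError(
        "stat /var/lib/calico/nodename: no such file or directory: "
        "check that the calico/node container is running and has mounted /var/lib/calico/"
    )
```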
May 16 00:12:15.090281 kubelet[2242]: E0516 00:12:15.089631 2242 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kvvxm_calico-system(0fb2e616-5516-4e0a-a947-fe8eddf1e618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kvvxm_calico-system(0fb2e616-5516-4e0a-a947-fe8eddf1e618)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kvvxm" podUID="0fb2e616-5516-4e0a-a947-fe8eddf1e618" May 16 00:12:15.919375 kubelet[2242]: E0516 00:12:15.919243 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:16.037984 kubelet[2242]: I0516 00:12:16.037936 2242 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab" May 16 00:12:16.039012 containerd[1661]: time="2025-05-16T00:12:16.038964615Z" level=info msg="StopPodSandbox for \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\"" May 16 00:12:16.039501 containerd[1661]: time="2025-05-16T00:12:16.039234913Z" level=info msg="Ensure that sandbox f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab in task-service has been cleanup successfully" May 16 00:12:16.041463 containerd[1661]: time="2025-05-16T00:12:16.041420284Z" level=info msg="TearDown network for sandbox \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\" successfully" May 16 00:12:16.041463 containerd[1661]: time="2025-05-16T00:12:16.041450932Z" level=info msg="StopPodSandbox for \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\" returns successfully" May 16 00:12:16.042573 containerd[1661]: time="2025-05-16T00:12:16.042169120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvvxm,Uid:0fb2e616-5516-4e0a-a947-fe8eddf1e618,Namespace:calico-system,Attempt:1,}" May 16 00:12:16.042584 systemd[1]: run-netns-cni\x2dad9fbde5\x2d2efd\x2d430f\x2d4370\x2d5698ff74823b.mount: Deactivated successfully. 
May 16 00:12:16.142899 containerd[1661]: time="2025-05-16T00:12:16.142811793Z" level=error msg="Failed to destroy network for sandbox \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:16.145706 containerd[1661]: time="2025-05-16T00:12:16.145607479Z" level=error msg="encountered an error cleaning up failed sandbox \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:16.145837 containerd[1661]: time="2025-05-16T00:12:16.145771638Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvvxm,Uid:0fb2e616-5516-4e0a-a947-fe8eddf1e618,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:16.146168 kubelet[2242]: E0516 00:12:16.146125 2242 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:16.146958 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e-shm.mount: Deactivated successfully. 
May 16 00:12:16.147191 kubelet[2242]: E0516 00:12:16.146967 2242 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kvvxm" May 16 00:12:16.147191 kubelet[2242]: E0516 00:12:16.147017 2242 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kvvxm" May 16 00:12:16.147191 kubelet[2242]: E0516 00:12:16.147082 2242 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kvvxm_calico-system(0fb2e616-5516-4e0a-a947-fe8eddf1e618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kvvxm_calico-system(0fb2e616-5516-4e0a-a947-fe8eddf1e618)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kvvxm" podUID="0fb2e616-5516-4e0a-a947-fe8eddf1e618" May 16 00:12:16.920488 kubelet[2242]: E0516 00:12:16.920405 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:17.043104 kubelet[2242]: I0516 00:12:17.042968 2242 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e" May 16 00:12:17.044237 containerd[1661]: time="2025-05-16T00:12:17.044170031Z" level=info msg="StopPodSandbox for \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\"" May 16 00:12:17.044836 containerd[1661]: time="2025-05-16T00:12:17.044533645Z" level=info msg="Ensure that sandbox b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e in task-service has been cleanup successfully" May 16 00:12:17.047387 containerd[1661]: time="2025-05-16T00:12:17.044923015Z" level=info msg="TearDown network for sandbox \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\" successfully" May 16 00:12:17.047387 containerd[1661]: time="2025-05-16T00:12:17.044971947Z" level=info msg="StopPodSandbox for \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\" returns successfully" May 16 00:12:17.047387 containerd[1661]: time="2025-05-16T00:12:17.046214239Z" level=info msg="StopPodSandbox for \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\"" May 16 00:12:17.048443 containerd[1661]: time="2025-05-16T00:12:17.047732597Z" level=info msg="TearDown network for sandbox \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\" successfully" May 16 00:12:17.048443 containerd[1661]: time="2025-05-16T00:12:17.047763435Z" level=info msg="StopPodSandbox for 
\"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\" returns successfully" May 16 00:12:17.050166 containerd[1661]: time="2025-05-16T00:12:17.050126310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvvxm,Uid:0fb2e616-5516-4e0a-a947-fe8eddf1e618,Namespace:calico-system,Attempt:2,}" May 16 00:12:17.050704 systemd[1]: run-netns-cni\x2dcd3dcf5c\x2dab26\x2d9d4d\x2d3dc2\x2dcf54fe4689c7.mount: Deactivated successfully. May 16 00:12:17.145501 containerd[1661]: time="2025-05-16T00:12:17.145424671Z" level=error msg="Failed to destroy network for sandbox \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:17.147490 containerd[1661]: time="2025-05-16T00:12:17.147234167Z" level=error msg="encountered an error cleaning up failed sandbox \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:17.147490 containerd[1661]: time="2025-05-16T00:12:17.147438981Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvvxm,Uid:0fb2e616-5516-4e0a-a947-fe8eddf1e618,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:17.148134 kubelet[2242]: E0516 00:12:17.147734 2242 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:17.148134 kubelet[2242]: E0516 00:12:17.147800 2242 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kvvxm" May 16 00:12:17.148134 kubelet[2242]: E0516 00:12:17.147828 2242 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kvvxm" May 16 00:12:17.148377 kubelet[2242]: E0516 00:12:17.147881 2242 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kvvxm_calico-system(0fb2e616-5516-4e0a-a947-fe8eddf1e618)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"csi-node-driver-kvvxm_calico-system(0fb2e616-5516-4e0a-a947-fe8eddf1e618)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kvvxm" podUID="0fb2e616-5516-4e0a-a947-fe8eddf1e618" May 16 00:12:17.149292 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11-shm.mount: Deactivated successfully. May 16 00:12:17.921188 kubelet[2242]: E0516 00:12:17.921078 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:18.047786 kubelet[2242]: I0516 00:12:18.047720 2242 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11" May 16 00:12:18.048682 containerd[1661]: time="2025-05-16T00:12:18.048639810Z" level=info msg="StopPodSandbox for \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\"" May 16 00:12:18.049700 containerd[1661]: time="2025-05-16T00:12:18.049637413Z" level=info msg="Ensure that sandbox 5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11 in task-service has been cleanup successfully" May 16 00:12:18.051497 containerd[1661]: time="2025-05-16T00:12:18.051456948Z" level=info msg="TearDown network for sandbox \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\" successfully" May 16 00:12:18.051497 containerd[1661]: time="2025-05-16T00:12:18.051494278Z" level=info msg="StopPodSandbox for \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\" returns successfully" May 16 00:12:18.052117 containerd[1661]: time="2025-05-16T00:12:18.051881434Z" level=info msg="StopPodSandbox for \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\"" May 16 00:12:18.052117 containerd[1661]: time="2025-05-16T00:12:18.051999155Z" level=info msg="TearDown network for sandbox \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\" successfully" May 16 00:12:18.052117 containerd[1661]: time="2025-05-16T00:12:18.052027689Z" level=info msg="StopPodSandbox for \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\" returns successfully" May 16 00:12:18.053892 systemd[1]: run-netns-cni\x2df33749c3\x2dcc4c\x2d7bdd\x2d7131\x2d3ba84d41b4a8.mount: Deactivated successfully. 
May 16 00:12:18.054887 containerd[1661]: time="2025-05-16T00:12:18.054470624Z" level=info msg="StopPodSandbox for \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\"" May 16 00:12:18.054887 containerd[1661]: time="2025-05-16T00:12:18.054623280Z" level=info msg="TearDown network for sandbox \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\" successfully" May 16 00:12:18.054887 containerd[1661]: time="2025-05-16T00:12:18.054643478Z" level=info msg="StopPodSandbox for \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\" returns successfully" May 16 00:12:18.057638 containerd[1661]: time="2025-05-16T00:12:18.056328681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvvxm,Uid:0fb2e616-5516-4e0a-a947-fe8eddf1e618,Namespace:calico-system,Attempt:3,}" May 16 00:12:18.146134 containerd[1661]: time="2025-05-16T00:12:18.146051858Z" level=error msg="Failed to destroy network for sandbox \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:18.148510 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7-shm.mount: Deactivated successfully. May 16 00:12:18.148768 containerd[1661]: time="2025-05-16T00:12:18.148583358Z" level=error msg="encountered an error cleaning up failed sandbox \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:18.148768 containerd[1661]: time="2025-05-16T00:12:18.148668157Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvvxm,Uid:0fb2e616-5516-4e0a-a947-fe8eddf1e618,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:18.149397 kubelet[2242]: E0516 00:12:18.148904 2242 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:18.149397 kubelet[2242]: E0516 00:12:18.148959 2242 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kvvxm" May 16 00:12:18.149397 kubelet[2242]: E0516 00:12:18.148983 2242 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kvvxm" May 16 00:12:18.149536 kubelet[2242]: E0516 00:12:18.149024 2242 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kvvxm_calico-system(0fb2e616-5516-4e0a-a947-fe8eddf1e618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kvvxm_calico-system(0fb2e616-5516-4e0a-a947-fe8eddf1e618)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kvvxm" podUID="0fb2e616-5516-4e0a-a947-fe8eddf1e618" May 16 00:12:18.921378 kubelet[2242]: E0516 00:12:18.921263 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:19.051834 kubelet[2242]: I0516 00:12:19.051800 2242 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7" May 16 00:12:19.053618 containerd[1661]: time="2025-05-16T00:12:19.053114254Z" level=info msg="StopPodSandbox for \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\"" May 16 00:12:19.053618 containerd[1661]: time="2025-05-16T00:12:19.053447960Z" level=info msg="Ensure that sandbox dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7 in task-service has been cleanup successfully" May 16 00:12:19.056424 systemd[1]: run-netns-cni\x2d08e855bc\x2d2528\x2d3efe\x2d1e21\x2d4e3c96c8131b.mount: Deactivated successfully. 
May 16 00:12:19.057484 containerd[1661]: time="2025-05-16T00:12:19.057311560Z" level=info msg="TearDown network for sandbox \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\" successfully" May 16 00:12:19.057484 containerd[1661]: time="2025-05-16T00:12:19.057386140Z" level=info msg="StopPodSandbox for \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\" returns successfully" May 16 00:12:19.058445 containerd[1661]: time="2025-05-16T00:12:19.058408589Z" level=info msg="StopPodSandbox for \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\"" May 16 00:12:19.058531 containerd[1661]: time="2025-05-16T00:12:19.058505972Z" level=info msg="TearDown network for sandbox \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\" successfully" May 16 00:12:19.058531 containerd[1661]: time="2025-05-16T00:12:19.058520600Z" level=info msg="StopPodSandbox for \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\" returns successfully" May 16 00:12:19.059470 containerd[1661]: time="2025-05-16T00:12:19.059428794Z" level=info msg="StopPodSandbox for \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\"" May 16 00:12:19.059519 containerd[1661]: time="2025-05-16T00:12:19.059491832Z" level=info msg="TearDown network for sandbox \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\" successfully" May 16 00:12:19.059519 containerd[1661]: time="2025-05-16T00:12:19.059500719Z" level=info msg="StopPodSandbox for \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\" returns successfully" May 16 00:12:19.059951 containerd[1661]: time="2025-05-16T00:12:19.059811783Z" level=info msg="StopPodSandbox for \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\"" May 16 00:12:19.059951 containerd[1661]: time="2025-05-16T00:12:19.059910578Z" level=info msg="TearDown network for sandbox \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\" successfully" May 16 00:12:19.059951 containerd[1661]: time="2025-05-16T00:12:19.059943179Z" level=info msg="StopPodSandbox for \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\" returns successfully" May 16 00:12:19.060862 containerd[1661]: time="2025-05-16T00:12:19.060748911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvvxm,Uid:0fb2e616-5516-4e0a-a947-fe8eddf1e618,Namespace:calico-system,Attempt:4,}" May 16 00:12:19.137651 containerd[1661]: time="2025-05-16T00:12:19.137554123Z" level=error msg="Failed to destroy network for sandbox \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:19.138278 containerd[1661]: time="2025-05-16T00:12:19.137914500Z" level=error msg="encountered an error cleaning up failed sandbox \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:19.138278 containerd[1661]: time="2025-05-16T00:12:19.137966377Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvvxm,Uid:0fb2e616-5516-4e0a-a947-fe8eddf1e618,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup 
network for sandbox \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:19.140422 kubelet[2242]: E0516 00:12:19.138328 2242 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:19.140422 kubelet[2242]: E0516 00:12:19.138404 2242 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kvvxm" May 16 00:12:19.140422 kubelet[2242]: E0516 00:12:19.138426 2242 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kvvxm" May 16 00:12:19.140141 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490-shm.mount: Deactivated successfully. 
May 16 00:12:19.140663 kubelet[2242]: E0516 00:12:19.138475 2242 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kvvxm_calico-system(0fb2e616-5516-4e0a-a947-fe8eddf1e618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kvvxm_calico-system(0fb2e616-5516-4e0a-a947-fe8eddf1e618)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kvvxm" podUID="0fb2e616-5516-4e0a-a947-fe8eddf1e618" May 16 00:12:19.921803 kubelet[2242]: E0516 00:12:19.921726 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:20.059316 kubelet[2242]: I0516 00:12:20.058779 2242 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490" May 16 00:12:20.060519 containerd[1661]: time="2025-05-16T00:12:20.060273378Z" level=info msg="StopPodSandbox for \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\"" May 16 00:12:20.061284 containerd[1661]: time="2025-05-16T00:12:20.060736407Z" level=info msg="Ensure that sandbox 66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490 in task-service has been cleanup successfully" May 16 00:12:20.064449 containerd[1661]: time="2025-05-16T00:12:20.063553754Z" level=info msg="TearDown network for sandbox \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\" successfully" May 16 00:12:20.064449 containerd[1661]: time="2025-05-16T00:12:20.063606814Z" level=info msg="StopPodSandbox for \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\" returns successfully" May 16 00:12:20.064449 containerd[1661]: time="2025-05-16T00:12:20.064250573Z" level=info msg="StopPodSandbox for \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\"" May 16 00:12:20.064718 containerd[1661]: time="2025-05-16T00:12:20.064472218Z" level=info msg="TearDown network for sandbox \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\" successfully" May 16 00:12:20.064718 containerd[1661]: time="2025-05-16T00:12:20.064500671Z" level=info msg="StopPodSandbox for \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\" returns successfully" May 16 00:12:20.067705 containerd[1661]: time="2025-05-16T00:12:20.065074409Z" level=info msg="StopPodSandbox for \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\"" May 16 00:12:20.067705 containerd[1661]: time="2025-05-16T00:12:20.065235000Z" level=info msg="TearDown network for sandbox \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\" successfully" May 16 00:12:20.067705 containerd[1661]: time="2025-05-16T00:12:20.065260017Z" level=info msg="StopPodSandbox for \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\" returns successfully" May 16 00:12:20.068616 systemd[1]: run-netns-cni\x2d62c3d99f\x2d1885\x2d54bf\x2dc226\x2d05f6e522afac.mount: Deactivated successfully. 
May 16 00:12:20.069988 containerd[1661]: time="2025-05-16T00:12:20.068897734Z" level=info msg="StopPodSandbox for \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\"" May 16 00:12:20.072042 containerd[1661]: time="2025-05-16T00:12:20.070505361Z" level=info msg="TearDown network for sandbox \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\" successfully" May 16 00:12:20.072042 containerd[1661]: time="2025-05-16T00:12:20.070603836Z" level=info msg="StopPodSandbox for \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\" returns successfully" May 16 00:12:20.072042 containerd[1661]: time="2025-05-16T00:12:20.071689854Z" level=info msg="StopPodSandbox for \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\"" May 16 00:12:20.072402 containerd[1661]: time="2025-05-16T00:12:20.071979056Z" level=info msg="TearDown network for sandbox \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\" successfully" May 16 00:12:20.072402 containerd[1661]: time="2025-05-16T00:12:20.072145399Z" level=info msg="StopPodSandbox for \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\" returns successfully" May 16 00:12:20.074440 containerd[1661]: time="2025-05-16T00:12:20.073868251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvvxm,Uid:0fb2e616-5516-4e0a-a947-fe8eddf1e618,Namespace:calico-system,Attempt:5,}" May 16 00:12:20.182046 containerd[1661]: time="2025-05-16T00:12:20.181885198Z" level=error msg="Failed to destroy network for sandbox \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:20.183435 containerd[1661]: time="2025-05-16T00:12:20.182281652Z" level=error msg="encountered an error cleaning up failed sandbox \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:20.183435 containerd[1661]: time="2025-05-16T00:12:20.182383554Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvvxm,Uid:0fb2e616-5516-4e0a-a947-fe8eddf1e618,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:20.185486 kubelet[2242]: E0516 00:12:20.184013 2242 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:20.185657 kubelet[2242]: E0516 00:12:20.185493 2242 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kvvxm" May 16 00:12:20.185697 kubelet[2242]: E0516 00:12:20.185659 2242 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kvvxm" May 16 00:12:20.186051 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e-shm.mount: Deactivated successfully. May 16 00:12:20.186481 kubelet[2242]: E0516 00:12:20.186040 2242 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kvvxm_calico-system(0fb2e616-5516-4e0a-a947-fe8eddf1e618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kvvxm_calico-system(0fb2e616-5516-4e0a-a947-fe8eddf1e618)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kvvxm" podUID="0fb2e616-5516-4e0a-a947-fe8eddf1e618" May 16 00:12:20.922568 kubelet[2242]: E0516 00:12:20.922495 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:21.066402 kubelet[2242]: I0516 00:12:21.066090 2242 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e" May 16 00:12:21.067185 containerd[1661]: time="2025-05-16T00:12:21.067052586Z" level=info msg="StopPodSandbox for \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\"" May 16 00:12:21.068056 containerd[1661]: time="2025-05-16T00:12:21.067868067Z" level=info msg="Ensure that sandbox 797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e in task-service has been cleanup successfully" May 16 00:12:21.070469 containerd[1661]: time="2025-05-16T00:12:21.068387370Z" level=info msg="TearDown network for sandbox \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\" successfully" May 16 00:12:21.070469 containerd[1661]: time="2025-05-16T00:12:21.068407087Z" level=info msg="StopPodSandbox for \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\" returns successfully" May 16 00:12:21.071645 systemd[1]: run-netns-cni\x2d31994fbd\x2de748\x2da893\x2d20e9\x2ddb6b44bf2f4b.mount: Deactivated successfully. 
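The systemd mount units in these lines (run-netns-cni\x2d....mount, run-containerd-...-shm.mount) use systemd's unit-name escaping, where "/" in a path becomes "-" and other characters such as "-" become hex escapes like \x2d. A small helper to recover the underlying path from such a unit name; `systemd-escape --unescape --path` does the same thing properly:

```python
# Illustrative helper: recover the original mount path from systemd unit names
# like the run-netns-cni\x2d...mount and run-containerd-...-shm.mount lines above.
import re

def unescape_unit(unit: str) -> str:
    name = unit.removesuffix(".mount")
    path = "/" + name.replace("-", "/")   # systemd maps "/" to "-" in unit names
    # "\x2d" and similar sequences are hex escapes for characters such as "-"
    return re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), path)

print(unescape_unit(r"run-netns-cni\x2d31994fbd\x2de748\x2da893\x2d20e9\x2ddb6b44bf2f4b.mount"))
# -> /run/netns/cni-31994fbd-e748-a893-20e9-db6b44bf2f4b
```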
May 16 00:12:21.072219 containerd[1661]: time="2025-05-16T00:12:21.072170289Z" level=info msg="StopPodSandbox for \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\"" May 16 00:12:21.073715 containerd[1661]: time="2025-05-16T00:12:21.072951747Z" level=info msg="TearDown network for sandbox \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\" successfully" May 16 00:12:21.073715 containerd[1661]: time="2025-05-16T00:12:21.072980310Z" level=info msg="StopPodSandbox for \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\" returns successfully" May 16 00:12:21.074972 containerd[1661]: time="2025-05-16T00:12:21.074923367Z" level=info msg="StopPodSandbox for \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\"" May 16 00:12:21.075092 containerd[1661]: time="2025-05-16T00:12:21.075056667Z" level=info msg="TearDown network for sandbox \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\" successfully" May 16 00:12:21.075092 containerd[1661]: time="2025-05-16T00:12:21.075083698Z" level=info msg="StopPodSandbox for \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\" returns successfully" May 16 00:12:21.075730 containerd[1661]: time="2025-05-16T00:12:21.075692800Z" level=info msg="StopPodSandbox for \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\"" May 16 00:12:21.076394 containerd[1661]: time="2025-05-16T00:12:21.076201586Z" level=info msg="TearDown network for sandbox \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\" successfully" May 16 00:12:21.076394 containerd[1661]: time="2025-05-16T00:12:21.076221624Z" level=info msg="StopPodSandbox for \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\" returns successfully" May 16 00:12:21.077038 containerd[1661]: time="2025-05-16T00:12:21.076865943Z" level=info msg="StopPodSandbox for \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\"" May 16 00:12:21.077217 containerd[1661]: time="2025-05-16T00:12:21.077098599Z" level=info msg="TearDown network for sandbox \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\" successfully" May 16 00:12:21.077217 containerd[1661]: time="2025-05-16T00:12:21.077117094Z" level=info msg="StopPodSandbox for \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\" returns successfully" May 16 00:12:21.077618 containerd[1661]: time="2025-05-16T00:12:21.077513738Z" level=info msg="StopPodSandbox for \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\"" May 16 00:12:21.077688 containerd[1661]: time="2025-05-16T00:12:21.077636588Z" level=info msg="TearDown network for sandbox \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\" successfully" May 16 00:12:21.077688 containerd[1661]: time="2025-05-16T00:12:21.077651386Z" level=info msg="StopPodSandbox for \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\" returns successfully" May 16 00:12:21.078527 containerd[1661]: time="2025-05-16T00:12:21.078310804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvvxm,Uid:0fb2e616-5516-4e0a-a947-fe8eddf1e618,Namespace:calico-system,Attempt:6,}" May 16 00:12:21.153659 containerd[1661]: time="2025-05-16T00:12:21.153575573Z" level=error msg="Failed to destroy network for sandbox \"f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" May 16 00:12:21.157051 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce-shm.mount: Deactivated successfully. May 16 00:12:21.157773 containerd[1661]: time="2025-05-16T00:12:21.157737112Z" level=error msg="encountered an error cleaning up failed sandbox \"f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:21.157854 containerd[1661]: time="2025-05-16T00:12:21.157819086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvvxm,Uid:0fb2e616-5516-4e0a-a947-fe8eddf1e618,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:21.159287 kubelet[2242]: E0516 00:12:21.159249 2242 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:21.159403 kubelet[2242]: E0516 00:12:21.159317 2242 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kvvxm" May 16 00:12:21.159403 kubelet[2242]: E0516 00:12:21.159370 2242 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kvvxm" May 16 00:12:21.159486 kubelet[2242]: E0516 00:12:21.159435 2242 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kvvxm_calico-system(0fb2e616-5516-4e0a-a947-fe8eddf1e618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kvvxm_calico-system(0fb2e616-5516-4e0a-a947-fe8eddf1e618)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kvvxm" podUID="0fb2e616-5516-4e0a-a947-fe8eddf1e618" May 16 00:12:21.464011 kubelet[2242]: I0516 00:12:21.463968 2242 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk9sm\" (UniqueName: \"kubernetes.io/projected/f4d71287-81a8-4c56-92d4-5d01dc562d29-kube-api-access-bk9sm\") pod \"nginx-deployment-8587fbcb89-ghpwx\" (UID: \"f4d71287-81a8-4c56-92d4-5d01dc562d29\") " pod="default/nginx-deployment-8587fbcb89-ghpwx" May 16 00:12:21.683231 containerd[1661]: time="2025-05-16T00:12:21.682908328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ghpwx,Uid:f4d71287-81a8-4c56-92d4-5d01dc562d29,Namespace:default,Attempt:0,}" May 16 00:12:21.750455 containerd[1661]: time="2025-05-16T00:12:21.750301915Z" level=error msg="Failed to destroy network for sandbox \"5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:21.751237 containerd[1661]: time="2025-05-16T00:12:21.751212744Z" level=error msg="encountered an error cleaning up failed sandbox \"5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:21.751284 containerd[1661]: time="2025-05-16T00:12:21.751267927Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ghpwx,Uid:f4d71287-81a8-4c56-92d4-5d01dc562d29,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:21.751903 kubelet[2242]: E0516 00:12:21.751503 2242 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:21.751903 kubelet[2242]: E0516 00:12:21.751564 2242 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-ghpwx" May 16 00:12:21.751903 kubelet[2242]: E0516 00:12:21.751581 2242 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-ghpwx" May 16 00:12:21.751995 kubelet[2242]: E0516 00:12:21.751623 2242 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"nginx-deployment-8587fbcb89-ghpwx_default(f4d71287-81a8-4c56-92d4-5d01dc562d29)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-ghpwx_default(f4d71287-81a8-4c56-92d4-5d01dc562d29)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-ghpwx" podUID="f4d71287-81a8-4c56-92d4-5d01dc562d29" May 16 00:12:21.923672 kubelet[2242]: E0516 00:12:21.923571 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:22.071067 kubelet[2242]: I0516 00:12:22.070597 2242 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5" May 16 00:12:22.071810 containerd[1661]: time="2025-05-16T00:12:22.071515524Z" level=info msg="StopPodSandbox for \"5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5\"" May 16 00:12:22.071810 containerd[1661]: time="2025-05-16T00:12:22.071713805Z" level=info msg="Ensure that sandbox 5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5 in task-service has been cleanup successfully" May 16 00:12:22.072539 containerd[1661]: time="2025-05-16T00:12:22.072423357Z" level=info msg="TearDown network for sandbox \"5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5\" successfully" May 16 00:12:22.072539 containerd[1661]: time="2025-05-16T00:12:22.072440910Z" level=info msg="StopPodSandbox for \"5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5\" returns successfully" May 16 00:12:22.075568 systemd[1]: run-netns-cni\x2d68ef4e50\x2d1a89\x2deb5e\x2d2b76\x2d93c9342b7ca7.mount: Deactivated successfully. May 16 00:12:22.077659 containerd[1661]: time="2025-05-16T00:12:22.077550498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ghpwx,Uid:f4d71287-81a8-4c56-92d4-5d01dc562d29,Namespace:default,Attempt:1,}" May 16 00:12:22.079086 kubelet[2242]: I0516 00:12:22.079060 2242 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce" May 16 00:12:22.080400 containerd[1661]: time="2025-05-16T00:12:22.079446376Z" level=info msg="StopPodSandbox for \"f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce\"" May 16 00:12:22.080400 containerd[1661]: time="2025-05-16T00:12:22.079613000Z" level=info msg="Ensure that sandbox f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce in task-service has been cleanup successfully" May 16 00:12:22.082385 containerd[1661]: time="2025-05-16T00:12:22.080502699Z" level=info msg="TearDown network for sandbox \"f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce\" successfully" May 16 00:12:22.082385 containerd[1661]: time="2025-05-16T00:12:22.080518208Z" level=info msg="StopPodSandbox for \"f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce\" returns successfully" May 16 00:12:22.082219 systemd[1]: run-netns-cni\x2d6e2ca062\x2dbd21\x2d8ea0\x2dfb85\x2d8cda0ade3506.mount: Deactivated successfully. 
May 16 00:12:22.083720 containerd[1661]: time="2025-05-16T00:12:22.082750858Z" level=info msg="StopPodSandbox for \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\"" May 16 00:12:22.083720 containerd[1661]: time="2025-05-16T00:12:22.082834045Z" level=info msg="TearDown network for sandbox \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\" successfully" May 16 00:12:22.083720 containerd[1661]: time="2025-05-16T00:12:22.082842991Z" level=info msg="StopPodSandbox for \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\" returns successfully" May 16 00:12:22.083992 containerd[1661]: time="2025-05-16T00:12:22.083976789Z" level=info msg="StopPodSandbox for \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\"" May 16 00:12:22.085049 containerd[1661]: time="2025-05-16T00:12:22.084475415Z" level=info msg="TearDown network for sandbox \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\" successfully" May 16 00:12:22.085049 containerd[1661]: time="2025-05-16T00:12:22.084519638Z" level=info msg="StopPodSandbox for \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\" returns successfully" May 16 00:12:22.085628 containerd[1661]: time="2025-05-16T00:12:22.085607960Z" level=info msg="StopPodSandbox for \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\"" May 16 00:12:22.086242 containerd[1661]: time="2025-05-16T00:12:22.085662142Z" level=info msg="TearDown network for sandbox \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\" successfully" May 16 00:12:22.086407 containerd[1661]: time="2025-05-16T00:12:22.086272237Z" level=info msg="StopPodSandbox for \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\" returns successfully" May 16 00:12:22.087052 containerd[1661]: time="2025-05-16T00:12:22.087028787Z" level=info msg="StopPodSandbox for \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\"" May 16 00:12:22.087300 containerd[1661]: time="2025-05-16T00:12:22.087229153Z" level=info msg="TearDown network for sandbox \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\" successfully" May 16 00:12:22.087300 containerd[1661]: time="2025-05-16T00:12:22.087246615Z" level=info msg="StopPodSandbox for \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\" returns successfully" May 16 00:12:22.087659 containerd[1661]: time="2025-05-16T00:12:22.087588166Z" level=info msg="StopPodSandbox for \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\"" May 16 00:12:22.087659 containerd[1661]: time="2025-05-16T00:12:22.087641216Z" level=info msg="TearDown network for sandbox \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\" successfully" May 16 00:12:22.087659 containerd[1661]: time="2025-05-16T00:12:22.087648920Z" level=info msg="StopPodSandbox for \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\" returns successfully" May 16 00:12:22.088058 containerd[1661]: time="2025-05-16T00:12:22.088039514Z" level=info msg="StopPodSandbox for \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\"" May 16 00:12:22.088155 containerd[1661]: time="2025-05-16T00:12:22.088097112Z" level=info msg="TearDown network for sandbox \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\" successfully" May 16 00:12:22.088155 containerd[1661]: time="2025-05-16T00:12:22.088106920Z" level=info msg="StopPodSandbox for \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\" 
returns successfully" May 16 00:12:22.089066 containerd[1661]: time="2025-05-16T00:12:22.089028510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvvxm,Uid:0fb2e616-5516-4e0a-a947-fe8eddf1e618,Namespace:calico-system,Attempt:7,}" May 16 00:12:22.176615 containerd[1661]: time="2025-05-16T00:12:22.176557923Z" level=error msg="Failed to destroy network for sandbox \"c810c0ae20383ed7db4036f4ebd580c5cd0b4c30a790307440643c198b514651\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:22.178188 containerd[1661]: time="2025-05-16T00:12:22.177489892Z" level=error msg="encountered an error cleaning up failed sandbox \"c810c0ae20383ed7db4036f4ebd580c5cd0b4c30a790307440643c198b514651\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:22.178188 containerd[1661]: time="2025-05-16T00:12:22.177571726Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvvxm,Uid:0fb2e616-5516-4e0a-a947-fe8eddf1e618,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"c810c0ae20383ed7db4036f4ebd580c5cd0b4c30a790307440643c198b514651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:22.178262 kubelet[2242]: E0516 00:12:22.177780 2242 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c810c0ae20383ed7db4036f4ebd580c5cd0b4c30a790307440643c198b514651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:22.178262 kubelet[2242]: E0516 00:12:22.177842 2242 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c810c0ae20383ed7db4036f4ebd580c5cd0b4c30a790307440643c198b514651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kvvxm" May 16 00:12:22.178262 kubelet[2242]: E0516 00:12:22.177867 2242 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c810c0ae20383ed7db4036f4ebd580c5cd0b4c30a790307440643c198b514651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kvvxm" May 16 00:12:22.178418 kubelet[2242]: E0516 00:12:22.177916 2242 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kvvxm_calico-system(0fb2e616-5516-4e0a-a947-fe8eddf1e618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kvvxm_calico-system(0fb2e616-5516-4e0a-a947-fe8eddf1e618)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"c810c0ae20383ed7db4036f4ebd580c5cd0b4c30a790307440643c198b514651\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kvvxm" podUID="0fb2e616-5516-4e0a-a947-fe8eddf1e618" May 16 00:12:22.178546 containerd[1661]: time="2025-05-16T00:12:22.178325591Z" level=error msg="Failed to destroy network for sandbox \"dacc81ec9eb24b8dcc7312b5b3eb29c6fc5cb6a2d861e62518b6ebdb9ffd0638\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:22.192320 containerd[1661]: time="2025-05-16T00:12:22.178582504Z" level=error msg="encountered an error cleaning up failed sandbox \"dacc81ec9eb24b8dcc7312b5b3eb29c6fc5cb6a2d861e62518b6ebdb9ffd0638\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:22.192320 containerd[1661]: time="2025-05-16T00:12:22.178622429Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ghpwx,Uid:f4d71287-81a8-4c56-92d4-5d01dc562d29,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"dacc81ec9eb24b8dcc7312b5b3eb29c6fc5cb6a2d861e62518b6ebdb9ffd0638\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:22.192476 kubelet[2242]: E0516 00:12:22.178874 2242 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dacc81ec9eb24b8dcc7312b5b3eb29c6fc5cb6a2d861e62518b6ebdb9ffd0638\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:22.192476 kubelet[2242]: E0516 00:12:22.178923 2242 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dacc81ec9eb24b8dcc7312b5b3eb29c6fc5cb6a2d861e62518b6ebdb9ffd0638\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-ghpwx" May 16 00:12:22.192476 kubelet[2242]: E0516 00:12:22.178945 2242 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dacc81ec9eb24b8dcc7312b5b3eb29c6fc5cb6a2d861e62518b6ebdb9ffd0638\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-ghpwx" May 16 00:12:22.192820 kubelet[2242]: E0516 00:12:22.178987 2242 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-ghpwx_default(f4d71287-81a8-4c56-92d4-5d01dc562d29)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-ghpwx_default(f4d71287-81a8-4c56-92d4-5d01dc562d29)\\\": 
rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dacc81ec9eb24b8dcc7312b5b3eb29c6fc5cb6a2d861e62518b6ebdb9ffd0638\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-ghpwx" podUID="f4d71287-81a8-4c56-92d4-5d01dc562d29" May 16 00:12:22.905750 kubelet[2242]: E0516 00:12:22.905712 2242 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:22.925293 kubelet[2242]: E0516 00:12:22.924379 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:23.033921 containerd[1661]: time="2025-05-16T00:12:23.033864904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:12:23.035916 containerd[1661]: time="2025-05-16T00:12:23.035903029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 16 00:12:23.038592 containerd[1661]: time="2025-05-16T00:12:23.038561097Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:12:23.041054 containerd[1661]: time="2025-05-16T00:12:23.040985407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:12:23.041547 containerd[1661]: time="2025-05-16T00:12:23.041442324Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 8.003968948s" May 16 00:12:23.041547 containerd[1661]: time="2025-05-16T00:12:23.041464676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 16 00:12:23.057881 containerd[1661]: time="2025-05-16T00:12:23.057840709Z" level=info msg="CreateContainer within sandbox \"a796221560ca0af15ab00617847484c1ffc9ec812fbc2a3b04525afa7ce195e7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 16 00:12:23.073915 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c810c0ae20383ed7db4036f4ebd580c5cd0b4c30a790307440643c198b514651-shm.mount: Deactivated successfully. 
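The entries just above record containerd finishing the pull of ghcr.io/flatcar/calico/node:v3.30.0 after roughly 8 seconds and the kubelet requesting a calico-node container inside the already-prepared sandbox. As a hedged illustration only (not part of the captured log): the kubelet drives this over CRI, but the same pull can be reproduced against the node's containerd with its native Go client. The image reference comes from the log; the socket path and the "k8s.io" namespace are assumptions about a typical CRI setup.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to the same containerd instance the kubelet talks to.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images normally live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Pull and unpack the image named in the log.
        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.0", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name())
    }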
May 16 00:12:23.074113 containerd[1661]: time="2025-05-16T00:12:23.074050219Z" level=info msg="CreateContainer within sandbox \"a796221560ca0af15ab00617847484c1ffc9ec812fbc2a3b04525afa7ce195e7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8140c837057a8b6665935281e216df68c9cdc3240d68352c7d5bf20513b116d7\"" May 16 00:12:23.074633 containerd[1661]: time="2025-05-16T00:12:23.074461029Z" level=info msg="StartContainer for \"8140c837057a8b6665935281e216df68c9cdc3240d68352c7d5bf20513b116d7\"" May 16 00:12:23.074866 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dacc81ec9eb24b8dcc7312b5b3eb29c6fc5cb6a2d861e62518b6ebdb9ffd0638-shm.mount: Deactivated successfully. May 16 00:12:23.075136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4228410952.mount: Deactivated successfully. May 16 00:12:23.088525 kubelet[2242]: I0516 00:12:23.087928 2242 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c810c0ae20383ed7db4036f4ebd580c5cd0b4c30a790307440643c198b514651" May 16 00:12:23.088747 containerd[1661]: time="2025-05-16T00:12:23.088721712Z" level=info msg="StopPodSandbox for \"c810c0ae20383ed7db4036f4ebd580c5cd0b4c30a790307440643c198b514651\"" May 16 00:12:23.089202 containerd[1661]: time="2025-05-16T00:12:23.089187215Z" level=info msg="Ensure that sandbox c810c0ae20383ed7db4036f4ebd580c5cd0b4c30a790307440643c198b514651 in task-service has been cleanup successfully" May 16 00:12:23.091503 containerd[1661]: time="2025-05-16T00:12:23.091489677Z" level=info msg="TearDown network for sandbox \"c810c0ae20383ed7db4036f4ebd580c5cd0b4c30a790307440643c198b514651\" successfully" May 16 00:12:23.091561 containerd[1661]: time="2025-05-16T00:12:23.091551673Z" level=info msg="StopPodSandbox for \"c810c0ae20383ed7db4036f4ebd580c5cd0b4c30a790307440643c198b514651\" returns successfully" May 16 00:12:23.091976 kubelet[2242]: I0516 00:12:23.091968 2242 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dacc81ec9eb24b8dcc7312b5b3eb29c6fc5cb6a2d861e62518b6ebdb9ffd0638" May 16 00:12:23.092133 containerd[1661]: time="2025-05-16T00:12:23.092117945Z" level=info msg="StopPodSandbox for \"f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce\"" May 16 00:12:23.092793 containerd[1661]: time="2025-05-16T00:12:23.092517235Z" level=info msg="TearDown network for sandbox \"f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce\" successfully" May 16 00:12:23.092852 containerd[1661]: time="2025-05-16T00:12:23.092841313Z" level=info msg="StopPodSandbox for \"f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce\" returns successfully" May 16 00:12:23.092911 containerd[1661]: time="2025-05-16T00:12:23.092458394Z" level=info msg="StopPodSandbox for \"dacc81ec9eb24b8dcc7312b5b3eb29c6fc5cb6a2d861e62518b6ebdb9ffd0638\"" May 16 00:12:23.093325 containerd[1661]: time="2025-05-16T00:12:23.093311545Z" level=info msg="Ensure that sandbox dacc81ec9eb24b8dcc7312b5b3eb29c6fc5cb6a2d861e62518b6ebdb9ffd0638 in task-service has been cleanup successfully" May 16 00:12:23.094483 containerd[1661]: time="2025-05-16T00:12:23.093472738Z" level=info msg="StopPodSandbox for \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\"" May 16 00:12:23.094483 containerd[1661]: time="2025-05-16T00:12:23.094336169Z" level=info msg="TearDown network for sandbox \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\" successfully" May 16 00:12:23.094483 containerd[1661]: 
time="2025-05-16T00:12:23.094377416Z" level=info msg="StopPodSandbox for \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\" returns successfully" May 16 00:12:23.094713 containerd[1661]: time="2025-05-16T00:12:23.094613178Z" level=info msg="TearDown network for sandbox \"dacc81ec9eb24b8dcc7312b5b3eb29c6fc5cb6a2d861e62518b6ebdb9ffd0638\" successfully" May 16 00:12:23.094713 containerd[1661]: time="2025-05-16T00:12:23.094625271Z" level=info msg="StopPodSandbox for \"dacc81ec9eb24b8dcc7312b5b3eb29c6fc5cb6a2d861e62518b6ebdb9ffd0638\" returns successfully" May 16 00:12:23.096399 containerd[1661]: time="2025-05-16T00:12:23.095301630Z" level=info msg="StopPodSandbox for \"5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5\"" May 16 00:12:23.096399 containerd[1661]: time="2025-05-16T00:12:23.095373876Z" level=info msg="TearDown network for sandbox \"5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5\" successfully" May 16 00:12:23.096399 containerd[1661]: time="2025-05-16T00:12:23.095382993Z" level=info msg="StopPodSandbox for \"5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5\" returns successfully" May 16 00:12:23.096399 containerd[1661]: time="2025-05-16T00:12:23.095477711Z" level=info msg="StopPodSandbox for \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\"" May 16 00:12:23.096399 containerd[1661]: time="2025-05-16T00:12:23.095521563Z" level=info msg="TearDown network for sandbox \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\" successfully" May 16 00:12:23.096399 containerd[1661]: time="2025-05-16T00:12:23.095528566Z" level=info msg="StopPodSandbox for \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\" returns successfully" May 16 00:12:23.095709 systemd[1]: run-netns-cni\x2d2dfb9694\x2dee4e\x2dcab9\x2dd7fa\x2d9536a09a61ff.mount: Deactivated successfully. 
May 16 00:12:23.097577 containerd[1661]: time="2025-05-16T00:12:23.097563004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ghpwx,Uid:f4d71287-81a8-4c56-92d4-5d01dc562d29,Namespace:default,Attempt:2,}" May 16 00:12:23.098027 containerd[1661]: time="2025-05-16T00:12:23.098012829Z" level=info msg="StopPodSandbox for \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\"" May 16 00:12:23.098213 containerd[1661]: time="2025-05-16T00:12:23.098112305Z" level=info msg="TearDown network for sandbox \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\" successfully" May 16 00:12:23.098213 containerd[1661]: time="2025-05-16T00:12:23.098123156Z" level=info msg="StopPodSandbox for \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\" returns successfully" May 16 00:12:23.098469 containerd[1661]: time="2025-05-16T00:12:23.098368476Z" level=info msg="StopPodSandbox for \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\"" May 16 00:12:23.098469 containerd[1661]: time="2025-05-16T00:12:23.098428729Z" level=info msg="TearDown network for sandbox \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\" successfully" May 16 00:12:23.098469 containerd[1661]: time="2025-05-16T00:12:23.098436924Z" level=info msg="StopPodSandbox for \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\" returns successfully" May 16 00:12:23.098837 containerd[1661]: time="2025-05-16T00:12:23.098671834Z" level=info msg="StopPodSandbox for \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\"" May 16 00:12:23.098837 containerd[1661]: time="2025-05-16T00:12:23.098722319Z" level=info msg="TearDown network for sandbox \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\" successfully" May 16 00:12:23.098837 containerd[1661]: time="2025-05-16T00:12:23.098729533Z" level=info msg="StopPodSandbox for \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\" returns successfully" May 16 00:12:23.099204 containerd[1661]: time="2025-05-16T00:12:23.098995974Z" level=info msg="StopPodSandbox for \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\"" May 16 00:12:23.099204 containerd[1661]: time="2025-05-16T00:12:23.099044615Z" level=info msg="TearDown network for sandbox \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\" successfully" May 16 00:12:23.099204 containerd[1661]: time="2025-05-16T00:12:23.099051307Z" level=info msg="StopPodSandbox for \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\" returns successfully" May 16 00:12:23.099492 containerd[1661]: time="2025-05-16T00:12:23.099477757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvvxm,Uid:0fb2e616-5516-4e0a-a947-fe8eddf1e618,Namespace:calico-system,Attempt:8,}" May 16 00:12:23.100881 systemd[1]: run-netns-cni\x2dcfbb01dd\x2db56a\x2dbff7\x2d67a2\x2d2c4a84ce88f2.mount: Deactivated successfully. 
May 16 00:12:23.204475 containerd[1661]: time="2025-05-16T00:12:23.204382749Z" level=error msg="Failed to destroy network for sandbox \"fa2c55aad47365aa44d3b117b42884999bca477ec97798435ea2379045b3a606\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:23.204845 containerd[1661]: time="2025-05-16T00:12:23.204825951Z" level=error msg="encountered an error cleaning up failed sandbox \"fa2c55aad47365aa44d3b117b42884999bca477ec97798435ea2379045b3a606\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:23.204952 containerd[1661]: time="2025-05-16T00:12:23.204937050Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvvxm,Uid:0fb2e616-5516-4e0a-a947-fe8eddf1e618,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"fa2c55aad47365aa44d3b117b42884999bca477ec97798435ea2379045b3a606\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:23.205257 kubelet[2242]: E0516 00:12:23.205214 2242 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa2c55aad47365aa44d3b117b42884999bca477ec97798435ea2379045b3a606\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:23.205308 kubelet[2242]: E0516 00:12:23.205294 2242 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa2c55aad47365aa44d3b117b42884999bca477ec97798435ea2379045b3a606\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kvvxm" May 16 00:12:23.205369 kubelet[2242]: E0516 00:12:23.205322 2242 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa2c55aad47365aa44d3b117b42884999bca477ec97798435ea2379045b3a606\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kvvxm" May 16 00:12:23.207067 kubelet[2242]: E0516 00:12:23.207017 2242 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kvvxm_calico-system(0fb2e616-5516-4e0a-a947-fe8eddf1e618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kvvxm_calico-system(0fb2e616-5516-4e0a-a947-fe8eddf1e618)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa2c55aad47365aa44d3b117b42884999bca477ec97798435ea2379045b3a606\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kvvxm" 
podUID="0fb2e616-5516-4e0a-a947-fe8eddf1e618" May 16 00:12:23.209298 containerd[1661]: time="2025-05-16T00:12:23.209273657Z" level=info msg="StartContainer for \"8140c837057a8b6665935281e216df68c9cdc3240d68352c7d5bf20513b116d7\" returns successfully" May 16 00:12:23.229860 containerd[1661]: time="2025-05-16T00:12:23.229797885Z" level=error msg="Failed to destroy network for sandbox \"eb0355dc285bae0314eb88aa1b25215c5781290c210a2363536d5ecba1e60620\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:23.230412 containerd[1661]: time="2025-05-16T00:12:23.230309495Z" level=error msg="encountered an error cleaning up failed sandbox \"eb0355dc285bae0314eb88aa1b25215c5781290c210a2363536d5ecba1e60620\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:23.230465 containerd[1661]: time="2025-05-16T00:12:23.230393042Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ghpwx,Uid:f4d71287-81a8-4c56-92d4-5d01dc562d29,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"eb0355dc285bae0314eb88aa1b25215c5781290c210a2363536d5ecba1e60620\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:23.230831 kubelet[2242]: E0516 00:12:23.230802 2242 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb0355dc285bae0314eb88aa1b25215c5781290c210a2363536d5ecba1e60620\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:12:23.231225 kubelet[2242]: E0516 00:12:23.230924 2242 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb0355dc285bae0314eb88aa1b25215c5781290c210a2363536d5ecba1e60620\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-ghpwx" May 16 00:12:23.231225 kubelet[2242]: E0516 00:12:23.230947 2242 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb0355dc285bae0314eb88aa1b25215c5781290c210a2363536d5ecba1e60620\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-ghpwx" May 16 00:12:23.231225 kubelet[2242]: E0516 00:12:23.231007 2242 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-ghpwx_default(f4d71287-81a8-4c56-92d4-5d01dc562d29)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-ghpwx_default(f4d71287-81a8-4c56-92d4-5d01dc562d29)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"eb0355dc285bae0314eb88aa1b25215c5781290c210a2363536d5ecba1e60620\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-ghpwx" podUID="f4d71287-81a8-4c56-92d4-5d01dc562d29" May 16 00:12:23.281769 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 16 00:12:23.281920 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 16 00:12:23.924698 kubelet[2242]: E0516 00:12:23.924621 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:24.076682 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fa2c55aad47365aa44d3b117b42884999bca477ec97798435ea2379045b3a606-shm.mount: Deactivated successfully. May 16 00:12:24.102000 kubelet[2242]: I0516 00:12:24.101955 2242 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb0355dc285bae0314eb88aa1b25215c5781290c210a2363536d5ecba1e60620" May 16 00:12:24.102900 containerd[1661]: time="2025-05-16T00:12:24.102802191Z" level=info msg="StopPodSandbox for \"eb0355dc285bae0314eb88aa1b25215c5781290c210a2363536d5ecba1e60620\"" May 16 00:12:24.103270 containerd[1661]: time="2025-05-16T00:12:24.103250482Z" level=info msg="Ensure that sandbox eb0355dc285bae0314eb88aa1b25215c5781290c210a2363536d5ecba1e60620 in task-service has been cleanup successfully" May 16 00:12:24.106446 containerd[1661]: time="2025-05-16T00:12:24.105564385Z" level=info msg="TearDown network for sandbox \"eb0355dc285bae0314eb88aa1b25215c5781290c210a2363536d5ecba1e60620\" successfully" May 16 00:12:24.106446 containerd[1661]: time="2025-05-16T00:12:24.105610321Z" level=info msg="StopPodSandbox for \"eb0355dc285bae0314eb88aa1b25215c5781290c210a2363536d5ecba1e60620\" returns successfully" May 16 00:12:24.106446 containerd[1661]: time="2025-05-16T00:12:24.106234973Z" level=info msg="StopPodSandbox for \"dacc81ec9eb24b8dcc7312b5b3eb29c6fc5cb6a2d861e62518b6ebdb9ffd0638\"" May 16 00:12:24.106446 containerd[1661]: time="2025-05-16T00:12:24.106414200Z" level=info msg="TearDown network for sandbox \"dacc81ec9eb24b8dcc7312b5b3eb29c6fc5cb6a2d861e62518b6ebdb9ffd0638\" successfully" May 16 00:12:24.106446 containerd[1661]: time="2025-05-16T00:12:24.106433025Z" level=info msg="StopPodSandbox for \"dacc81ec9eb24b8dcc7312b5b3eb29c6fc5cb6a2d861e62518b6ebdb9ffd0638\" returns successfully" May 16 00:12:24.109288 containerd[1661]: time="2025-05-16T00:12:24.108765733Z" level=info msg="StopPodSandbox for \"5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5\"" May 16 00:12:24.109288 containerd[1661]: time="2025-05-16T00:12:24.109165773Z" level=info msg="TearDown network for sandbox \"5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5\" successfully" May 16 00:12:24.109288 containerd[1661]: time="2025-05-16T00:12:24.109216378Z" level=info msg="StopPodSandbox for \"5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5\" returns successfully" May 16 00:12:24.109523 systemd[1]: run-netns-cni\x2de91931f4\x2d7de6\x2d8615\x2daba0\x2dfd5d76ca33b6.mount: Deactivated successfully. 
May 16 00:12:24.112788 containerd[1661]: time="2025-05-16T00:12:24.112284145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ghpwx,Uid:f4d71287-81a8-4c56-92d4-5d01dc562d29,Namespace:default,Attempt:3,}" May 16 00:12:24.126139 kubelet[2242]: I0516 00:12:24.126106 2242 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa2c55aad47365aa44d3b117b42884999bca477ec97798435ea2379045b3a606" May 16 00:12:24.127634 containerd[1661]: time="2025-05-16T00:12:24.127323520Z" level=info msg="StopPodSandbox for \"fa2c55aad47365aa44d3b117b42884999bca477ec97798435ea2379045b3a606\"" May 16 00:12:24.128288 containerd[1661]: time="2025-05-16T00:12:24.128215553Z" level=info msg="Ensure that sandbox fa2c55aad47365aa44d3b117b42884999bca477ec97798435ea2379045b3a606 in task-service has been cleanup successfully" May 16 00:12:24.134015 containerd[1661]: time="2025-05-16T00:12:24.131555952Z" level=info msg="TearDown network for sandbox \"fa2c55aad47365aa44d3b117b42884999bca477ec97798435ea2379045b3a606\" successfully" May 16 00:12:24.134015 containerd[1661]: time="2025-05-16T00:12:24.131607368Z" level=info msg="StopPodSandbox for \"fa2c55aad47365aa44d3b117b42884999bca477ec97798435ea2379045b3a606\" returns successfully" May 16 00:12:24.134669 containerd[1661]: time="2025-05-16T00:12:24.134610323Z" level=info msg="StopPodSandbox for \"c810c0ae20383ed7db4036f4ebd580c5cd0b4c30a790307440643c198b514651\"" May 16 00:12:24.135339 systemd[1]: run-netns-cni\x2d8c7d862e\x2d6ddb\x2dcbaa\x2dd74e\x2d35ee43c3a618.mount: Deactivated successfully. May 16 00:12:24.137473 containerd[1661]: time="2025-05-16T00:12:24.136663216Z" level=info msg="TearDown network for sandbox \"c810c0ae20383ed7db4036f4ebd580c5cd0b4c30a790307440643c198b514651\" successfully" May 16 00:12:24.137473 containerd[1661]: time="2025-05-16T00:12:24.137321992Z" level=info msg="StopPodSandbox for \"c810c0ae20383ed7db4036f4ebd580c5cd0b4c30a790307440643c198b514651\" returns successfully" May 16 00:12:24.138486 containerd[1661]: time="2025-05-16T00:12:24.138099251Z" level=info msg="StopPodSandbox for \"f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce\"" May 16 00:12:24.138486 containerd[1661]: time="2025-05-16T00:12:24.138214297Z" level=info msg="TearDown network for sandbox \"f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce\" successfully" May 16 00:12:24.138486 containerd[1661]: time="2025-05-16T00:12:24.138230678Z" level=info msg="StopPodSandbox for \"f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce\" returns successfully" May 16 00:12:24.139035 containerd[1661]: time="2025-05-16T00:12:24.138991115Z" level=info msg="StopPodSandbox for \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\"" May 16 00:12:24.139616 containerd[1661]: time="2025-05-16T00:12:24.139540876Z" level=info msg="TearDown network for sandbox \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\" successfully" May 16 00:12:24.139616 containerd[1661]: time="2025-05-16T00:12:24.139566465Z" level=info msg="StopPodSandbox for \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\" returns successfully" May 16 00:12:24.142904 containerd[1661]: time="2025-05-16T00:12:24.142863442Z" level=info msg="StopPodSandbox for \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\"" May 16 00:12:24.143269 containerd[1661]: time="2025-05-16T00:12:24.143217897Z" level=info msg="TearDown network for sandbox 
\"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\" successfully" May 16 00:12:24.143436 containerd[1661]: time="2025-05-16T00:12:24.143415278Z" level=info msg="StopPodSandbox for \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\" returns successfully" May 16 00:12:24.148660 containerd[1661]: time="2025-05-16T00:12:24.148606790Z" level=info msg="StopPodSandbox for \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\"" May 16 00:12:24.150232 containerd[1661]: time="2025-05-16T00:12:24.149208079Z" level=info msg="TearDown network for sandbox \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\" successfully" May 16 00:12:24.150556 containerd[1661]: time="2025-05-16T00:12:24.150488812Z" level=info msg="StopPodSandbox for \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\" returns successfully" May 16 00:12:24.151858 containerd[1661]: time="2025-05-16T00:12:24.151797388Z" level=info msg="StopPodSandbox for \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\"" May 16 00:12:24.152293 containerd[1661]: time="2025-05-16T00:12:24.152230501Z" level=info msg="TearDown network for sandbox \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\" successfully" May 16 00:12:24.152293 containerd[1661]: time="2025-05-16T00:12:24.152255428Z" level=info msg="StopPodSandbox for \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\" returns successfully" May 16 00:12:24.152940 containerd[1661]: time="2025-05-16T00:12:24.152763671Z" level=info msg="StopPodSandbox for \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\"" May 16 00:12:24.152940 containerd[1661]: time="2025-05-16T00:12:24.152868218Z" level=info msg="TearDown network for sandbox \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\" successfully" May 16 00:12:24.152940 containerd[1661]: time="2025-05-16T00:12:24.152882405Z" level=info msg="StopPodSandbox for \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\" returns successfully" May 16 00:12:24.153885 containerd[1661]: time="2025-05-16T00:12:24.153436133Z" level=info msg="StopPodSandbox for \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\"" May 16 00:12:24.153885 containerd[1661]: time="2025-05-16T00:12:24.153535399Z" level=info msg="TearDown network for sandbox \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\" successfully" May 16 00:12:24.153885 containerd[1661]: time="2025-05-16T00:12:24.153548634Z" level=info msg="StopPodSandbox for \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\" returns successfully" May 16 00:12:24.157606 containerd[1661]: time="2025-05-16T00:12:24.157563538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvvxm,Uid:0fb2e616-5516-4e0a-a947-fe8eddf1e618,Namespace:calico-system,Attempt:9,}" May 16 00:12:24.370152 systemd-networkd[1265]: calie5caab23ed7: Link UP May 16 00:12:24.370326 systemd-networkd[1265]: calie5caab23ed7: Gained carrier May 16 00:12:24.387127 kubelet[2242]: I0516 00:12:24.387015 2242 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8rdzq" podStartSLOduration=4.411601849 podStartE2EDuration="21.38699327s" podCreationTimestamp="2025-05-16 00:12:03 +0000 UTC" firstStartedPulling="2025-05-16 00:12:06.067135029 +0000 UTC m=+3.448360033" lastFinishedPulling="2025-05-16 00:12:23.042526448 +0000 UTC m=+20.423751454" observedRunningTime="2025-05-16 
00:12:24.136209204 +0000 UTC m=+21.517434290" watchObservedRunningTime="2025-05-16 00:12:24.38699327 +0000 UTC m=+21.768218274" May 16 00:12:24.388145 containerd[1661]: 2025-05-16 00:12:24.229 [INFO][3127] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 16 00:12:24.388145 containerd[1661]: 2025-05-16 00:12:24.272 [INFO][3127] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-csi--node--driver--kvvxm-eth0 csi-node-driver- calico-system 0fb2e616-5516-4e0a-a947-fe8eddf1e618 1770 0 2025-05-16 00:12:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.4 csi-node-driver-kvvxm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie5caab23ed7 [] [] }} ContainerID="6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d" Namespace="calico-system" Pod="csi-node-driver-kvvxm" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--kvvxm-" May 16 00:12:24.388145 containerd[1661]: 2025-05-16 00:12:24.272 [INFO][3127] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d" Namespace="calico-system" Pod="csi-node-driver-kvvxm" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--kvvxm-eth0" May 16 00:12:24.388145 containerd[1661]: 2025-05-16 00:12:24.312 [INFO][3148] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d" HandleID="k8s-pod-network.6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d" Workload="10.0.0.4-k8s-csi--node--driver--kvvxm-eth0" May 16 00:12:24.388145 containerd[1661]: 2025-05-16 00:12:24.312 [INFO][3148] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d" HandleID="k8s-pod-network.6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d" Workload="10.0.0.4-k8s-csi--node--driver--kvvxm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9020), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.4", "pod":"csi-node-driver-kvvxm", "timestamp":"2025-05-16 00:12:24.312466649 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:12:24.388145 containerd[1661]: 2025-05-16 00:12:24.312 [INFO][3148] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:12:24.388145 containerd[1661]: 2025-05-16 00:12:24.312 [INFO][3148] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 00:12:24.388145 containerd[1661]: 2025-05-16 00:12:24.313 [INFO][3148] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' May 16 00:12:24.388145 containerd[1661]: 2025-05-16 00:12:24.321 [INFO][3148] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d" host="10.0.0.4" May 16 00:12:24.388145 containerd[1661]: 2025-05-16 00:12:24.327 [INFO][3148] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.4" May 16 00:12:24.388145 containerd[1661]: 2025-05-16 00:12:24.334 [INFO][3148] ipam/ipam.go 511: Trying affinity for 192.168.99.192/26 host="10.0.0.4" May 16 00:12:24.388145 containerd[1661]: 2025-05-16 00:12:24.337 [INFO][3148] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" May 16 00:12:24.388145 containerd[1661]: 2025-05-16 00:12:24.341 [INFO][3148] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" May 16 00:12:24.388145 containerd[1661]: 2025-05-16 00:12:24.341 [INFO][3148] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d" host="10.0.0.4" May 16 00:12:24.388145 containerd[1661]: 2025-05-16 00:12:24.343 [INFO][3148] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d May 16 00:12:24.388145 containerd[1661]: 2025-05-16 00:12:24.347 [INFO][3148] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d" host="10.0.0.4" May 16 00:12:24.388145 containerd[1661]: 2025-05-16 00:12:24.354 [INFO][3148] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.193/26] block=192.168.99.192/26 handle="k8s-pod-network.6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d" host="10.0.0.4" May 16 00:12:24.388145 containerd[1661]: 2025-05-16 00:12:24.354 [INFO][3148] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.193/26] handle="k8s-pod-network.6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d" host="10.0.0.4" May 16 00:12:24.388145 containerd[1661]: 2025-05-16 00:12:24.354 [INFO][3148] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
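The ipam/ipam.go lines above trace Calico's block-affinity assignment for this pod: the host 10.0.0.4 already holds an affinity for the 192.168.99.192/26 block, the block is loaded and confirmed, one address is claimed under a per-workload handle, the block is written back, and the host-wide IPAM lock is released. The toy model below is a simplified sketch of that idea under assumed data structures, not Calico's implementation; only the CIDR and the resulting addresses mirror the log.

    package main

    import (
        "fmt"
        "net/netip"
    )

    // block models one affine IPAM block; Calico's real allocator also tracks
    // handles, reservations and per-block metadata that are omitted here.
    type block struct {
        cidr      netip.Prefix
        allocated map[netip.Addr]string // address -> handle that claimed it
    }

    // assign hands out the next free address in the block, skipping the network
    // address, and records the claiming handle ("writing block in order to claim IPs").
    func (b *block) assign(handle string) (netip.Addr, bool) {
        for a := b.cidr.Addr().Next(); b.cidr.Contains(a); a = a.Next() {
            if _, taken := b.allocated[a]; !taken {
                b.allocated[a] = handle
                return a, true
            }
        }
        return netip.Addr{}, false // exhausted: the real allocator would claim another block
    }

    func main() {
        // 10.0.0.4 holds the affinity for this block, per the log.
        b := &block{cidr: netip.MustParsePrefix("192.168.99.192/26"), allocated: map[netip.Addr]string{}}
        csi, _ := b.assign("csi-node-driver-kvvxm")
        nginx, _ := b.assign("nginx-deployment-8587fbcb89-ghpwx")
        fmt.Println(csi, nginx) // 192.168.99.193 and 192.168.99.194, the two addresses handed out in this log
    }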
May 16 00:12:24.388145 containerd[1661]: 2025-05-16 00:12:24.355 [INFO][3148] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.193/26] IPv6=[] ContainerID="6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d" HandleID="k8s-pod-network.6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d" Workload="10.0.0.4-k8s-csi--node--driver--kvvxm-eth0" May 16 00:12:24.388865 containerd[1661]: 2025-05-16 00:12:24.358 [INFO][3127] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d" Namespace="calico-system" Pod="csi-node-driver-kvvxm" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--kvvxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--kvvxm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0fb2e616-5516-4e0a-a947-fe8eddf1e618", ResourceVersion:"1770", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 12, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"csi-node-driver-kvvxm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie5caab23ed7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:12:24.388865 containerd[1661]: 2025-05-16 00:12:24.359 [INFO][3127] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.193/32] ContainerID="6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d" Namespace="calico-system" Pod="csi-node-driver-kvvxm" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--kvvxm-eth0" May 16 00:12:24.388865 containerd[1661]: 2025-05-16 00:12:24.359 [INFO][3127] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie5caab23ed7 ContainerID="6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d" Namespace="calico-system" Pod="csi-node-driver-kvvxm" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--kvvxm-eth0" May 16 00:12:24.388865 containerd[1661]: 2025-05-16 00:12:24.372 [INFO][3127] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d" Namespace="calico-system" Pod="csi-node-driver-kvvxm" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--kvvxm-eth0" May 16 00:12:24.388865 containerd[1661]: 2025-05-16 00:12:24.373 [INFO][3127] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d" Namespace="calico-system" Pod="csi-node-driver-kvvxm" 
WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--kvvxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--kvvxm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0fb2e616-5516-4e0a-a947-fe8eddf1e618", ResourceVersion:"1770", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 12, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d", Pod:"csi-node-driver-kvvxm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie5caab23ed7", MAC:"2e:8e:c0:69:4f:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:12:24.388865 containerd[1661]: 2025-05-16 00:12:24.386 [INFO][3127] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d" Namespace="calico-system" Pod="csi-node-driver-kvvxm" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--kvvxm-eth0" May 16 00:12:24.404291 containerd[1661]: time="2025-05-16T00:12:24.403970039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:12:24.404291 containerd[1661]: time="2025-05-16T00:12:24.404027066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:12:24.404291 containerd[1661]: time="2025-05-16T00:12:24.404040431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:12:24.404291 containerd[1661]: time="2025-05-16T00:12:24.404116454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:12:24.438434 containerd[1661]: time="2025-05-16T00:12:24.438371529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kvvxm,Uid:0fb2e616-5516-4e0a-a947-fe8eddf1e618,Namespace:calico-system,Attempt:9,} returns sandbox id \"6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d\"" May 16 00:12:24.440389 containerd[1661]: time="2025-05-16T00:12:24.440275632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 16 00:12:24.462087 systemd-networkd[1265]: cali305dddac159: Link UP May 16 00:12:24.462800 systemd-networkd[1265]: cali305dddac159: Gained carrier May 16 00:12:24.472014 containerd[1661]: 2025-05-16 00:12:24.239 [INFO][3108] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 16 00:12:24.472014 containerd[1661]: 2025-05-16 00:12:24.272 [INFO][3108] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nginx--deployment--8587fbcb89--ghpwx-eth0 nginx-deployment-8587fbcb89- default f4d71287-81a8-4c56-92d4-5d01dc562d29 1858 0 2025-05-16 00:12:21 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 nginx-deployment-8587fbcb89-ghpwx eth0 default [] [] [kns.default ksa.default.default] cali305dddac159 [] [] }} ContainerID="079c11221977b76d42e4eed9918466126ac267a0f306d5fdbcc2eaad00e79859" Namespace="default" Pod="nginx-deployment-8587fbcb89-ghpwx" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--ghpwx-" May 16 00:12:24.472014 containerd[1661]: 2025-05-16 00:12:24.272 [INFO][3108] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="079c11221977b76d42e4eed9918466126ac267a0f306d5fdbcc2eaad00e79859" Namespace="default" Pod="nginx-deployment-8587fbcb89-ghpwx" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--ghpwx-eth0" May 16 00:12:24.472014 containerd[1661]: 2025-05-16 00:12:24.312 [INFO][3146] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="079c11221977b76d42e4eed9918466126ac267a0f306d5fdbcc2eaad00e79859" HandleID="k8s-pod-network.079c11221977b76d42e4eed9918466126ac267a0f306d5fdbcc2eaad00e79859" Workload="10.0.0.4-k8s-nginx--deployment--8587fbcb89--ghpwx-eth0" May 16 00:12:24.472014 containerd[1661]: 2025-05-16 00:12:24.313 [INFO][3146] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="079c11221977b76d42e4eed9918466126ac267a0f306d5fdbcc2eaad00e79859" HandleID="k8s-pod-network.079c11221977b76d42e4eed9918466126ac267a0f306d5fdbcc2eaad00e79859" Workload="10.0.0.4-k8s-nginx--deployment--8587fbcb89--ghpwx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000232fa0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nginx-deployment-8587fbcb89-ghpwx", "timestamp":"2025-05-16 00:12:24.312831112 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:12:24.472014 containerd[1661]: 2025-05-16 00:12:24.313 [INFO][3146] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:12:24.472014 containerd[1661]: 2025-05-16 00:12:24.355 [INFO][3146] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 00:12:24.472014 containerd[1661]: 2025-05-16 00:12:24.355 [INFO][3146] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' May 16 00:12:24.472014 containerd[1661]: 2025-05-16 00:12:24.422 [INFO][3146] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.079c11221977b76d42e4eed9918466126ac267a0f306d5fdbcc2eaad00e79859" host="10.0.0.4" May 16 00:12:24.472014 containerd[1661]: 2025-05-16 00:12:24.429 [INFO][3146] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.4" May 16 00:12:24.472014 containerd[1661]: 2025-05-16 00:12:24.436 [INFO][3146] ipam/ipam.go 511: Trying affinity for 192.168.99.192/26 host="10.0.0.4" May 16 00:12:24.472014 containerd[1661]: 2025-05-16 00:12:24.439 [INFO][3146] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" May 16 00:12:24.472014 containerd[1661]: 2025-05-16 00:12:24.442 [INFO][3146] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" May 16 00:12:24.472014 containerd[1661]: 2025-05-16 00:12:24.442 [INFO][3146] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.079c11221977b76d42e4eed9918466126ac267a0f306d5fdbcc2eaad00e79859" host="10.0.0.4" May 16 00:12:24.472014 containerd[1661]: 2025-05-16 00:12:24.445 [INFO][3146] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.079c11221977b76d42e4eed9918466126ac267a0f306d5fdbcc2eaad00e79859 May 16 00:12:24.472014 containerd[1661]: 2025-05-16 00:12:24.450 [INFO][3146] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.079c11221977b76d42e4eed9918466126ac267a0f306d5fdbcc2eaad00e79859" host="10.0.0.4" May 16 00:12:24.472014 containerd[1661]: 2025-05-16 00:12:24.457 [INFO][3146] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.194/26] block=192.168.99.192/26 handle="k8s-pod-network.079c11221977b76d42e4eed9918466126ac267a0f306d5fdbcc2eaad00e79859" host="10.0.0.4" May 16 00:12:24.472014 containerd[1661]: 2025-05-16 00:12:24.457 [INFO][3146] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.194/26] handle="k8s-pod-network.079c11221977b76d42e4eed9918466126ac267a0f306d5fdbcc2eaad00e79859" host="10.0.0.4" May 16 00:12:24.472014 containerd[1661]: 2025-05-16 00:12:24.457 [INFO][3146] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 00:12:24.472014 containerd[1661]: 2025-05-16 00:12:24.457 [INFO][3146] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.194/26] IPv6=[] ContainerID="079c11221977b76d42e4eed9918466126ac267a0f306d5fdbcc2eaad00e79859" HandleID="k8s-pod-network.079c11221977b76d42e4eed9918466126ac267a0f306d5fdbcc2eaad00e79859" Workload="10.0.0.4-k8s-nginx--deployment--8587fbcb89--ghpwx-eth0" May 16 00:12:24.473031 containerd[1661]: 2025-05-16 00:12:24.459 [INFO][3108] cni-plugin/k8s.go 418: Populated endpoint ContainerID="079c11221977b76d42e4eed9918466126ac267a0f306d5fdbcc2eaad00e79859" Namespace="default" Pod="nginx-deployment-8587fbcb89-ghpwx" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--ghpwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--8587fbcb89--ghpwx-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"f4d71287-81a8-4c56-92d4-5d01dc562d29", ResourceVersion:"1858", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 12, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-ghpwx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali305dddac159", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:12:24.473031 containerd[1661]: 2025-05-16 00:12:24.459 [INFO][3108] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.194/32] ContainerID="079c11221977b76d42e4eed9918466126ac267a0f306d5fdbcc2eaad00e79859" Namespace="default" Pod="nginx-deployment-8587fbcb89-ghpwx" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--ghpwx-eth0" May 16 00:12:24.473031 containerd[1661]: 2025-05-16 00:12:24.459 [INFO][3108] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali305dddac159 ContainerID="079c11221977b76d42e4eed9918466126ac267a0f306d5fdbcc2eaad00e79859" Namespace="default" Pod="nginx-deployment-8587fbcb89-ghpwx" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--ghpwx-eth0" May 16 00:12:24.473031 containerd[1661]: 2025-05-16 00:12:24.463 [INFO][3108] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="079c11221977b76d42e4eed9918466126ac267a0f306d5fdbcc2eaad00e79859" Namespace="default" Pod="nginx-deployment-8587fbcb89-ghpwx" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--ghpwx-eth0" May 16 00:12:24.473031 containerd[1661]: 2025-05-16 00:12:24.463 [INFO][3108] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="079c11221977b76d42e4eed9918466126ac267a0f306d5fdbcc2eaad00e79859" Namespace="default" Pod="nginx-deployment-8587fbcb89-ghpwx" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--ghpwx-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--8587fbcb89--ghpwx-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"f4d71287-81a8-4c56-92d4-5d01dc562d29", ResourceVersion:"1858", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 12, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"079c11221977b76d42e4eed9918466126ac267a0f306d5fdbcc2eaad00e79859", Pod:"nginx-deployment-8587fbcb89-ghpwx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali305dddac159", MAC:"6a:76:56:d0:06:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:12:24.473031 containerd[1661]: 2025-05-16 00:12:24.470 [INFO][3108] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="079c11221977b76d42e4eed9918466126ac267a0f306d5fdbcc2eaad00e79859" Namespace="default" Pod="nginx-deployment-8587fbcb89-ghpwx" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--ghpwx-eth0" May 16 00:12:24.489942 containerd[1661]: time="2025-05-16T00:12:24.489804259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:12:24.489942 containerd[1661]: time="2025-05-16T00:12:24.489868239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:12:24.489942 containerd[1661]: time="2025-05-16T00:12:24.489888277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:12:24.490236 containerd[1661]: time="2025-05-16T00:12:24.489985900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:12:24.570906 containerd[1661]: time="2025-05-16T00:12:24.570846869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ghpwx,Uid:f4d71287-81a8-4c56-92d4-5d01dc562d29,Namespace:default,Attempt:3,} returns sandbox id \"079c11221977b76d42e4eed9918466126ac267a0f306d5fdbcc2eaad00e79859\"" May 16 00:12:24.842409 kernel: bpftool[3373]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 16 00:12:24.925117 kubelet[2242]: E0516 00:12:24.925028 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:25.079493 systemd-networkd[1265]: vxlan.calico: Link UP May 16 00:12:25.079503 systemd-networkd[1265]: vxlan.calico: Gained carrier May 16 00:12:25.925514 kubelet[2242]: E0516 00:12:25.925408 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:26.000724 systemd-networkd[1265]: cali305dddac159: Gained IPv6LL May 16 00:12:26.256817 systemd-networkd[1265]: calie5caab23ed7: Gained IPv6LL May 16 00:12:26.450568 containerd[1661]: time="2025-05-16T00:12:26.450501576Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:12:26.451607 containerd[1661]: time="2025-05-16T00:12:26.451377589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 16 00:12:26.452705 containerd[1661]: time="2025-05-16T00:12:26.452683471Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:12:26.456407 containerd[1661]: time="2025-05-16T00:12:26.456313362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:12:26.457248 containerd[1661]: time="2025-05-16T00:12:26.457108875Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 2.016759976s" May 16 00:12:26.457248 containerd[1661]: time="2025-05-16T00:12:26.457141376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 16 00:12:26.458948 containerd[1661]: time="2025-05-16T00:12:26.458915145Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 16 00:12:26.459739 containerd[1661]: time="2025-05-16T00:12:26.459704867Z" level=info msg="CreateContainer within sandbox \"6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 16 00:12:26.477554 containerd[1661]: time="2025-05-16T00:12:26.477482659Z" level=info msg="CreateContainer within sandbox \"6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"96dae70314c7a138e9b6c2bb2cecb2a3c5d15cbf767201dbdbafba7125ef03b8\"" May 16 00:12:26.478271 
containerd[1661]: time="2025-05-16T00:12:26.478135996Z" level=info msg="StartContainer for \"96dae70314c7a138e9b6c2bb2cecb2a3c5d15cbf767201dbdbafba7125ef03b8\"" May 16 00:12:26.548239 containerd[1661]: time="2025-05-16T00:12:26.548181675Z" level=info msg="StartContainer for \"96dae70314c7a138e9b6c2bb2cecb2a3c5d15cbf767201dbdbafba7125ef03b8\" returns successfully" May 16 00:12:26.925975 kubelet[2242]: E0516 00:12:26.925811 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:27.090278 systemd-networkd[1265]: vxlan.calico: Gained IPv6LL May 16 00:12:27.926658 kubelet[2242]: E0516 00:12:27.926583 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:28.927759 kubelet[2242]: E0516 00:12:28.927692 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:29.139866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2704562236.mount: Deactivated successfully. May 16 00:12:29.928178 kubelet[2242]: E0516 00:12:29.928122 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:30.033315 containerd[1661]: time="2025-05-16T00:12:30.033203292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:12:30.035055 containerd[1661]: time="2025-05-16T00:12:30.034988451Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73306220" May 16 00:12:30.036786 containerd[1661]: time="2025-05-16T00:12:30.036717807Z" level=info msg="ImageCreate event name:\"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:12:30.040692 containerd[1661]: time="2025-05-16T00:12:30.040624047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:12:30.042391 containerd[1661]: time="2025-05-16T00:12:30.042187291Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"73306098\" in 3.58323129s" May 16 00:12:30.042391 containerd[1661]: time="2025-05-16T00:12:30.042224631Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 16 00:12:30.044641 containerd[1661]: time="2025-05-16T00:12:30.044601942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 16 00:12:30.045424 containerd[1661]: time="2025-05-16T00:12:30.045276778Z" level=info msg="CreateContainer within sandbox \"079c11221977b76d42e4eed9918466126ac267a0f306d5fdbcc2eaad00e79859\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 16 00:12:30.062405 containerd[1661]: time="2025-05-16T00:12:30.062292319Z" level=info msg="CreateContainer within sandbox \"079c11221977b76d42e4eed9918466126ac267a0f306d5fdbcc2eaad00e79859\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id 
\"cf36eed3cab4f996ca81fd917a9bb586f751e7a042c8a691b1d65a9b15ddf22c\"" May 16 00:12:30.064279 containerd[1661]: time="2025-05-16T00:12:30.063954178Z" level=info msg="StartContainer for \"cf36eed3cab4f996ca81fd917a9bb586f751e7a042c8a691b1d65a9b15ddf22c\"" May 16 00:12:30.130126 containerd[1661]: time="2025-05-16T00:12:30.129937451Z" level=info msg="StartContainer for \"cf36eed3cab4f996ca81fd917a9bb586f751e7a042c8a691b1d65a9b15ddf22c\" returns successfully" May 16 00:12:30.166861 kubelet[2242]: I0516 00:12:30.166054 2242 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-ghpwx" podStartSLOduration=3.69487009 podStartE2EDuration="9.166027167s" podCreationTimestamp="2025-05-16 00:12:21 +0000 UTC" firstStartedPulling="2025-05-16 00:12:24.572775258 +0000 UTC m=+21.954000283" lastFinishedPulling="2025-05-16 00:12:30.043932335 +0000 UTC m=+27.425157360" observedRunningTime="2025-05-16 00:12:30.165754616 +0000 UTC m=+27.546979620" watchObservedRunningTime="2025-05-16 00:12:30.166027167 +0000 UTC m=+27.547252192" May 16 00:12:30.929034 kubelet[2242]: E0516 00:12:30.928954 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:31.930154 kubelet[2242]: E0516 00:12:31.930082 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:32.222191 containerd[1661]: time="2025-05-16T00:12:32.222061588Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:12:32.223192 containerd[1661]: time="2025-05-16T00:12:32.223133029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 16 00:12:32.224506 containerd[1661]: time="2025-05-16T00:12:32.224442996Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:12:32.226227 containerd[1661]: time="2025-05-16T00:12:32.226175337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:12:32.226843 containerd[1661]: time="2025-05-16T00:12:32.226697678Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 2.182060279s" May 16 00:12:32.226843 containerd[1661]: time="2025-05-16T00:12:32.226722253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 16 00:12:32.229104 containerd[1661]: time="2025-05-16T00:12:32.229072253Z" level=info msg="CreateContainer within sandbox \"6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 16 00:12:32.252697 containerd[1661]: time="2025-05-16T00:12:32.252634069Z" level=info 
msg="CreateContainer within sandbox \"6ca321b96b2ea0d59ca11b986a99a298dde3937d2234f5a72dfab6ec07c4c87d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7894515c3364e44c8d71a5686a1ac44b3becf0c023b0abc66273ee393a226f8e\"" May 16 00:12:32.253333 containerd[1661]: time="2025-05-16T00:12:32.253301451Z" level=info msg="StartContainer for \"7894515c3364e44c8d71a5686a1ac44b3becf0c023b0abc66273ee393a226f8e\"" May 16 00:12:32.326302 containerd[1661]: time="2025-05-16T00:12:32.326237680Z" level=info msg="StartContainer for \"7894515c3364e44c8d71a5686a1ac44b3becf0c023b0abc66273ee393a226f8e\" returns successfully" May 16 00:12:32.931258 kubelet[2242]: E0516 00:12:32.931170 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:33.021613 kubelet[2242]: I0516 00:12:33.021563 2242 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 16 00:12:33.021613 kubelet[2242]: I0516 00:12:33.021607 2242 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 16 00:12:33.181718 kubelet[2242]: I0516 00:12:33.181523 2242 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kvvxm" podStartSLOduration=22.393521929 podStartE2EDuration="30.181496498s" podCreationTimestamp="2025-05-16 00:12:03 +0000 UTC" firstStartedPulling="2025-05-16 00:12:24.439852748 +0000 UTC m=+21.821077763" lastFinishedPulling="2025-05-16 00:12:32.227827327 +0000 UTC m=+29.609052332" observedRunningTime="2025-05-16 00:12:33.181488413 +0000 UTC m=+30.562713458" watchObservedRunningTime="2025-05-16 00:12:33.181496498 +0000 UTC m=+30.562721543" May 16 00:12:33.932403 kubelet[2242]: E0516 00:12:33.932263 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:34.932596 kubelet[2242]: E0516 00:12:34.932507 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:35.933335 kubelet[2242]: E0516 00:12:35.933275 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:36.934275 kubelet[2242]: E0516 00:12:36.934162 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:37.934822 kubelet[2242]: E0516 00:12:37.934726 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:38.935222 kubelet[2242]: E0516 00:12:38.935127 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:39.935736 kubelet[2242]: E0516 00:12:39.935643 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:40.936306 kubelet[2242]: E0516 00:12:40.936219 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:41.936993 kubelet[2242]: E0516 00:12:41.936900 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:42.906394 
kubelet[2242]: E0516 00:12:42.906287 2242 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:42.937787 kubelet[2242]: E0516 00:12:42.937727 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:43.118672 kubelet[2242]: I0516 00:12:43.118626 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/bd52b0a8-70a1-4212-8b62-4a0175d4bf47-data\") pod \"nfs-server-provisioner-0\" (UID: \"bd52b0a8-70a1-4212-8b62-4a0175d4bf47\") " pod="default/nfs-server-provisioner-0" May 16 00:12:43.118834 kubelet[2242]: I0516 00:12:43.118744 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr7zm\" (UniqueName: \"kubernetes.io/projected/bd52b0a8-70a1-4212-8b62-4a0175d4bf47-kube-api-access-zr7zm\") pod \"nfs-server-provisioner-0\" (UID: \"bd52b0a8-70a1-4212-8b62-4a0175d4bf47\") " pod="default/nfs-server-provisioner-0" May 16 00:12:43.304721 containerd[1661]: time="2025-05-16T00:12:43.304660577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:bd52b0a8-70a1-4212-8b62-4a0175d4bf47,Namespace:default,Attempt:0,}" May 16 00:12:43.465994 systemd-networkd[1265]: cali60e51b789ff: Link UP May 16 00:12:43.466268 systemd-networkd[1265]: cali60e51b789ff: Gained carrier May 16 00:12:43.482110 containerd[1661]: 2025-05-16 00:12:43.368 [INFO][3652] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default bd52b0a8-70a1-4212-8b62-4a0175d4bf47 1966 0 2025-05-16 00:12:42 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.4 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="2a15958fe7a50e5aec0527cc6a1469fae06492c43d0baf468937849b4edaa2b9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-" May 16 00:12:43.482110 containerd[1661]: 2025-05-16 00:12:43.369 [INFO][3652] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a15958fe7a50e5aec0527cc6a1469fae06492c43d0baf468937849b4edaa2b9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" May 16 00:12:43.482110 containerd[1661]: 2025-05-16 00:12:43.412 [INFO][3663] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a15958fe7a50e5aec0527cc6a1469fae06492c43d0baf468937849b4edaa2b9" HandleID="k8s-pod-network.2a15958fe7a50e5aec0527cc6a1469fae06492c43d0baf468937849b4edaa2b9" 
Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" May 16 00:12:43.482110 containerd[1661]: 2025-05-16 00:12:43.413 [INFO][3663] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2a15958fe7a50e5aec0527cc6a1469fae06492c43d0baf468937849b4edaa2b9" HandleID="k8s-pod-network.2a15958fe7a50e5aec0527cc6a1469fae06492c43d0baf468937849b4edaa2b9" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333830), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nfs-server-provisioner-0", "timestamp":"2025-05-16 00:12:43.412878195 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:12:43.482110 containerd[1661]: 2025-05-16 00:12:43.413 [INFO][3663] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:12:43.482110 containerd[1661]: 2025-05-16 00:12:43.413 [INFO][3663] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:12:43.482110 containerd[1661]: 2025-05-16 00:12:43.413 [INFO][3663] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' May 16 00:12:43.482110 containerd[1661]: 2025-05-16 00:12:43.422 [INFO][3663] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a15958fe7a50e5aec0527cc6a1469fae06492c43d0baf468937849b4edaa2b9" host="10.0.0.4" May 16 00:12:43.482110 containerd[1661]: 2025-05-16 00:12:43.431 [INFO][3663] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.4" May 16 00:12:43.482110 containerd[1661]: 2025-05-16 00:12:43.437 [INFO][3663] ipam/ipam.go 511: Trying affinity for 192.168.99.192/26 host="10.0.0.4" May 16 00:12:43.482110 containerd[1661]: 2025-05-16 00:12:43.440 [INFO][3663] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" May 16 00:12:43.482110 containerd[1661]: 2025-05-16 00:12:43.442 [INFO][3663] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" May 16 00:12:43.482110 containerd[1661]: 2025-05-16 00:12:43.443 [INFO][3663] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.2a15958fe7a50e5aec0527cc6a1469fae06492c43d0baf468937849b4edaa2b9" host="10.0.0.4" May 16 00:12:43.482110 containerd[1661]: 2025-05-16 00:12:43.445 [INFO][3663] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2a15958fe7a50e5aec0527cc6a1469fae06492c43d0baf468937849b4edaa2b9 May 16 00:12:43.482110 containerd[1661]: 2025-05-16 00:12:43.451 [INFO][3663] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.2a15958fe7a50e5aec0527cc6a1469fae06492c43d0baf468937849b4edaa2b9" host="10.0.0.4" May 16 00:12:43.482110 containerd[1661]: 2025-05-16 00:12:43.458 [INFO][3663] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.195/26] block=192.168.99.192/26 handle="k8s-pod-network.2a15958fe7a50e5aec0527cc6a1469fae06492c43d0baf468937849b4edaa2b9" host="10.0.0.4" May 16 00:12:43.482110 containerd[1661]: 2025-05-16 00:12:43.458 [INFO][3663] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.195/26] handle="k8s-pod-network.2a15958fe7a50e5aec0527cc6a1469fae06492c43d0baf468937849b4edaa2b9" host="10.0.0.4" May 16 00:12:43.482110 containerd[1661]: 2025-05-16 00:12:43.459 [INFO][3663] ipam/ipam_plugin.go 374: Released 
host-wide IPAM lock. May 16 00:12:43.482110 containerd[1661]: 2025-05-16 00:12:43.459 [INFO][3663] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.195/26] IPv6=[] ContainerID="2a15958fe7a50e5aec0527cc6a1469fae06492c43d0baf468937849b4edaa2b9" HandleID="k8s-pod-network.2a15958fe7a50e5aec0527cc6a1469fae06492c43d0baf468937849b4edaa2b9" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" May 16 00:12:43.485691 containerd[1661]: 2025-05-16 00:12:43.461 [INFO][3652] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a15958fe7a50e5aec0527cc6a1469fae06492c43d0baf468937849b4edaa2b9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"bd52b0a8-70a1-4212-8b62-4a0175d4bf47", ResourceVersion:"1966", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 12, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:12:43.485691 containerd[1661]: 2025-05-16 00:12:43.462 [INFO][3652] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.195/32] ContainerID="2a15958fe7a50e5aec0527cc6a1469fae06492c43d0baf468937849b4edaa2b9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" May 16 00:12:43.485691 containerd[1661]: 2025-05-16 00:12:43.462 [INFO][3652] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="2a15958fe7a50e5aec0527cc6a1469fae06492c43d0baf468937849b4edaa2b9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" May 16 00:12:43.485691 containerd[1661]: 2025-05-16 00:12:43.465 [INFO][3652] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a15958fe7a50e5aec0527cc6a1469fae06492c43d0baf468937849b4edaa2b9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" May 16 00:12:43.485972 containerd[1661]: 2025-05-16 00:12:43.465 [INFO][3652] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a15958fe7a50e5aec0527cc6a1469fae06492c43d0baf468937849b4edaa2b9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"bd52b0a8-70a1-4212-8b62-4a0175d4bf47", ResourceVersion:"1966", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 12, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"2a15958fe7a50e5aec0527cc6a1469fae06492c43d0baf468937849b4edaa2b9", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"1e:78:f5:ee:c5:0b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:12:43.485972 containerd[1661]: 2025-05-16 00:12:43.475 [INFO][3652] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a15958fe7a50e5aec0527cc6a1469fae06492c43d0baf468937849b4edaa2b9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" May 16 00:12:43.519882 containerd[1661]: time="2025-05-16T00:12:43.519569339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:12:43.519882 containerd[1661]: time="2025-05-16T00:12:43.519636404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:12:43.519882 containerd[1661]: time="2025-05-16T00:12:43.519652475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:12:43.519882 containerd[1661]: time="2025-05-16T00:12:43.519766889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:12:43.581707 containerd[1661]: time="2025-05-16T00:12:43.581657816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:bd52b0a8-70a1-4212-8b62-4a0175d4bf47,Namespace:default,Attempt:0,} returns sandbox id \"2a15958fe7a50e5aec0527cc6a1469fae06492c43d0baf468937849b4edaa2b9\"" May 16 00:12:43.583155 containerd[1661]: time="2025-05-16T00:12:43.583139477Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 16 00:12:43.938269 kubelet[2242]: E0516 00:12:43.938056 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:44.938636 kubelet[2242]: E0516 00:12:44.938576 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:45.347555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount774221986.mount: Deactivated successfully. May 16 00:12:45.522338 systemd-networkd[1265]: cali60e51b789ff: Gained IPv6LL May 16 00:12:45.939003 kubelet[2242]: E0516 00:12:45.938922 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:46.770672 containerd[1661]: time="2025-05-16T00:12:46.770614262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:12:46.771909 containerd[1661]: time="2025-05-16T00:12:46.771867574Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039476" May 16 00:12:46.773220 containerd[1661]: time="2025-05-16T00:12:46.773179966Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:12:46.775978 containerd[1661]: time="2025-05-16T00:12:46.775866398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:12:46.777094 containerd[1661]: time="2025-05-16T00:12:46.776675776Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 3.193402468s" May 16 00:12:46.777094 containerd[1661]: time="2025-05-16T00:12:46.776720269Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" May 16 00:12:46.779838 containerd[1661]: time="2025-05-16T00:12:46.779797845Z" level=info msg="CreateContainer within sandbox \"2a15958fe7a50e5aec0527cc6a1469fae06492c43d0baf468937849b4edaa2b9\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 16 00:12:46.793261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2647982018.mount: Deactivated successfully. 
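One readability quirk in the WorkloadEndpointPort dump for nfs-server-provisioner-0 above: the Port fields are printed in hex (0x801, 0x8023, 0x4e50, 0x36b, 0x6f, 0x296), while the earlier plugin.go 340 entry lists the same ports in decimal. A small Go check that the two agree; the hex values are copied from the log, and the struct here exists only for printing.

// Decode the hex Port values from the endpoint dump above back into the
// decimal ports listed in the earlier "found existing endpoint" entry.
package main

import "fmt"

func main() {
	ports := []struct {
		name string
		hex  uint16
	}{
		{"nfs", 0x801}, {"nlockmgr", 0x8023}, {"mountd", 0x4e50},
		{"rquotad", 0x36b}, {"rpcbind", 0x6f}, {"statd", 0x296},
	}
	for _, p := range ports {
		fmt.Printf("%-10s %d\n", p.name, p.hex)
	}
	// Prints: nfs 2049, nlockmgr 32803, mountd 20048, rquotad 875,
	// rpcbind 111, statd 662, matching the decimal list in the log.
}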
May 16 00:12:46.801711 containerd[1661]: time="2025-05-16T00:12:46.801663866Z" level=info msg="CreateContainer within sandbox \"2a15958fe7a50e5aec0527cc6a1469fae06492c43d0baf468937849b4edaa2b9\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"9c8dffb6dac0e7e1c9f3ab3fc4d8a66c9e981557e8ad6cbc03d9dc785cf3103c\"" May 16 00:12:46.802308 containerd[1661]: time="2025-05-16T00:12:46.802249033Z" level=info msg="StartContainer for \"9c8dffb6dac0e7e1c9f3ab3fc4d8a66c9e981557e8ad6cbc03d9dc785cf3103c\"" May 16 00:12:46.854136 containerd[1661]: time="2025-05-16T00:12:46.854087506Z" level=info msg="StartContainer for \"9c8dffb6dac0e7e1c9f3ab3fc4d8a66c9e981557e8ad6cbc03d9dc785cf3103c\" returns successfully" May 16 00:12:46.940626 kubelet[2242]: E0516 00:12:46.940551 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:47.216391 kubelet[2242]: I0516 00:12:47.216284 2242 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.020924048 podStartE2EDuration="5.216261728s" podCreationTimestamp="2025-05-16 00:12:42 +0000 UTC" firstStartedPulling="2025-05-16 00:12:43.582639568 +0000 UTC m=+40.963864573" lastFinishedPulling="2025-05-16 00:12:46.777977248 +0000 UTC m=+44.159202253" observedRunningTime="2025-05-16 00:12:47.215245702 +0000 UTC m=+44.596470747" watchObservedRunningTime="2025-05-16 00:12:47.216261728 +0000 UTC m=+44.597486774" May 16 00:12:47.941216 kubelet[2242]: E0516 00:12:47.941117 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:48.941739 kubelet[2242]: E0516 00:12:48.941668 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:49.942189 kubelet[2242]: E0516 00:12:49.942070 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:50.942706 kubelet[2242]: E0516 00:12:50.942583 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:51.943200 kubelet[2242]: E0516 00:12:51.943122 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:52.943645 kubelet[2242]: E0516 00:12:52.943573 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:53.944941 kubelet[2242]: E0516 00:12:53.944784 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:54.945651 kubelet[2242]: E0516 00:12:54.945556 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:55.945872 kubelet[2242]: E0516 00:12:55.945776 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:56.920871 kubelet[2242]: I0516 00:12:56.920767 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f84d52f0-f20a-4c1a-870c-db04d1dd1525\" (UniqueName: \"kubernetes.io/nfs/cf9e1e00-f4d5-42e2-9a47-f7f529480d73-pvc-f84d52f0-f20a-4c1a-870c-db04d1dd1525\") pod \"test-pod-1\" (UID: \"cf9e1e00-f4d5-42e2-9a47-f7f529480d73\") " 
pod="default/test-pod-1" May 16 00:12:56.920871 kubelet[2242]: I0516 00:12:56.920840 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx7qx\" (UniqueName: \"kubernetes.io/projected/cf9e1e00-f4d5-42e2-9a47-f7f529480d73-kube-api-access-nx7qx\") pod \"test-pod-1\" (UID: \"cf9e1e00-f4d5-42e2-9a47-f7f529480d73\") " pod="default/test-pod-1" May 16 00:12:56.946656 kubelet[2242]: E0516 00:12:56.946557 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:57.080424 kernel: FS-Cache: Loaded May 16 00:12:57.159436 kernel: RPC: Registered named UNIX socket transport module. May 16 00:12:57.159547 kernel: RPC: Registered udp transport module. May 16 00:12:57.161782 kernel: RPC: Registered tcp transport module. May 16 00:12:57.163537 kernel: RPC: Registered tcp-with-tls transport module. May 16 00:12:57.165494 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. May 16 00:12:57.443760 kernel: NFS: Registering the id_resolver key type May 16 00:12:57.443903 kernel: Key type id_resolver registered May 16 00:12:57.446370 kernel: Key type id_legacy registered May 16 00:12:57.472220 nfsidmap[3869]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 16 00:12:57.474497 nfsidmap[3870]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 16 00:12:57.724042 containerd[1661]: time="2025-05-16T00:12:57.723818971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:cf9e1e00-f4d5-42e2-9a47-f7f529480d73,Namespace:default,Attempt:0,}" May 16 00:12:57.915742 systemd-networkd[1265]: cali5ec59c6bf6e: Link UP May 16 00:12:57.917224 systemd-networkd[1265]: cali5ec59c6bf6e: Gained carrier May 16 00:12:57.943236 containerd[1661]: 2025-05-16 00:12:57.820 [INFO][3876] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-test--pod--1-eth0 default cf9e1e00-f4d5-42e2-9a47-f7f529480d73 2029 0 2025-05-16 00:12:45 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="4ca68b71f535fd6c37af89bdfb86f094702548ae19154feb2e519187cc431daf" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-" May 16 00:12:57.943236 containerd[1661]: 2025-05-16 00:12:57.820 [INFO][3876] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ca68b71f535fd6c37af89bdfb86f094702548ae19154feb2e519187cc431daf" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" May 16 00:12:57.943236 containerd[1661]: 2025-05-16 00:12:57.861 [INFO][3883] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ca68b71f535fd6c37af89bdfb86f094702548ae19154feb2e519187cc431daf" HandleID="k8s-pod-network.4ca68b71f535fd6c37af89bdfb86f094702548ae19154feb2e519187cc431daf" Workload="10.0.0.4-k8s-test--pod--1-eth0" May 16 00:12:57.943236 containerd[1661]: 2025-05-16 00:12:57.862 [INFO][3883] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4ca68b71f535fd6c37af89bdfb86f094702548ae19154feb2e519187cc431daf" HandleID="k8s-pod-network.4ca68b71f535fd6c37af89bdfb86f094702548ae19154feb2e519187cc431daf" 
Workload="10.0.0.4-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000233630), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"test-pod-1", "timestamp":"2025-05-16 00:12:57.861917528 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:12:57.943236 containerd[1661]: 2025-05-16 00:12:57.862 [INFO][3883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:12:57.943236 containerd[1661]: 2025-05-16 00:12:57.862 [INFO][3883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:12:57.943236 containerd[1661]: 2025-05-16 00:12:57.862 [INFO][3883] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' May 16 00:12:57.943236 containerd[1661]: 2025-05-16 00:12:57.876 [INFO][3883] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4ca68b71f535fd6c37af89bdfb86f094702548ae19154feb2e519187cc431daf" host="10.0.0.4" May 16 00:12:57.943236 containerd[1661]: 2025-05-16 00:12:57.883 [INFO][3883] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.4" May 16 00:12:57.943236 containerd[1661]: 2025-05-16 00:12:57.888 [INFO][3883] ipam/ipam.go 511: Trying affinity for 192.168.99.192/26 host="10.0.0.4" May 16 00:12:57.943236 containerd[1661]: 2025-05-16 00:12:57.890 [INFO][3883] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" May 16 00:12:57.943236 containerd[1661]: 2025-05-16 00:12:57.893 [INFO][3883] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" May 16 00:12:57.943236 containerd[1661]: 2025-05-16 00:12:57.893 [INFO][3883] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.4ca68b71f535fd6c37af89bdfb86f094702548ae19154feb2e519187cc431daf" host="10.0.0.4" May 16 00:12:57.943236 containerd[1661]: 2025-05-16 00:12:57.894 [INFO][3883] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4ca68b71f535fd6c37af89bdfb86f094702548ae19154feb2e519187cc431daf May 16 00:12:57.943236 containerd[1661]: 2025-05-16 00:12:57.902 [INFO][3883] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.4ca68b71f535fd6c37af89bdfb86f094702548ae19154feb2e519187cc431daf" host="10.0.0.4" May 16 00:12:57.943236 containerd[1661]: 2025-05-16 00:12:57.908 [INFO][3883] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.196/26] block=192.168.99.192/26 handle="k8s-pod-network.4ca68b71f535fd6c37af89bdfb86f094702548ae19154feb2e519187cc431daf" host="10.0.0.4" May 16 00:12:57.943236 containerd[1661]: 2025-05-16 00:12:57.908 [INFO][3883] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.196/26] handle="k8s-pod-network.4ca68b71f535fd6c37af89bdfb86f094702548ae19154feb2e519187cc431daf" host="10.0.0.4" May 16 00:12:57.943236 containerd[1661]: 2025-05-16 00:12:57.908 [INFO][3883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 00:12:57.943236 containerd[1661]: 2025-05-16 00:12:57.908 [INFO][3883] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.196/26] IPv6=[] ContainerID="4ca68b71f535fd6c37af89bdfb86f094702548ae19154feb2e519187cc431daf" HandleID="k8s-pod-network.4ca68b71f535fd6c37af89bdfb86f094702548ae19154feb2e519187cc431daf" Workload="10.0.0.4-k8s-test--pod--1-eth0" May 16 00:12:57.951407 containerd[1661]: 2025-05-16 00:12:57.911 [INFO][3876] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ca68b71f535fd6c37af89bdfb86f094702548ae19154feb2e519187cc431daf" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"cf9e1e00-f4d5-42e2-9a47-f7f529480d73", ResourceVersion:"2029", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 12, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:12:57.951407 containerd[1661]: 2025-05-16 00:12:57.911 [INFO][3876] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.196/32] ContainerID="4ca68b71f535fd6c37af89bdfb86f094702548ae19154feb2e519187cc431daf" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" May 16 00:12:57.951407 containerd[1661]: 2025-05-16 00:12:57.911 [INFO][3876] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="4ca68b71f535fd6c37af89bdfb86f094702548ae19154feb2e519187cc431daf" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" May 16 00:12:57.951407 containerd[1661]: 2025-05-16 00:12:57.915 [INFO][3876] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ca68b71f535fd6c37af89bdfb86f094702548ae19154feb2e519187cc431daf" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" May 16 00:12:57.951407 containerd[1661]: 2025-05-16 00:12:57.919 [INFO][3876] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4ca68b71f535fd6c37af89bdfb86f094702548ae19154feb2e519187cc431daf" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"cf9e1e00-f4d5-42e2-9a47-f7f529480d73", ResourceVersion:"2029", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 12, 45, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"4ca68b71f535fd6c37af89bdfb86f094702548ae19154feb2e519187cc431daf", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"52:88:95:0f:28:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:12:57.951407 containerd[1661]: 2025-05-16 00:12:57.932 [INFO][3876] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ca68b71f535fd6c37af89bdfb86f094702548ae19154feb2e519187cc431daf" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" May 16 00:12:57.951788 kubelet[2242]: E0516 00:12:57.946746 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:57.973795 containerd[1661]: time="2025-05-16T00:12:57.973557941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:12:57.973795 containerd[1661]: time="2025-05-16T00:12:57.973601482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:12:57.973795 containerd[1661]: time="2025-05-16T00:12:57.973624836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:12:57.975588 containerd[1661]: time="2025-05-16T00:12:57.973757986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:12:58.027022 containerd[1661]: time="2025-05-16T00:12:58.026967768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:cf9e1e00-f4d5-42e2-9a47-f7f529480d73,Namespace:default,Attempt:0,} returns sandbox id \"4ca68b71f535fd6c37af89bdfb86f094702548ae19154feb2e519187cc431daf\"" May 16 00:12:58.028936 containerd[1661]: time="2025-05-16T00:12:58.028901687Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 16 00:12:58.506091 containerd[1661]: time="2025-05-16T00:12:58.506011263Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:12:58.507306 containerd[1661]: time="2025-05-16T00:12:58.507262431Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" May 16 00:12:58.517372 containerd[1661]: time="2025-05-16T00:12:58.517281050Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"73306098\" in 488.200096ms" May 16 00:12:58.517372 containerd[1661]: time="2025-05-16T00:12:58.517327086Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 16 00:12:58.528851 containerd[1661]: time="2025-05-16T00:12:58.528790867Z" level=info msg="CreateContainer within sandbox \"4ca68b71f535fd6c37af89bdfb86f094702548ae19154feb2e519187cc431daf\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 16 00:12:58.550128 containerd[1661]: time="2025-05-16T00:12:58.550057923Z" level=info msg="CreateContainer within sandbox \"4ca68b71f535fd6c37af89bdfb86f094702548ae19154feb2e519187cc431daf\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"39b35ef5612a31835f5e4d2f8e762366763d23c2d2f0ff6eee102d9d98f4d306\"" May 16 00:12:58.550939 containerd[1661]: time="2025-05-16T00:12:58.550897237Z" level=info msg="StartContainer for \"39b35ef5612a31835f5e4d2f8e762366763d23c2d2f0ff6eee102d9d98f4d306\"" May 16 00:12:58.619716 containerd[1661]: time="2025-05-16T00:12:58.619661436Z" level=info msg="StartContainer for \"39b35ef5612a31835f5e4d2f8e762366763d23c2d2f0ff6eee102d9d98f4d306\" returns successfully" May 16 00:12:58.947493 kubelet[2242]: E0516 00:12:58.947391 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:12:59.153284 systemd-networkd[1265]: cali5ec59c6bf6e: Gained IPv6LL May 16 00:12:59.240704 kubelet[2242]: I0516 00:12:59.240478 2242 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=13.741366041 podStartE2EDuration="14.240431646s" podCreationTimestamp="2025-05-16 00:12:45 +0000 UTC" firstStartedPulling="2025-05-16 00:12:58.028307231 +0000 UTC m=+55.409532236" lastFinishedPulling="2025-05-16 00:12:58.527372836 +0000 UTC m=+55.908597841" observedRunningTime="2025-05-16 00:12:59.239993854 +0000 UTC m=+56.621218919" watchObservedRunningTime="2025-05-16 00:12:59.240431646 +0000 UTC m=+56.621656681" May 16 00:12:59.948699 kubelet[2242]: E0516 00:12:59.948597 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" May 16 00:13:00.949390 kubelet[2242]: E0516 00:13:00.949260 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:13:01.949634 kubelet[2242]: E0516 00:13:01.949522 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:13:02.905646 kubelet[2242]: E0516 00:13:02.905582 2242 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:13:02.926513 containerd[1661]: time="2025-05-16T00:13:02.926163889Z" level=info msg="StopPodSandbox for \"5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5\"" May 16 00:13:02.926513 containerd[1661]: time="2025-05-16T00:13:02.926284435Z" level=info msg="TearDown network for sandbox \"5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5\" successfully" May 16 00:13:02.926513 containerd[1661]: time="2025-05-16T00:13:02.926295516Z" level=info msg="StopPodSandbox for \"5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5\" returns successfully" May 16 00:13:02.933358 containerd[1661]: time="2025-05-16T00:13:02.933276133Z" level=info msg="RemovePodSandbox for \"5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5\"" May 16 00:13:02.946152 containerd[1661]: time="2025-05-16T00:13:02.946088344Z" level=info msg="Forcibly stopping sandbox \"5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5\"" May 16 00:13:02.949899 kubelet[2242]: E0516 00:13:02.949854 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:13:02.962277 containerd[1661]: time="2025-05-16T00:13:02.946260157Z" level=info msg="TearDown network for sandbox \"5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5\" successfully" May 16 00:13:03.004821 containerd[1661]: time="2025-05-16T00:13:03.004748523Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 16 00:13:03.004952 containerd[1661]: time="2025-05-16T00:13:03.004865021Z" level=info msg="RemovePodSandbox \"5b09b59ff412c439e6438c32ef0f31a4ad6e317a43ffbfdce0ae61292185f1d5\" returns successfully"
May 16 00:13:03.005427 containerd[1661]: time="2025-05-16T00:13:03.005393684Z" level=info msg="StopPodSandbox for \"dacc81ec9eb24b8dcc7312b5b3eb29c6fc5cb6a2d861e62518b6ebdb9ffd0638\""
May 16 00:13:03.005543 containerd[1661]: time="2025-05-16T00:13:03.005518057Z" level=info msg="TearDown network for sandbox \"dacc81ec9eb24b8dcc7312b5b3eb29c6fc5cb6a2d861e62518b6ebdb9ffd0638\" successfully"
May 16 00:13:03.005543 containerd[1661]: time="2025-05-16T00:13:03.005536943Z" level=info msg="StopPodSandbox for \"dacc81ec9eb24b8dcc7312b5b3eb29c6fc5cb6a2d861e62518b6ebdb9ffd0638\" returns successfully"
May 16 00:13:03.005910 containerd[1661]: time="2025-05-16T00:13:03.005842035Z" level=info msg="RemovePodSandbox for \"dacc81ec9eb24b8dcc7312b5b3eb29c6fc5cb6a2d861e62518b6ebdb9ffd0638\""
May 16 00:13:03.005910 containerd[1661]: time="2025-05-16T00:13:03.005868685Z" level=info msg="Forcibly stopping sandbox \"dacc81ec9eb24b8dcc7312b5b3eb29c6fc5cb6a2d861e62518b6ebdb9ffd0638\""
May 16 00:13:03.005987 containerd[1661]: time="2025-05-16T00:13:03.005935009Z" level=info msg="TearDown network for sandbox \"dacc81ec9eb24b8dcc7312b5b3eb29c6fc5cb6a2d861e62518b6ebdb9ffd0638\" successfully"
May 16 00:13:03.010486 containerd[1661]: time="2025-05-16T00:13:03.010420977Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dacc81ec9eb24b8dcc7312b5b3eb29c6fc5cb6a2d861e62518b6ebdb9ffd0638\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 16 00:13:03.010598 containerd[1661]: time="2025-05-16T00:13:03.010493813Z" level=info msg="RemovePodSandbox \"dacc81ec9eb24b8dcc7312b5b3eb29c6fc5cb6a2d861e62518b6ebdb9ffd0638\" returns successfully"
May 16 00:13:03.010894 containerd[1661]: time="2025-05-16T00:13:03.010765443Z" level=info msg="StopPodSandbox for \"eb0355dc285bae0314eb88aa1b25215c5781290c210a2363536d5ecba1e60620\""
May 16 00:13:03.010894 containerd[1661]: time="2025-05-16T00:13:03.010833671Z" level=info msg="TearDown network for sandbox \"eb0355dc285bae0314eb88aa1b25215c5781290c210a2363536d5ecba1e60620\" successfully"
May 16 00:13:03.010894 containerd[1661]: time="2025-05-16T00:13:03.010842518Z" level=info msg="StopPodSandbox for \"eb0355dc285bae0314eb88aa1b25215c5781290c210a2363536d5ecba1e60620\" returns successfully"
May 16 00:13:03.011201 containerd[1661]: time="2025-05-16T00:13:03.011165423Z" level=info msg="RemovePodSandbox for \"eb0355dc285bae0314eb88aa1b25215c5781290c210a2363536d5ecba1e60620\""
May 16 00:13:03.011244 containerd[1661]: time="2025-05-16T00:13:03.011206601Z" level=info msg="Forcibly stopping sandbox \"eb0355dc285bae0314eb88aa1b25215c5781290c210a2363536d5ecba1e60620\""
May 16 00:13:03.011428 containerd[1661]: time="2025-05-16T00:13:03.011330743Z" level=info msg="TearDown network for sandbox \"eb0355dc285bae0314eb88aa1b25215c5781290c210a2363536d5ecba1e60620\" successfully"
May 16 00:13:03.015634 containerd[1661]: time="2025-05-16T00:13:03.015556112Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eb0355dc285bae0314eb88aa1b25215c5781290c210a2363536d5ecba1e60620\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 16 00:13:03.015634 containerd[1661]: time="2025-05-16T00:13:03.015630532Z" level=info msg="RemovePodSandbox \"eb0355dc285bae0314eb88aa1b25215c5781290c210a2363536d5ecba1e60620\" returns successfully"
May 16 00:13:03.016140 containerd[1661]: time="2025-05-16T00:13:03.016103930Z" level=info msg="StopPodSandbox for \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\""
May 16 00:13:03.016226 containerd[1661]: time="2025-05-16T00:13:03.016200581Z" level=info msg="TearDown network for sandbox \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\" successfully"
May 16 00:13:03.016226 containerd[1661]: time="2025-05-16T00:13:03.016213015Z" level=info msg="StopPodSandbox for \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\" returns successfully"
May 16 00:13:03.016540 containerd[1661]: time="2025-05-16T00:13:03.016504912Z" level=info msg="RemovePodSandbox for \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\""
May 16 00:13:03.016540 containerd[1661]: time="2025-05-16T00:13:03.016536181Z" level=info msg="Forcibly stopping sandbox \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\""
May 16 00:13:03.016664 containerd[1661]: time="2025-05-16T00:13:03.016610229Z" level=info msg="TearDown network for sandbox \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\" successfully"
May 16 00:13:03.020787 containerd[1661]: time="2025-05-16T00:13:03.020474290Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 16 00:13:03.020787 containerd[1661]: time="2025-05-16T00:13:03.020550733Z" level=info msg="RemovePodSandbox \"f0f5bb3369f8cdc24a03aea4ef652bee6e28b3246265f59f4d994715b3e68eab\" returns successfully"
May 16 00:13:03.021248 containerd[1661]: time="2025-05-16T00:13:03.021210762Z" level=info msg="StopPodSandbox for \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\""
May 16 00:13:03.021336 containerd[1661]: time="2025-05-16T00:13:03.021310920Z" level=info msg="TearDown network for sandbox \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\" successfully"
May 16 00:13:03.021336 containerd[1661]: time="2025-05-16T00:13:03.021329415Z" level=info msg="StopPodSandbox for \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\" returns successfully"
May 16 00:13:03.021697 containerd[1661]: time="2025-05-16T00:13:03.021644306Z" level=info msg="RemovePodSandbox for \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\""
May 16 00:13:03.021697 containerd[1661]: time="2025-05-16T00:13:03.021669663Z" level=info msg="Forcibly stopping sandbox \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\""
May 16 00:13:03.021826 containerd[1661]: time="2025-05-16T00:13:03.021731699Z" level=info msg="TearDown network for sandbox \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\" successfully"
May 16 00:13:03.025370 containerd[1661]: time="2025-05-16T00:13:03.025302470Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 16 00:13:03.025517 containerd[1661]: time="2025-05-16T00:13:03.025395955Z" level=info msg="RemovePodSandbox \"b1a130b5710bd7393b6b24f1e971d54f70de7892bf3e34c2089efa404512db4e\" returns successfully"
May 16 00:13:03.025929 containerd[1661]: time="2025-05-16T00:13:03.025904059Z" level=info msg="StopPodSandbox for \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\""
May 16 00:13:03.026033 containerd[1661]: time="2025-05-16T00:13:03.026006451Z" level=info msg="TearDown network for sandbox \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\" successfully"
May 16 00:13:03.026033 containerd[1661]: time="2025-05-16T00:13:03.026025487Z" level=info msg="StopPodSandbox for \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\" returns successfully"
May 16 00:13:03.026307 containerd[1661]: time="2025-05-16T00:13:03.026276127Z" level=info msg="RemovePodSandbox for \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\""
May 16 00:13:03.026395 containerd[1661]: time="2025-05-16T00:13:03.026330860Z" level=info msg="Forcibly stopping sandbox \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\""
May 16 00:13:03.026432 containerd[1661]: time="2025-05-16T00:13:03.026395541Z" level=info msg="TearDown network for sandbox \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\" successfully"
May 16 00:13:03.029237 containerd[1661]: time="2025-05-16T00:13:03.029202497Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 16 00:13:03.029378 containerd[1661]: time="2025-05-16T00:13:03.029245328Z" level=info msg="RemovePodSandbox \"5b7cd466fb042d12db2ef00eda23d0744c9fe82ec448c452f49df4f7aa310e11\" returns successfully"
May 16 00:13:03.029722 containerd[1661]: time="2025-05-16T00:13:03.029563895Z" level=info msg="StopPodSandbox for \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\""
May 16 00:13:03.029722 containerd[1661]: time="2025-05-16T00:13:03.029653635Z" level=info msg="TearDown network for sandbox \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\" successfully"
May 16 00:13:03.029722 containerd[1661]: time="2025-05-16T00:13:03.029666388Z" level=info msg="StopPodSandbox for \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\" returns successfully"
May 16 00:13:03.029907 containerd[1661]: time="2025-05-16T00:13:03.029866824Z" level=info msg="RemovePodSandbox for \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\""
May 16 00:13:03.029907 containerd[1661]: time="2025-05-16T00:13:03.029886481Z" level=info msg="Forcibly stopping sandbox \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\""
May 16 00:13:03.029999 containerd[1661]: time="2025-05-16T00:13:03.029961152Z" level=info msg="TearDown network for sandbox \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\" successfully"
May 16 00:13:03.033017 containerd[1661]: time="2025-05-16T00:13:03.032978253Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 16 00:13:03.033103 containerd[1661]: time="2025-05-16T00:13:03.033028617Z" level=info msg="RemovePodSandbox \"dd6797ef6c243022780df5d88f5e616e1cb490bc1ae01808755f2738b5451fd7\" returns successfully"
May 16 00:13:03.033476 containerd[1661]: time="2025-05-16T00:13:03.033405133Z" level=info msg="StopPodSandbox for \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\""
May 16 00:13:03.033910 containerd[1661]: time="2025-05-16T00:13:03.033851982Z" level=info msg="TearDown network for sandbox \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\" successfully"
May 16 00:13:03.033989 containerd[1661]: time="2025-05-16T00:13:03.033972368Z" level=info msg="StopPodSandbox for \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\" returns successfully"
May 16 00:13:03.035029 containerd[1661]: time="2025-05-16T00:13:03.034735158Z" level=info msg="RemovePodSandbox for \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\""
May 16 00:13:03.035029 containerd[1661]: time="2025-05-16T00:13:03.034764405Z" level=info msg="Forcibly stopping sandbox \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\""
May 16 00:13:03.035029 containerd[1661]: time="2025-05-16T00:13:03.034847971Z" level=info msg="TearDown network for sandbox \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\" successfully"
May 16 00:13:03.050455 containerd[1661]: time="2025-05-16T00:13:03.050382410Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 16 00:13:03.050455 containerd[1661]: time="2025-05-16T00:13:03.050450157Z" level=info msg="RemovePodSandbox \"66f0bd213ba50e55fb9dad40df63d742e3f015785e14512f61338c0c28fd6490\" returns successfully"
May 16 00:13:03.051098 containerd[1661]: time="2025-05-16T00:13:03.051065391Z" level=info msg="StopPodSandbox for \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\""
May 16 00:13:03.051186 containerd[1661]: time="2025-05-16T00:13:03.051162703Z" level=info msg="TearDown network for sandbox \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\" successfully"
May 16 00:13:03.051186 containerd[1661]: time="2025-05-16T00:13:03.051175698Z" level=info msg="StopPodSandbox for \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\" returns successfully"
May 16 00:13:03.052744 containerd[1661]: time="2025-05-16T00:13:03.051500948Z" level=info msg="RemovePodSandbox for \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\""
May 16 00:13:03.052744 containerd[1661]: time="2025-05-16T00:13:03.051544380Z" level=info msg="Forcibly stopping sandbox \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\""
May 16 00:13:03.052744 containerd[1661]: time="2025-05-16T00:13:03.051628377Z" level=info msg="TearDown network for sandbox \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\" successfully"
May 16 00:13:03.055113 containerd[1661]: time="2025-05-16T00:13:03.055059044Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 16 00:13:03.055201 containerd[1661]: time="2025-05-16T00:13:03.055134596Z" level=info msg="RemovePodSandbox \"797e9658c1b81301227ac4bb4414c983f4d92ae58011d47769583f9644f8f98e\" returns successfully"
May 16 00:13:03.055634 containerd[1661]: time="2025-05-16T00:13:03.055584981Z" level=info msg="StopPodSandbox for \"f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce\""
May 16 00:13:03.055889 containerd[1661]: time="2025-05-16T00:13:03.055851211Z" level=info msg="TearDown network for sandbox \"f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce\" successfully"
May 16 00:13:03.055889 containerd[1661]: time="2025-05-16T00:13:03.055874394Z" level=info msg="StopPodSandbox for \"f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce\" returns successfully"
May 16 00:13:03.056978 containerd[1661]: time="2025-05-16T00:13:03.056215214Z" level=info msg="RemovePodSandbox for \"f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce\""
May 16 00:13:03.056978 containerd[1661]: time="2025-05-16T00:13:03.056257262Z" level=info msg="Forcibly stopping sandbox \"f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce\""
May 16 00:13:03.056978 containerd[1661]: time="2025-05-16T00:13:03.056386284Z" level=info msg="TearDown network for sandbox \"f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce\" successfully"
May 16 00:13:03.059758 containerd[1661]: time="2025-05-16T00:13:03.059717736Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 16 00:13:03.059758 containerd[1661]: time="2025-05-16T00:13:03.059769323Z" level=info msg="RemovePodSandbox \"f1a59fdded99560c34561bd58e101bc8b77368e579c8aefebffb9f7e2b6958ce\" returns successfully"
May 16 00:13:03.060159 containerd[1661]: time="2025-05-16T00:13:03.060130570Z" level=info msg="StopPodSandbox for \"c810c0ae20383ed7db4036f4ebd580c5cd0b4c30a790307440643c198b514651\""
May 16 00:13:03.060276 containerd[1661]: time="2025-05-16T00:13:03.060236509Z" level=info msg="TearDown network for sandbox \"c810c0ae20383ed7db4036f4ebd580c5cd0b4c30a790307440643c198b514651\" successfully"
May 16 00:13:03.060276 containerd[1661]: time="2025-05-16T00:13:03.060255956Z" level=info msg="StopPodSandbox for \"c810c0ae20383ed7db4036f4ebd580c5cd0b4c30a790307440643c198b514651\" returns successfully"
May 16 00:13:03.060670 containerd[1661]: time="2025-05-16T00:13:03.060610291Z" level=info msg="RemovePodSandbox for \"c810c0ae20383ed7db4036f4ebd580c5cd0b4c30a790307440643c198b514651\""
May 16 00:13:03.060670 containerd[1661]: time="2025-05-16T00:13:03.060638855Z" level=info msg="Forcibly stopping sandbox \"c810c0ae20383ed7db4036f4ebd580c5cd0b4c30a790307440643c198b514651\""
May 16 00:13:03.060768 containerd[1661]: time="2025-05-16T00:13:03.060707192Z" level=info msg="TearDown network for sandbox \"c810c0ae20383ed7db4036f4ebd580c5cd0b4c30a790307440643c198b514651\" successfully"
May 16 00:13:03.064871 containerd[1661]: time="2025-05-16T00:13:03.064827714Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c810c0ae20383ed7db4036f4ebd580c5cd0b4c30a790307440643c198b514651\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 16 00:13:03.064950 containerd[1661]: time="2025-05-16T00:13:03.064881335Z" level=info msg="RemovePodSandbox \"c810c0ae20383ed7db4036f4ebd580c5cd0b4c30a790307440643c198b514651\" returns successfully"
May 16 00:13:03.065339 containerd[1661]: time="2025-05-16T00:13:03.065220832Z" level=info msg="StopPodSandbox for \"fa2c55aad47365aa44d3b117b42884999bca477ec97798435ea2379045b3a606\""
May 16 00:13:03.065339 containerd[1661]: time="2025-05-16T00:13:03.065316601Z" level=info msg="TearDown network for sandbox \"fa2c55aad47365aa44d3b117b42884999bca477ec97798435ea2379045b3a606\" successfully"
May 16 00:13:03.065339 containerd[1661]: time="2025-05-16T00:13:03.065329044Z" level=info msg="StopPodSandbox for \"fa2c55aad47365aa44d3b117b42884999bca477ec97798435ea2379045b3a606\" returns successfully"
May 16 00:13:03.065731 containerd[1661]: time="2025-05-16T00:13:03.065689652Z" level=info msg="RemovePodSandbox for \"fa2c55aad47365aa44d3b117b42884999bca477ec97798435ea2379045b3a606\""
May 16 00:13:03.065731 containerd[1661]: time="2025-05-16T00:13:03.065720249Z" level=info msg="Forcibly stopping sandbox \"fa2c55aad47365aa44d3b117b42884999bca477ec97798435ea2379045b3a606\""
May 16 00:13:03.065841 containerd[1661]: time="2025-05-16T00:13:03.065787124Z" level=info msg="TearDown network for sandbox \"fa2c55aad47365aa44d3b117b42884999bca477ec97798435ea2379045b3a606\" successfully"
May 16 00:13:03.078643 containerd[1661]: time="2025-05-16T00:13:03.078394490Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa2c55aad47365aa44d3b117b42884999bca477ec97798435ea2379045b3a606\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 16 00:13:03.078643 containerd[1661]: time="2025-05-16T00:13:03.078497713Z" level=info msg="RemovePodSandbox \"fa2c55aad47365aa44d3b117b42884999bca477ec97798435ea2379045b3a606\" returns successfully"
May 16 00:13:03.950358 kubelet[2242]: E0516 00:13:03.950252 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:04.951290 kubelet[2242]: E0516 00:13:04.951237 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:05.951706 kubelet[2242]: E0516 00:13:05.951615 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:06.952598 kubelet[2242]: E0516 00:13:06.952529 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:07.953341 kubelet[2242]: E0516 00:13:07.953215 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:08.954267 kubelet[2242]: E0516 00:13:08.954183 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:09.955153 kubelet[2242]: E0516 00:13:09.954506 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:10.955121 kubelet[2242]: E0516 00:13:10.955053 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:11.955668 kubelet[2242]: E0516 00:13:11.955579 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:12.956433 kubelet[2242]: E0516 00:13:12.956370 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:13.957412 kubelet[2242]: E0516 00:13:13.957317 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:14.958121 kubelet[2242]: E0516 00:13:14.958054 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:15.959069 kubelet[2242]: E0516 00:13:15.959007 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:16.959932 kubelet[2242]: E0516 00:13:16.959842 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:17.734096 kubelet[2242]: E0516 00:13:17.734031 2242 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:57718->10.0.0.2:2379: read: connection timed out"
May 16 00:13:17.960335 kubelet[2242]: E0516 00:13:17.960243 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:18.961339 kubelet[2242]: E0516 00:13:18.961268 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:19.961807 kubelet[2242]: E0516 00:13:19.961717 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:20.962522 kubelet[2242]: E0516 00:13:20.962442 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:21.963326 kubelet[2242]: E0516 00:13:21.963234 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:22.907511 kubelet[2242]: E0516 00:13:22.907456 2242 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:22.963790 kubelet[2242]: E0516 00:13:22.963728 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:23.965070 kubelet[2242]: E0516 00:13:23.964968 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:24.965630 kubelet[2242]: E0516 00:13:24.965540 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:25.965908 kubelet[2242]: E0516 00:13:25.965826 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:26.966376 kubelet[2242]: E0516 00:13:26.966291 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:27.735331 kubelet[2242]: E0516 00:13:27.735216 2242 controller.go:195] "Failed to update lease" err="Put \"https://37.27.195.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.0.4?timeout=10s\": context deadline exceeded"
May 16 00:13:27.967282 kubelet[2242]: E0516 00:13:27.967182 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:28.967581 kubelet[2242]: E0516 00:13:28.967512 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:29.967913 kubelet[2242]: E0516 00:13:29.967843 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:30.969041 kubelet[2242]: E0516 00:13:30.968984 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:31.969519 kubelet[2242]: E0516 00:13:31.969459 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:32.970476 kubelet[2242]: E0516 00:13:32.970400 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:33.284658 update_engine[1650]: I20250516 00:13:33.284574 1650 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 16 00:13:33.284658 update_engine[1650]: I20250516 00:13:33.284642 1650 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 16 00:13:33.285170 update_engine[1650]: I20250516 00:13:33.284911 1650 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 16 00:13:33.285970 update_engine[1650]: I20250516 00:13:33.285938 1650 omaha_request_params.cc:62] Current group set to stable
May 16 00:13:33.288485 update_engine[1650]: I20250516 00:13:33.288300 1650 update_attempter.cc:499] Already updated boot flags. Skipping.
May 16 00:13:33.288485 update_engine[1650]: I20250516 00:13:33.288372 1650 update_attempter.cc:643] Scheduling an action processor start.
May 16 00:13:33.288485 update_engine[1650]: I20250516 00:13:33.288398 1650 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 16 00:13:33.288485 update_engine[1650]: I20250516 00:13:33.288456 1650 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 16 00:13:33.288675 update_engine[1650]: I20250516 00:13:33.288558 1650 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 16 00:13:33.288675 update_engine[1650]: I20250516 00:13:33.288568 1650 omaha_request_action.cc:272] Request:
May 16 00:13:33.288675 update_engine[1650]:
May 16 00:13:33.288675 update_engine[1650]:
May 16 00:13:33.288675 update_engine[1650]:
May 16 00:13:33.288675 update_engine[1650]:
May 16 00:13:33.288675 update_engine[1650]:
May 16 00:13:33.288675 update_engine[1650]:
May 16 00:13:33.288675 update_engine[1650]:
May 16 00:13:33.288675 update_engine[1650]:
May 16 00:13:33.288675 update_engine[1650]: I20250516 00:13:33.288574 1650 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 16 00:13:33.289093 locksmithd[1690]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 16 00:13:33.293167 update_engine[1650]: I20250516 00:13:33.293112 1650 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 16 00:13:33.293556 update_engine[1650]: I20250516 00:13:33.293522 1650 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 16 00:13:33.295748 update_engine[1650]: E20250516 00:13:33.295656 1650 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 16 00:13:33.295938 update_engine[1650]: I20250516 00:13:33.295789 1650 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 16 00:13:33.971721 kubelet[2242]: E0516 00:13:33.971641 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:34.971970 kubelet[2242]: E0516 00:13:34.971877 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:35.972403 kubelet[2242]: E0516 00:13:35.972331 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:36.973326 kubelet[2242]: E0516 00:13:36.973197 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:37.736577 kubelet[2242]: E0516 00:13:37.736482 2242 controller.go:195] "Failed to update lease" err="Put \"https://37.27.195.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.0.4?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
May 16 00:13:37.973625 kubelet[2242]: E0516 00:13:37.973546 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:38.974264 kubelet[2242]: E0516 00:13:38.974194 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:39.975282 kubelet[2242]: E0516 00:13:39.975193 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:13:40.975993 kubelet[2242]: E0516 00:13:40.975915 2242 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"