Sep 9 00:20:06.936061 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:41:17 -00 2025 Sep 9 00:20:06.936084 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=99a67175ee6aabbc03a22dabcade16d60ad192b31c4118a259bf1f24bbfa2d29 Sep 9 00:20:06.936096 kernel: BIOS-provided physical RAM map: Sep 9 00:20:06.936102 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 9 00:20:06.936108 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 9 00:20:06.936115 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 9 00:20:06.936122 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 9 00:20:06.936129 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 9 00:20:06.936135 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Sep 9 00:20:06.936141 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Sep 9 00:20:06.936151 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Sep 9 00:20:06.936157 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Sep 9 00:20:06.936166 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Sep 9 00:20:06.936173 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Sep 9 00:20:06.936184 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Sep 9 00:20:06.936191 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 9 00:20:06.936201 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Sep 9 00:20:06.936208 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Sep 9 00:20:06.936215 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 9 00:20:06.936221 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 9 00:20:06.936228 kernel: NX (Execute Disable) protection: active Sep 9 00:20:06.936235 kernel: APIC: Static calls initialized Sep 9 00:20:06.936242 kernel: efi: EFI v2.7 by EDK II Sep 9 00:20:06.936249 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Sep 9 00:20:06.936267 kernel: SMBIOS 2.8 present. 
Sep 9 00:20:06.936276 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Sep 9 00:20:06.936283 kernel: Hypervisor detected: KVM Sep 9 00:20:06.936295 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 9 00:20:06.936310 kernel: kvm-clock: using sched offset of 6303094574 cycles Sep 9 00:20:06.936326 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 9 00:20:06.936334 kernel: tsc: Detected 2794.748 MHz processor Sep 9 00:20:06.937504 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 9 00:20:06.937512 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 9 00:20:06.937519 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Sep 9 00:20:06.937552 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 9 00:20:06.937560 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 9 00:20:06.937573 kernel: Using GB pages for direct mapping Sep 9 00:20:06.937580 kernel: Secure boot disabled Sep 9 00:20:06.937587 kernel: ACPI: Early table checksum verification disabled Sep 9 00:20:06.937594 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 9 00:20:06.937606 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 9 00:20:06.937613 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:20:06.937621 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:20:06.937631 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 9 00:20:06.937638 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:20:06.937649 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:20:06.937656 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:20:06.937664 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:20:06.937671 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 9 00:20:06.937679 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 9 00:20:06.937689 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 9 00:20:06.937696 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 9 00:20:06.937703 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 9 00:20:06.937711 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 9 00:20:06.937718 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 9 00:20:06.937725 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 9 00:20:06.937733 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 9 00:20:06.937740 kernel: No NUMA configuration found Sep 9 00:20:06.937749 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Sep 9 00:20:06.937759 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Sep 9 00:20:06.937767 kernel: Zone ranges: Sep 9 00:20:06.937774 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 9 00:20:06.937782 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Sep 9 00:20:06.937789 kernel: Normal empty Sep 9 00:20:06.937797 kernel: Movable zone start for each node Sep 9 00:20:06.937804 kernel: Early memory node ranges Sep 9 00:20:06.937811 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Sep 9 00:20:06.937819 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 9 00:20:06.937828 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 9 00:20:06.937836 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Sep 9 00:20:06.937843 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Sep 9 00:20:06.937850 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Sep 9 00:20:06.937869 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Sep 9 00:20:06.937877 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 9 00:20:06.937884 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 9 00:20:06.937891 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 9 00:20:06.937899 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 9 00:20:06.937906 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Sep 9 00:20:06.937916 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Sep 9 00:20:06.937924 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Sep 9 00:20:06.937931 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 9 00:20:06.937938 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 9 00:20:06.937946 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 9 00:20:06.937953 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 9 00:20:06.937960 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 9 00:20:06.937968 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 9 00:20:06.937975 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 9 00:20:06.937985 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 9 00:20:06.937992 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 9 00:20:06.938000 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 9 00:20:06.938007 kernel: TSC deadline timer available Sep 9 00:20:06.938014 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 9 00:20:06.938022 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 9 00:20:06.938029 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 9 00:20:06.938036 kernel: kvm-guest: setup PV sched yield Sep 9 00:20:06.938044 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Sep 9 00:20:06.938054 kernel: Booting paravirtualized kernel on KVM Sep 9 00:20:06.938061 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 9 00:20:06.938069 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 9 00:20:06.938076 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288 Sep 9 00:20:06.938083 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152 Sep 9 00:20:06.938091 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 9 00:20:06.938098 kernel: kvm-guest: PV spinlocks enabled Sep 9 00:20:06.938105 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 9 00:20:06.938116 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=99a67175ee6aabbc03a22dabcade16d60ad192b31c4118a259bf1f24bbfa2d29 Sep 9 00:20:06.938131 kernel: Unknown kernel command line 
parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 9 00:20:06.938139 kernel: random: crng init done Sep 9 00:20:06.938147 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 9 00:20:06.938154 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 9 00:20:06.938162 kernel: Fallback order for Node 0: 0 Sep 9 00:20:06.938169 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Sep 9 00:20:06.938176 kernel: Policy zone: DMA32 Sep 9 00:20:06.938184 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 9 00:20:06.938194 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42880K init, 2316K bss, 166140K reserved, 0K cma-reserved) Sep 9 00:20:06.938202 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 9 00:20:06.938209 kernel: ftrace: allocating 37969 entries in 149 pages Sep 9 00:20:06.938216 kernel: ftrace: allocated 149 pages with 4 groups Sep 9 00:20:06.938224 kernel: Dynamic Preempt: voluntary Sep 9 00:20:06.938239 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 9 00:20:06.938250 kernel: rcu: RCU event tracing is enabled. Sep 9 00:20:06.938258 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 9 00:20:06.938266 kernel: Trampoline variant of Tasks RCU enabled. Sep 9 00:20:06.938274 kernel: Rude variant of Tasks RCU enabled. Sep 9 00:20:06.938282 kernel: Tracing variant of Tasks RCU enabled. Sep 9 00:20:06.938289 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 9 00:20:06.938300 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 9 00:20:06.938308 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 9 00:20:06.938318 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 9 00:20:06.938326 kernel: Console: colour dummy device 80x25 Sep 9 00:20:06.938336 kernel: printk: console [ttyS0] enabled Sep 9 00:20:06.938344 kernel: ACPI: Core revision 20230628 Sep 9 00:20:06.938352 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 9 00:20:06.938359 kernel: APIC: Switch to symmetric I/O mode setup Sep 9 00:20:06.938367 kernel: x2apic enabled Sep 9 00:20:06.938375 kernel: APIC: Switched APIC routing to: physical x2apic Sep 9 00:20:06.938382 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 9 00:20:06.938390 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 9 00:20:06.938398 kernel: kvm-guest: setup PV IPIs Sep 9 00:20:06.938406 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 9 00:20:06.938416 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 9 00:20:06.938424 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Sep 9 00:20:06.938431 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 9 00:20:06.938439 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 9 00:20:06.938447 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 9 00:20:06.938454 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 9 00:20:06.938462 kernel: Spectre V2 : Mitigation: Retpolines Sep 9 00:20:06.938470 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 9 00:20:06.938480 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 9 00:20:06.938488 kernel: active return thunk: retbleed_return_thunk Sep 9 00:20:06.938495 kernel: RETBleed: Mitigation: untrained return thunk Sep 9 00:20:06.938503 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 9 00:20:06.938511 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 9 00:20:06.938521 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 9 00:20:06.938529 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 9 00:20:06.938548 kernel: active return thunk: srso_return_thunk Sep 9 00:20:06.938556 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 9 00:20:06.938567 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 9 00:20:06.938575 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 9 00:20:06.938583 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 9 00:20:06.938590 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 9 00:20:06.938598 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 9 00:20:06.938606 kernel: Freeing SMP alternatives memory: 32K Sep 9 00:20:06.938613 kernel: pid_max: default: 32768 minimum: 301 Sep 9 00:20:06.938621 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 9 00:20:06.938629 kernel: landlock: Up and running. Sep 9 00:20:06.938639 kernel: SELinux: Initializing. Sep 9 00:20:06.938646 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 00:20:06.938654 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 00:20:06.938662 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 9 00:20:06.938670 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 00:20:06.938678 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 00:20:06.938685 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 00:20:06.938693 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 9 00:20:06.938704 kernel: ... version: 0 Sep 9 00:20:06.938711 kernel: ... bit width: 48 Sep 9 00:20:06.938719 kernel: ... generic registers: 6 Sep 9 00:20:06.938726 kernel: ... value mask: 0000ffffffffffff Sep 9 00:20:06.938734 kernel: ... max period: 00007fffffffffff Sep 9 00:20:06.938742 kernel: ... fixed-purpose events: 0 Sep 9 00:20:06.938749 kernel: ... 
event mask: 000000000000003f Sep 9 00:20:06.938757 kernel: signal: max sigframe size: 1776 Sep 9 00:20:06.938764 kernel: rcu: Hierarchical SRCU implementation. Sep 9 00:20:06.938772 kernel: rcu: Max phase no-delay instances is 400. Sep 9 00:20:06.938783 kernel: smp: Bringing up secondary CPUs ... Sep 9 00:20:06.938790 kernel: smpboot: x86: Booting SMP configuration: Sep 9 00:20:06.938798 kernel: .... node #0, CPUs: #1 #2 #3 Sep 9 00:20:06.938806 kernel: smp: Brought up 1 node, 4 CPUs Sep 9 00:20:06.938813 kernel: smpboot: Max logical packages: 1 Sep 9 00:20:06.938821 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 9 00:20:06.938828 kernel: devtmpfs: initialized Sep 9 00:20:06.938836 kernel: x86/mm: Memory block size: 128MB Sep 9 00:20:06.938844 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 9 00:20:06.938854 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 9 00:20:06.938869 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Sep 9 00:20:06.938877 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 9 00:20:06.938885 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 9 00:20:06.938893 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 9 00:20:06.938900 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 9 00:20:06.938908 kernel: pinctrl core: initialized pinctrl subsystem Sep 9 00:20:06.938916 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 9 00:20:06.938924 kernel: audit: initializing netlink subsys (disabled) Sep 9 00:20:06.938934 kernel: audit: type=2000 audit(1757377205.677:1): state=initialized audit_enabled=0 res=1 Sep 9 00:20:06.938942 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 9 00:20:06.938950 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 9 00:20:06.938957 kernel: cpuidle: using governor menu Sep 9 00:20:06.938965 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 9 00:20:06.938973 kernel: dca service started, version 1.12.1 Sep 9 00:20:06.938981 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 9 00:20:06.938989 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Sep 9 00:20:06.938996 kernel: PCI: Using configuration type 1 for base access Sep 9 00:20:06.939007 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 9 00:20:06.939014 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 9 00:20:06.939022 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 9 00:20:06.939030 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 9 00:20:06.939038 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 9 00:20:06.939045 kernel: ACPI: Added _OSI(Module Device) Sep 9 00:20:06.939053 kernel: ACPI: Added _OSI(Processor Device) Sep 9 00:20:06.939061 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 9 00:20:06.939069 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 9 00:20:06.939079 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 9 00:20:06.939086 kernel: ACPI: Interpreter enabled Sep 9 00:20:06.939094 kernel: ACPI: PM: (supports S0 S3 S5) Sep 9 00:20:06.939102 kernel: ACPI: Using IOAPIC for interrupt routing Sep 9 00:20:06.939110 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 9 00:20:06.939117 kernel: PCI: Using E820 reservations for host bridge windows Sep 9 00:20:06.939125 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 9 00:20:06.939133 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 9 00:20:06.939396 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 9 00:20:06.939678 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 9 00:20:06.939908 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 9 00:20:06.939925 kernel: PCI host bridge to bus 0000:00 Sep 9 00:20:06.940130 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 9 00:20:06.940285 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 9 00:20:06.940430 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 9 00:20:06.940573 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Sep 9 00:20:06.940691 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 9 00:20:06.940805 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Sep 9 00:20:06.940932 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 9 00:20:06.941111 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 9 00:20:06.942359 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 9 00:20:06.942622 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Sep 9 00:20:06.942767 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Sep 9 00:20:06.942906 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 9 00:20:06.943039 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Sep 9 00:20:06.943221 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 9 00:20:06.943421 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 9 00:20:06.943603 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Sep 9 00:20:06.943756 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Sep 9 00:20:06.943898 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Sep 9 00:20:06.944048 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 9 00:20:06.945386 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Sep 9 00:20:06.945521 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Sep 9 00:20:06.945665 kernel: pci 
0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Sep 9 00:20:06.945810 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 9 00:20:06.945957 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Sep 9 00:20:06.946090 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Sep 9 00:20:06.946225 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Sep 9 00:20:06.946352 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Sep 9 00:20:06.946499 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 9 00:20:06.946645 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 9 00:20:06.946791 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 9 00:20:06.946962 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Sep 9 00:20:06.947093 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Sep 9 00:20:06.947240 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 9 00:20:06.947369 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Sep 9 00:20:06.947380 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 9 00:20:06.947389 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 9 00:20:06.947397 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 9 00:20:06.947410 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 9 00:20:06.947418 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 9 00:20:06.947426 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 9 00:20:06.947434 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 9 00:20:06.947442 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 9 00:20:06.947450 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 9 00:20:06.947457 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 9 00:20:06.947465 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 9 00:20:06.947473 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 9 00:20:06.947484 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 9 00:20:06.947492 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 9 00:20:06.947500 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 9 00:20:06.947507 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 9 00:20:06.947515 kernel: iommu: Default domain type: Translated Sep 9 00:20:06.947523 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 9 00:20:06.947531 kernel: efivars: Registered efivars operations Sep 9 00:20:06.947599 kernel: PCI: Using ACPI for IRQ routing Sep 9 00:20:06.947607 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 9 00:20:06.947619 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 9 00:20:06.947627 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Sep 9 00:20:06.947635 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Sep 9 00:20:06.947642 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Sep 9 00:20:06.947774 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 9 00:20:06.947912 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 9 00:20:06.948039 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 9 00:20:06.948050 kernel: vgaarb: loaded Sep 9 00:20:06.948058 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 9 00:20:06.948072 kernel: hpet0: 3 comparators, 
64-bit 100.000000 MHz counter Sep 9 00:20:06.948081 kernel: clocksource: Switched to clocksource kvm-clock Sep 9 00:20:06.948089 kernel: VFS: Disk quotas dquot_6.6.0 Sep 9 00:20:06.948097 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 9 00:20:06.948105 kernel: pnp: PnP ACPI init Sep 9 00:20:06.948292 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 9 00:20:06.948305 kernel: pnp: PnP ACPI: found 6 devices Sep 9 00:20:06.948313 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 9 00:20:06.948325 kernel: NET: Registered PF_INET protocol family Sep 9 00:20:06.948333 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 9 00:20:06.948341 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 9 00:20:06.948349 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 9 00:20:06.948357 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 9 00:20:06.948365 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 9 00:20:06.948373 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 9 00:20:06.948381 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 00:20:06.948389 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 00:20:06.948400 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 9 00:20:06.948408 kernel: NET: Registered PF_XDP protocol family Sep 9 00:20:06.948552 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Sep 9 00:20:06.948723 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Sep 9 00:20:06.949980 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 9 00:20:06.950102 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 9 00:20:06.950216 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 9 00:20:06.950330 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Sep 9 00:20:06.950451 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 9 00:20:06.950581 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Sep 9 00:20:06.950593 kernel: PCI: CLS 0 bytes, default 64 Sep 9 00:20:06.950601 kernel: Initialise system trusted keyrings Sep 9 00:20:06.950610 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 9 00:20:06.950618 kernel: Key type asymmetric registered Sep 9 00:20:06.950626 kernel: Asymmetric key parser 'x509' registered Sep 9 00:20:06.950634 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 9 00:20:06.950646 kernel: io scheduler mq-deadline registered Sep 9 00:20:06.950654 kernel: io scheduler kyber registered Sep 9 00:20:06.950662 kernel: io scheduler bfq registered Sep 9 00:20:06.950670 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 9 00:20:06.950679 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 9 00:20:06.950687 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 9 00:20:06.950695 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 9 00:20:06.950703 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 00:20:06.950711 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 9 00:20:06.950722 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 
1,12 Sep 9 00:20:06.950730 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 9 00:20:06.950738 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 9 00:20:06.950882 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 9 00:20:06.950894 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 9 00:20:06.951047 kernel: rtc_cmos 00:04: registered as rtc0 Sep 9 00:20:06.951173 kernel: rtc_cmos 00:04: setting system clock to 2025-09-09T00:20:06 UTC (1757377206) Sep 9 00:20:06.951293 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 9 00:20:06.951309 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 9 00:20:06.951317 kernel: efifb: probing for efifb Sep 9 00:20:06.951325 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Sep 9 00:20:06.951333 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Sep 9 00:20:06.951341 kernel: efifb: scrolling: redraw Sep 9 00:20:06.951349 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Sep 9 00:20:06.951357 kernel: Console: switching to colour frame buffer device 100x37 Sep 9 00:20:06.951383 kernel: fb0: EFI VGA frame buffer device Sep 9 00:20:06.951394 kernel: pstore: Using crash dump compression: deflate Sep 9 00:20:06.951406 kernel: pstore: Registered efi_pstore as persistent store backend Sep 9 00:20:06.951414 kernel: NET: Registered PF_INET6 protocol family Sep 9 00:20:06.951422 kernel: Segment Routing with IPv6 Sep 9 00:20:06.951430 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 00:20:06.951439 kernel: NET: Registered PF_PACKET protocol family Sep 9 00:20:06.951447 kernel: Key type dns_resolver registered Sep 9 00:20:06.951455 kernel: IPI shorthand broadcast: enabled Sep 9 00:20:06.951463 kernel: sched_clock: Marking stable (984002618, 139588894)->(1155529553, -31938041) Sep 9 00:20:06.951472 kernel: registered taskstats version 1 Sep 9 00:20:06.951482 kernel: Loading compiled-in X.509 certificates Sep 9 00:20:06.951491 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: cc5240ef94b546331b2896cdc739274c03278c51' Sep 9 00:20:06.951500 kernel: Key type .fscrypt registered Sep 9 00:20:06.951508 kernel: Key type fscrypt-provisioning registered Sep 9 00:20:06.951516 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 9 00:20:06.951524 kernel: ima: Allocated hash algorithm: sha1 Sep 9 00:20:06.951532 kernel: ima: No architecture policies found Sep 9 00:20:06.951556 kernel: clk: Disabling unused clocks Sep 9 00:20:06.951564 kernel: Freeing unused kernel image (initmem) memory: 42880K Sep 9 00:20:06.951576 kernel: Write protecting the kernel read-only data: 36864k Sep 9 00:20:06.951584 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 9 00:20:06.951592 kernel: Run /init as init process Sep 9 00:20:06.951600 kernel: with arguments: Sep 9 00:20:06.951608 kernel: /init Sep 9 00:20:06.951617 kernel: with environment: Sep 9 00:20:06.951625 kernel: HOME=/ Sep 9 00:20:06.951632 kernel: TERM=linux Sep 9 00:20:06.951643 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 00:20:06.951657 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 9 00:20:06.951667 systemd[1]: Detected virtualization kvm. 
Sep 9 00:20:06.951676 systemd[1]: Detected architecture x86-64. Sep 9 00:20:06.951685 systemd[1]: Running in initrd. Sep 9 00:20:06.951699 systemd[1]: No hostname configured, using default hostname. Sep 9 00:20:06.951707 systemd[1]: Hostname set to . Sep 9 00:20:06.951716 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:20:06.951725 systemd[1]: Queued start job for default target initrd.target. Sep 9 00:20:06.951733 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:20:06.951742 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:20:06.951752 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 00:20:06.951761 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:20:06.951772 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 00:20:06.951781 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 00:20:06.951792 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 00:20:06.951801 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 00:20:06.951809 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:20:06.951818 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:20:06.951827 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:20:06.951838 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:20:06.951847 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:20:06.951855 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:20:06.951873 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:20:06.951882 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:20:06.951891 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 00:20:06.951899 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 9 00:20:06.951908 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:20:06.951920 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 00:20:06.951929 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:20:06.951937 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:20:06.951946 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 00:20:06.951955 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:20:06.951964 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 00:20:06.951972 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 00:20:06.951981 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:20:06.951990 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:20:06.952001 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:20:06.952009 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 00:20:06.952018 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Sep 9 00:20:06.952027 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 00:20:06.952036 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 00:20:06.952072 systemd-journald[190]: Collecting audit messages is disabled. Sep 9 00:20:06.952094 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:20:06.952103 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 00:20:06.952115 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 00:20:06.952124 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:20:06.952133 systemd-journald[190]: Journal started Sep 9 00:20:06.952151 systemd-journald[190]: Runtime Journal (/run/log/journal/638aea801f5d42598db3897c6c953ac4) is 6.0M, max 48.3M, 42.2M free. Sep 9 00:20:06.941130 systemd-modules-load[193]: Inserted module 'overlay' Sep 9 00:20:06.955127 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:20:06.955727 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:20:06.975596 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 00:20:06.975950 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:20:06.979047 kernel: Bridge firewalling registered Sep 9 00:20:06.979044 systemd-modules-load[193]: Inserted module 'br_netfilter' Sep 9 00:20:06.979428 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:20:06.981409 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:20:06.997688 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 00:20:06.999451 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:20:07.000749 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:20:07.010978 dracut-cmdline[221]: dracut-dracut-053 Sep 9 00:20:07.013471 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:20:07.016129 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=99a67175ee6aabbc03a22dabcade16d60ad192b31c4118a259bf1f24bbfa2d29 Sep 9 00:20:07.026744 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 00:20:07.062473 systemd-resolved[241]: Positive Trust Anchors: Sep 9 00:20:07.062495 systemd-resolved[241]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:20:07.062526 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:20:07.065293 systemd-resolved[241]: Defaulting to hostname 'linux'. Sep 9 00:20:07.066528 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:20:07.072620 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:20:07.121594 kernel: SCSI subsystem initialized Sep 9 00:20:07.131581 kernel: Loading iSCSI transport class v2.0-870. Sep 9 00:20:07.142592 kernel: iscsi: registered transport (tcp) Sep 9 00:20:07.165593 kernel: iscsi: registered transport (qla4xxx) Sep 9 00:20:07.165680 kernel: QLogic iSCSI HBA Driver Sep 9 00:20:07.225900 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 00:20:07.235843 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 00:20:07.264458 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 00:20:07.264564 kernel: device-mapper: uevent: version 1.0.3 Sep 9 00:20:07.264582 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 9 00:20:07.307588 kernel: raid6: avx2x4 gen() 29233 MB/s Sep 9 00:20:07.324589 kernel: raid6: avx2x2 gen() 28976 MB/s Sep 9 00:20:07.341642 kernel: raid6: avx2x1 gen() 23679 MB/s Sep 9 00:20:07.341690 kernel: raid6: using algorithm avx2x4 gen() 29233 MB/s Sep 9 00:20:07.359653 kernel: raid6: .... xor() 6946 MB/s, rmw enabled Sep 9 00:20:07.359684 kernel: raid6: using avx2x2 recovery algorithm Sep 9 00:20:07.381589 kernel: xor: automatically using best checksumming function avx Sep 9 00:20:07.562598 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 00:20:07.579805 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:20:07.589833 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:20:07.602678 systemd-udevd[412]: Using default interface naming scheme 'v255'. Sep 9 00:20:07.607582 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:20:07.621762 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 00:20:07.638154 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation Sep 9 00:20:07.682241 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:20:07.689916 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:20:07.767235 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:20:07.778779 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 9 00:20:07.796430 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 00:20:07.798305 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Sep 9 00:20:07.801944 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:20:07.803266 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:20:07.810628 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 9 00:20:07.813437 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 9 00:20:07.816698 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 00:20:07.821166 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 00:20:07.821186 kernel: GPT:9289727 != 19775487 Sep 9 00:20:07.821200 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 00:20:07.821213 kernel: GPT:9289727 != 19775487 Sep 9 00:20:07.821235 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 00:20:07.821248 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:20:07.832677 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:20:07.837569 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 00:20:07.850609 kernel: AVX2 version of gcm_enc/dec engaged. Sep 9 00:20:07.850676 kernel: AES CTR mode by8 optimization enabled Sep 9 00:20:07.859599 kernel: libata version 3.00 loaded. Sep 9 00:20:07.863310 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 00:20:07.863515 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:20:07.870611 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 00:20:07.880818 kernel: BTRFS: device fsid 7cd16ef1-c91b-4e35-a9b3-a431b3c1949a devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (464) Sep 9 00:20:07.880880 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (473) Sep 9 00:20:07.876077 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:20:07.876402 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:20:07.877959 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:20:07.891974 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:20:07.898661 kernel: ahci 0000:00:1f.2: version 3.0 Sep 9 00:20:07.898957 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 9 00:20:07.898976 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 9 00:20:07.899177 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 9 00:20:07.907565 kernel: scsi host0: ahci Sep 9 00:20:07.907807 kernel: scsi host1: ahci Sep 9 00:20:07.907996 kernel: scsi host2: ahci Sep 9 00:20:07.909567 kernel: scsi host3: ahci Sep 9 00:20:07.909768 kernel: scsi host4: ahci Sep 9 00:20:07.910265 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 9 00:20:07.918757 kernel: scsi host5: ahci Sep 9 00:20:07.919118 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Sep 9 00:20:07.919137 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Sep 9 00:20:07.919152 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Sep 9 00:20:07.919166 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Sep 9 00:20:07.919180 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Sep 9 00:20:07.919202 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Sep 9 00:20:07.928434 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 9 00:20:07.937747 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 9 00:20:07.943186 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 9 00:20:07.944671 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 9 00:20:07.949457 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 00:20:07.964805 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 00:20:07.968278 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 00:20:07.980637 disk-uuid[568]: Primary Header is updated. Sep 9 00:20:07.980637 disk-uuid[568]: Secondary Entries is updated. Sep 9 00:20:07.980637 disk-uuid[568]: Secondary Header is updated. Sep 9 00:20:07.984594 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:20:07.989584 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:20:07.990229 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:20:07.998586 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:20:07.999602 kernel: block device autoloading is deprecated and will be removed. Sep 9 00:20:08.228625 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 9 00:20:08.228715 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 9 00:20:08.228738 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 9 00:20:08.230592 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 9 00:20:08.230684 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 9 00:20:08.231568 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 9 00:20:08.232602 kernel: ata3.00: applying bridge limits Sep 9 00:20:08.233577 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 9 00:20:08.233603 kernel: ata3.00: configured for UDMA/100 Sep 9 00:20:08.234590 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 9 00:20:08.280642 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 9 00:20:08.281165 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 9 00:20:08.295589 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 9 00:20:09.017580 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:20:09.017843 disk-uuid[572]: The operation has completed successfully. Sep 9 00:20:09.052898 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 00:20:09.053051 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Sep 9 00:20:09.072748 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 00:20:09.078906 sh[596]: Success Sep 9 00:20:09.094575 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 9 00:20:09.130628 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 00:20:09.143486 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 00:20:09.146676 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 9 00:20:09.162654 kernel: BTRFS info (device dm-0): first mount of filesystem 7cd16ef1-c91b-4e35-a9b3-a431b3c1949a Sep 9 00:20:09.162710 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:20:09.162726 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 9 00:20:09.164574 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 00:20:09.164609 kernel: BTRFS info (device dm-0): using free space tree Sep 9 00:20:09.169963 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 00:20:09.171781 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 00:20:09.185732 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 00:20:09.187584 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 00:20:09.201198 kernel: BTRFS info (device vda6): first mount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33 Sep 9 00:20:09.201267 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:20:09.201284 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:20:09.204607 kernel: BTRFS info (device vda6): auto enabling async discard Sep 9 00:20:09.219362 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 9 00:20:09.221379 kernel: BTRFS info (device vda6): last unmount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33 Sep 9 00:20:09.230204 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 00:20:09.241885 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 9 00:20:09.311650 ignition[690]: Ignition 2.19.0 Sep 9 00:20:09.311665 ignition[690]: Stage: fetch-offline Sep 9 00:20:09.311721 ignition[690]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:20:09.311733 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:20:09.311862 ignition[690]: parsed url from cmdline: "" Sep 9 00:20:09.311867 ignition[690]: no config URL provided Sep 9 00:20:09.311872 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 00:20:09.311890 ignition[690]: no config at "/usr/lib/ignition/user.ign" Sep 9 00:20:09.311921 ignition[690]: op(1): [started] loading QEMU firmware config module Sep 9 00:20:09.311926 ignition[690]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 00:20:09.322500 ignition[690]: op(1): [finished] loading QEMU firmware config module Sep 9 00:20:09.323993 ignition[690]: parsing config with SHA512: b936377cdb7bad393e8fc10713922144e2fb26a008494d30b2665f2e8dca73b08462ff5481735d6989dc1682287ecc83e932b55d9cdb1f31e3f1761552c56e88 Sep 9 00:20:09.327391 unknown[690]: fetched base config from "system" Sep 9 00:20:09.327680 ignition[690]: fetch-offline: fetch-offline passed Sep 9 00:20:09.327403 unknown[690]: fetched user config from "qemu" Sep 9 00:20:09.327743 ignition[690]: Ignition finished successfully Sep 9 00:20:09.330639 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:20:09.349845 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:20:09.361872 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:20:09.386921 systemd-networkd[787]: lo: Link UP Sep 9 00:20:09.386931 systemd-networkd[787]: lo: Gained carrier Sep 9 00:20:09.388827 systemd-networkd[787]: Enumeration completed Sep 9 00:20:09.388958 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:20:09.389303 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:20:09.389307 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:20:09.391990 systemd[1]: Reached target network.target - Network. Sep 9 00:20:09.392068 systemd-networkd[787]: eth0: Link UP Sep 9 00:20:09.392073 systemd-networkd[787]: eth0: Gained carrier Sep 9 00:20:09.392082 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:20:09.394010 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 00:20:09.403705 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 00:20:09.411618 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:20:09.424441 ignition[789]: Ignition 2.19.0 Sep 9 00:20:09.424455 ignition[789]: Stage: kargs Sep 9 00:20:09.424704 ignition[789]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:20:09.424724 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:20:09.429149 ignition[789]: kargs: kargs passed Sep 9 00:20:09.429204 ignition[789]: Ignition finished successfully Sep 9 00:20:09.433622 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Sep 9 00:20:09.441177 systemd-resolved[241]: Detected conflict on linux IN A 10.0.0.26 Sep 9 00:20:09.441195 systemd-resolved[241]: Hostname conflict, changing published hostname from 'linux' to 'linux2'. Sep 9 00:20:09.446766 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 9 00:20:09.461621 ignition[798]: Ignition 2.19.0 Sep 9 00:20:09.461635 ignition[798]: Stage: disks Sep 9 00:20:09.461837 ignition[798]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:20:09.461850 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:20:09.464755 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 00:20:09.462452 ignition[798]: disks: disks passed Sep 9 00:20:09.466692 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 00:20:09.462503 ignition[798]: Ignition finished successfully Sep 9 00:20:09.468554 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 00:20:09.470432 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:20:09.472640 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:20:09.473681 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:20:09.484707 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 00:20:09.523563 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 9 00:20:09.681681 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 00:20:09.691741 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 00:20:09.785567 kernel: EXT4-fs (vda9): mounted filesystem ee55a213-d578-493d-a79b-e10c399cd35c r/w with ordered data mode. Quota mode: none. Sep 9 00:20:09.785908 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 00:20:09.787497 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 00:20:09.798705 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 00:20:09.800926 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 00:20:09.802144 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 9 00:20:09.802195 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 00:20:09.815560 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815) Sep 9 00:20:09.815591 kernel: BTRFS info (device vda6): first mount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33 Sep 9 00:20:09.815607 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:20:09.815621 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:20:09.815633 kernel: BTRFS info (device vda6): auto enabling async discard Sep 9 00:20:09.802224 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:20:09.811188 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 00:20:09.817292 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 00:20:09.830743 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 9 00:20:09.870387 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 00:20:09.876560 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Sep 9 00:20:09.881470 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 00:20:09.887239 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 00:20:09.995707 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 00:20:10.006681 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 00:20:10.008893 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 00:20:10.017575 kernel: BTRFS info (device vda6): last unmount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33 Sep 9 00:20:10.042886 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 9 00:20:10.046985 ignition[928]: INFO : Ignition 2.19.0 Sep 9 00:20:10.046985 ignition[928]: INFO : Stage: mount Sep 9 00:20:10.049071 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:20:10.049071 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:20:10.049071 ignition[928]: INFO : mount: mount passed Sep 9 00:20:10.049071 ignition[928]: INFO : Ignition finished successfully Sep 9 00:20:10.051749 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 00:20:10.062919 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 00:20:10.162059 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 00:20:10.174910 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 00:20:10.182589 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940) Sep 9 00:20:10.182661 kernel: BTRFS info (device vda6): first mount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33 Sep 9 00:20:10.183581 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:20:10.185000 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:20:10.187578 kernel: BTRFS info (device vda6): auto enabling async discard Sep 9 00:20:10.189571 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 9 00:20:10.210458 ignition[957]: INFO : Ignition 2.19.0 Sep 9 00:20:10.210458 ignition[957]: INFO : Stage: files Sep 9 00:20:10.213254 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:20:10.213254 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:20:10.213254 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Sep 9 00:20:10.213254 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 00:20:10.213254 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 00:20:10.222140 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 00:20:10.222140 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 00:20:10.222140 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 00:20:10.222140 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Sep 9 00:20:10.222140 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 00:20:10.222140 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:20:10.222140 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:20:10.222140 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:20:10.222140 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:20:10.222140 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:20:10.222140 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 9 00:20:10.214968 unknown[957]: wrote ssh authorized keys file for user: core Sep 9 00:20:10.452746 systemd-networkd[787]: eth0: Gained IPv6LL Sep 9 00:20:10.540780 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Sep 9 00:20:11.971358 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:20:11.971358 ignition[957]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Sep 9 00:20:11.975825 ignition[957]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:20:11.975825 ignition[957]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:20:11.975825 ignition[957]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Sep 9 00:20:11.975825 ignition[957]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 00:20:12.009060 
ignition[957]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:20:12.079959 ignition[957]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:20:12.082089 ignition[957]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 00:20:12.083987 ignition[957]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:20:12.085865 ignition[957]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:20:12.087585 ignition[957]: INFO : files: files passed Sep 9 00:20:12.087585 ignition[957]: INFO : Ignition finished successfully Sep 9 00:20:12.092129 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 00:20:12.104831 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 00:20:12.106283 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 00:20:12.110713 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 00:20:12.110928 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 00:20:12.119984 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory Sep 9 00:20:12.123805 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:20:12.123805 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:20:12.127070 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:20:12.127154 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:20:12.130189 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 00:20:12.138763 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 00:20:12.173654 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 00:20:12.173819 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 00:20:12.176254 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 00:20:12.178448 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 00:20:12.178619 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 00:20:12.179636 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 00:20:12.202128 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:20:12.211811 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 00:20:12.225778 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:20:12.227203 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:20:12.229450 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 00:20:12.231583 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 00:20:12.231876 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:20:12.234122 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
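Note: ops (5) and (6) of the files stage above placed the kubernetes sysext image under /opt/extensions and linked it into /etc/extensions so that systemd-sysext can activate it later in boot. A Python sketch of the equivalent two steps, run against a scratch directory rather than /sysroot; the scratch location is illustrative only, and the download is several hundred MB in practice:

    import urllib.request
    from pathlib import Path

    # Scratch directory standing in for /sysroot; purely illustrative.
    root = Path("/tmp/sysroot-demo")
    url = "https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw"
    image = root / "opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
    link = root / "etc/extensions/kubernetes.raw"

    image.parent.mkdir(parents=True, exist_ok=True)
    link.parent.mkdir(parents=True, exist_ok=True)
    # op(6): fetch the sysext image
    urllib.request.urlretrieve(url, image)
    # op(5): activation symlink pointing at the absolute path used after pivot
    if not link.is_symlink():
        link.symlink_to("/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw")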
Sep 9 00:20:12.235656 systemd[1]: Stopped target basic.target - Basic System. Sep 9 00:20:12.237712 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 00:20:12.239796 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:20:12.241948 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 00:20:12.244406 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 00:20:12.246769 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:20:12.249245 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 00:20:12.251299 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 00:20:12.253509 systemd[1]: Stopped target swap.target - Swaps. Sep 9 00:20:12.255319 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 00:20:12.255561 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:20:12.257885 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:20:12.259294 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:20:12.261341 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 00:20:12.261480 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:20:12.263711 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 00:20:12.263982 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 00:20:12.266284 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 00:20:12.266483 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:20:12.268459 systemd[1]: Stopped target paths.target - Path Units. Sep 9 00:20:12.270167 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 00:20:12.270322 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:20:12.272902 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 00:20:12.274724 systemd[1]: Stopped target sockets.target - Socket Units. Sep 9 00:20:12.276735 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 00:20:12.277036 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:20:12.278691 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 00:20:12.278825 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:20:12.280885 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 00:20:12.281040 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:20:12.282887 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 00:20:12.283028 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 00:20:12.302947 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 00:20:12.304988 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 00:20:12.305198 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:20:12.308298 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 00:20:12.309160 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 00:20:12.309291 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Sep 9 00:20:12.311556 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 00:20:12.311677 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:20:12.317900 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 00:20:12.318052 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 00:20:12.326283 ignition[1011]: INFO : Ignition 2.19.0 Sep 9 00:20:12.326283 ignition[1011]: INFO : Stage: umount Sep 9 00:20:12.328161 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:20:12.328161 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:20:12.328161 ignition[1011]: INFO : umount: umount passed Sep 9 00:20:12.328161 ignition[1011]: INFO : Ignition finished successfully Sep 9 00:20:12.329769 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 00:20:12.329896 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 00:20:12.331570 systemd[1]: Stopped target network.target - Network. Sep 9 00:20:12.332992 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 00:20:12.333048 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 00:20:12.334945 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 00:20:12.335004 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 00:20:12.336867 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 00:20:12.336918 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 00:20:12.338757 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 00:20:12.338831 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 00:20:12.341103 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 00:20:12.343215 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 00:20:12.346639 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 00:20:12.348594 systemd-networkd[787]: eth0: DHCPv6 lease lost Sep 9 00:20:12.351329 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 00:20:12.351481 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 00:20:12.353753 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 00:20:12.353796 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:20:12.362683 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 00:20:12.363727 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 00:20:12.363786 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:20:12.366144 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:20:12.368793 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 00:20:12.368924 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 00:20:12.377038 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:20:12.377121 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:20:12.589283 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 00:20:12.589395 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 00:20:12.809088 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Sep 9 00:20:12.809198 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:20:12.812925 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 00:20:12.813187 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:20:12.815881 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 00:20:12.816030 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 00:20:12.818024 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 00:20:12.818172 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 00:20:12.822470 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 00:20:12.822569 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 00:20:12.824090 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 00:20:12.824146 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:20:12.826082 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 00:20:12.826155 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:20:12.828358 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 00:20:12.828412 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 00:20:12.830063 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 00:20:12.830131 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:20:12.832157 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 00:20:12.832210 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 00:20:12.839692 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 00:20:12.841153 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 00:20:12.841215 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:20:12.843354 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:20:12.843412 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:20:12.847920 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 00:20:12.848040 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 00:20:12.850258 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 00:20:12.856696 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 00:20:12.864222 systemd[1]: Switching root. Sep 9 00:20:12.906849 systemd-journald[190]: Journal stopped Sep 9 00:20:14.437454 systemd-journald[190]: Received SIGTERM from PID 1 (systemd). 
Sep 9 00:20:14.439690 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 00:20:14.439721 kernel: SELinux: policy capability open_perms=1 Sep 9 00:20:14.439747 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 00:20:14.439770 kernel: SELinux: policy capability always_check_network=0 Sep 9 00:20:14.439791 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 00:20:14.439807 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 00:20:14.439829 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 00:20:14.439845 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 00:20:14.439860 kernel: audit: type=1403 audit(1757377213.583:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 00:20:14.439877 systemd[1]: Successfully loaded SELinux policy in 45.351ms. Sep 9 00:20:14.439906 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.530ms. Sep 9 00:20:14.439924 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 9 00:20:14.439941 systemd[1]: Detected virtualization kvm. Sep 9 00:20:14.439958 systemd[1]: Detected architecture x86-64. Sep 9 00:20:14.439974 systemd[1]: Detected first boot. Sep 9 00:20:14.439991 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:20:14.440016 zram_generator::config[1058]: No configuration found. Sep 9 00:20:14.440036 systemd[1]: Populated /etc with preset unit settings. Sep 9 00:20:14.440052 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 00:20:14.440069 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 00:20:14.440086 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 00:20:14.440103 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 00:20:14.440120 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 00:20:14.440135 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 00:20:14.440150 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 00:20:14.440164 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 00:20:14.440177 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 00:20:14.440189 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 00:20:14.440201 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 00:20:14.440214 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:20:14.440226 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:20:14.440244 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 00:20:14.440257 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 00:20:14.440275 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 00:20:14.440288 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Sep 9 00:20:14.440302 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 9 00:20:14.440315 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:20:14.440327 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 00:20:14.440340 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 00:20:14.440353 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 00:20:14.440371 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 00:20:14.440384 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:20:14.440396 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:20:14.440409 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:20:14.440421 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:20:14.440434 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 00:20:14.440446 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 00:20:14.440459 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:20:14.440472 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 00:20:14.440485 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:20:14.440500 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 00:20:14.440513 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 00:20:14.440525 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 00:20:14.440537 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 00:20:14.440581 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:20:14.440599 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 00:20:14.440615 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 00:20:14.440631 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 00:20:14.440666 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 00:20:14.440683 systemd[1]: Reached target machines.target - Containers. Sep 9 00:20:14.440700 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 00:20:14.440716 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:20:14.440734 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:20:14.440751 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 00:20:14.440767 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:20:14.440787 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:20:14.440803 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:20:14.440827 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 00:20:14.440843 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 9 00:20:14.440860 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 00:20:14.440876 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 00:20:14.440892 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 00:20:14.440908 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 00:20:14.440922 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 00:20:14.440934 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:20:14.440950 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:20:14.440962 kernel: fuse: init (API version 7.39) Sep 9 00:20:14.440974 kernel: loop: module loaded Sep 9 00:20:14.440986 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 00:20:14.440999 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 00:20:14.441011 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:20:14.441024 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 00:20:14.441036 systemd[1]: Stopped verity-setup.service. Sep 9 00:20:14.441049 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:20:14.441064 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 00:20:14.441080 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 00:20:14.441097 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 00:20:14.441111 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 00:20:14.441124 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 00:20:14.441139 kernel: ACPI: bus type drm_connector registered Sep 9 00:20:14.441151 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 00:20:14.441164 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:20:14.441177 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 00:20:14.441193 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 00:20:14.441205 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:20:14.441217 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:20:14.441230 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:20:14.441242 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:20:14.441282 systemd-journald[1132]: Collecting audit messages is disabled. Sep 9 00:20:14.441305 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:20:14.441318 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:20:14.441331 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 00:20:14.441343 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 00:20:14.441356 systemd-journald[1132]: Journal started Sep 9 00:20:14.441380 systemd-journald[1132]: Runtime Journal (/run/log/journal/638aea801f5d42598db3897c6c953ac4) is 6.0M, max 48.3M, 42.2M free. Sep 9 00:20:14.133396 systemd[1]: Queued start job for default target multi-user.target. 
Sep 9 00:20:14.149803 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 00:20:14.150288 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 00:20:14.444770 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:20:14.445677 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 00:20:14.447596 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:20:14.447827 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:20:14.449481 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:20:14.451054 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:20:14.452900 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 00:20:14.467938 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 00:20:14.482694 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 00:20:14.485722 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 00:20:14.487027 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 00:20:14.487072 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:20:14.489157 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 9 00:20:14.491737 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 00:20:14.494522 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 00:20:14.496039 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:20:14.499481 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 00:20:14.505075 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 00:20:14.506859 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:20:14.508629 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 00:20:14.510184 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:20:14.514380 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:20:14.517306 systemd-journald[1132]: Time spent on flushing to /var/log/journal/638aea801f5d42598db3897c6c953ac4 is 24.954ms for 976 entries. Sep 9 00:20:14.517306 systemd-journald[1132]: System Journal (/var/log/journal/638aea801f5d42598db3897c6c953ac4) is 8.0M, max 195.6M, 187.6M free. Sep 9 00:20:14.550902 systemd-journald[1132]: Received client request to flush runtime journal. Sep 9 00:20:14.550947 kernel: loop0: detected capacity change from 0 to 140768 Sep 9 00:20:14.518014 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 00:20:14.522771 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 00:20:14.528117 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 00:20:14.529827 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
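Note: a quick sanity check on the journal flush reported above (24.954 ms for 976 entries), using nothing beyond the numbers in the log:

    ms_spent, entries = 24.954, 976          # figures from the journald line above
    per_entry_us = ms_spent / entries * 1000
    print(f"flush cost: {per_entry_us:.1f} us per entry")   # ~25.6 us per entry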
Sep 9 00:20:14.534168 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 00:20:14.546088 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 00:20:14.548249 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 00:20:14.560763 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 9 00:20:14.563168 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 00:20:14.565436 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:20:14.577445 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:20:14.589711 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 9 00:20:14.593891 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 00:20:14.596881 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 00:20:14.597696 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 9 00:20:14.599855 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 00:20:14.613850 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:20:14.615673 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 9 00:20:15.015713 kernel: loop1: detected capacity change from 0 to 229808 Sep 9 00:20:15.033017 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Sep 9 00:20:15.033038 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Sep 9 00:20:15.041084 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:20:15.147594 kernel: loop2: detected capacity change from 0 to 142488 Sep 9 00:20:15.194571 kernel: loop3: detected capacity change from 0 to 140768 Sep 9 00:20:15.208575 kernel: loop4: detected capacity change from 0 to 229808 Sep 9 00:20:15.218568 kernel: loop5: detected capacity change from 0 to 142488 Sep 9 00:20:15.226456 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 00:20:15.227141 (sd-merge)[1196]: Merged extensions into '/usr'. Sep 9 00:20:15.235210 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 00:20:15.235228 systemd[1]: Reloading... Sep 9 00:20:15.317563 zram_generator::config[1219]: No configuration found. Sep 9 00:20:15.409889 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 00:20:15.472621 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:20:15.531334 systemd[1]: Reloading finished in 295 ms. Sep 9 00:20:15.567727 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 00:20:15.569519 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 00:20:15.589818 systemd[1]: Starting ensure-sysext.service... Sep 9 00:20:15.592532 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
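Note: the systemd-sysext merge above ('containerd-flatcar', 'docker-flatcar', 'kubernetes') overlays those extension images onto /usr. A small Python sketch that lists which extension images or directories are present in the standard search locations; the directory list is an assumption based on systemd-sysext(8), and on this host the log shows Ignition placing kubernetes.raw under /etc/extensions:

    from pathlib import Path

    # Search locations per systemd-sysext(8).
    SEARCH_DIRS = [Path("/etc/extensions"), Path("/run/extensions"),
                   Path("/var/lib/extensions")]

    def list_extensions():
        for d in SEARCH_DIRS:
            if d.is_dir():
                for entry in sorted(d.iterdir()):
                    yield d, entry.name

    for directory, name in list_extensions():
        print(f"{directory}: {name}")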
Sep 9 00:20:15.659732 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 00:20:15.660121 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 00:20:15.661158 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 00:20:15.661467 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Sep 9 00:20:15.661591 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Sep 9 00:20:15.665394 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:20:15.665407 systemd-tmpfiles[1261]: Skipping /boot Sep 9 00:20:15.667361 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... Sep 9 00:20:15.667378 systemd[1]: Reloading... Sep 9 00:20:15.678757 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:20:15.678771 systemd-tmpfiles[1261]: Skipping /boot Sep 9 00:20:15.723578 zram_generator::config[1288]: No configuration found. Sep 9 00:20:15.873341 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:20:15.923173 systemd[1]: Reloading finished in 255 ms. Sep 9 00:20:15.941209 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 00:20:15.954000 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:20:15.961252 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 9 00:20:15.963916 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 00:20:15.966659 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 00:20:15.972162 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 00:20:15.975380 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:20:15.979731 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 00:20:15.986051 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 00:20:15.992064 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:20:15.992276 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:20:16.002811 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:20:16.006823 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:20:16.009759 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:20:16.012053 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:20:16.013369 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:20:16.013517 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:20:16.015356 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Sep 9 00:20:16.020297 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:20:16.021435 systemd-udevd[1335]: Using default interface naming scheme 'v255'. Sep 9 00:20:16.021925 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:20:16.024069 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:20:16.024302 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:20:16.029888 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:20:16.030075 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:20:16.034170 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:20:16.034810 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:20:16.042151 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 00:20:16.045469 systemd[1]: Finished ensure-sysext.service. Sep 9 00:20:16.046704 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:20:16.057576 augenrules[1367]: No rules Sep 9 00:20:16.064702 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:20:16.066188 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:20:16.066297 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:20:16.073810 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 00:20:16.077193 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 00:20:16.078747 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 00:20:16.080619 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 9 00:20:16.082444 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 00:20:16.098928 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:20:16.110643 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 00:20:16.120072 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1359) Sep 9 00:20:16.124106 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 9 00:20:16.183443 systemd-networkd[1376]: lo: Link UP Sep 9 00:20:16.183461 systemd-networkd[1376]: lo: Gained carrier Sep 9 00:20:16.185674 systemd-networkd[1376]: Enumeration completed Sep 9 00:20:16.185799 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:20:16.187505 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:20:16.187515 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:20:16.190113 systemd-networkd[1376]: eth0: Link UP Sep 9 00:20:16.190123 systemd-networkd[1376]: eth0: Gained carrier Sep 9 00:20:16.190137 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 9 00:20:16.219816 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 00:20:16.231863 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 00:20:16.236079 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 00:20:16.237631 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 00:20:16.237960 systemd-networkd[1376]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:20:16.238901 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection. Sep 9 00:20:16.239012 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 00:20:16.243853 systemd-timesyncd[1383]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 00:20:16.244113 systemd-timesyncd[1383]: Initial clock synchronization to Tue 2025-09-09 00:20:16.364612 UTC. Sep 9 00:20:16.244584 systemd-resolved[1332]: Positive Trust Anchors: Sep 9 00:20:16.244621 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:20:16.244660 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:20:16.252735 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 9 00:20:16.253900 systemd-resolved[1332]: Defaulting to hostname 'linux'. Sep 9 00:20:16.254332 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:20:16.257035 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:20:16.258590 kernel: ACPI: button: Power Button [PWRF] Sep 9 00:20:16.259402 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 00:20:16.270113 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 9 00:20:16.270602 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 9 00:20:16.271579 systemd[1]: Reached target network.target - Network. Sep 9 00:20:16.273204 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:20:16.282590 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Sep 9 00:20:16.300054 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 9 00:20:16.300597 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 9 00:20:16.376592 kernel: mousedev: PS/2 mouse device common for all mice Sep 9 00:20:16.393191 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:20:16.413889 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:20:16.414184 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 9 00:20:16.429763 kernel: kvm_amd: TSC scaling supported Sep 9 00:20:16.429860 kernel: kvm_amd: Nested Virtualization enabled Sep 9 00:20:16.429879 kernel: kvm_amd: Nested Paging enabled Sep 9 00:20:16.430699 kernel: kvm_amd: LBR virtualization supported Sep 9 00:20:16.430732 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 9 00:20:16.431681 kernel: kvm_amd: Virtual GIF supported Sep 9 00:20:16.434527 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:20:16.457579 kernel: EDAC MC: Ver: 3.0.0 Sep 9 00:20:16.495211 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 9 00:20:16.506790 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 9 00:20:16.508739 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:20:16.597778 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:20:16.633927 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 9 00:20:16.635502 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:20:16.636641 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:20:16.637903 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 00:20:16.639139 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 00:20:16.640619 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 00:20:16.641829 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 00:20:16.643040 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 00:20:16.644233 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 00:20:16.644260 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:20:16.645140 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:20:16.646950 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 00:20:16.649839 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 00:20:16.664566 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 00:20:16.667216 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 9 00:20:16.669001 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 00:20:16.670210 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:20:16.671228 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:20:16.672212 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:20:16.672251 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:20:16.673675 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 00:20:16.676122 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 00:20:16.679732 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 00:20:16.681154 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:20:16.683655 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Sep 9 00:20:16.684777 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 00:20:16.687211 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 00:20:16.692285 jq[1431]: false Sep 9 00:20:16.692368 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 00:20:16.697709 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 00:20:16.701841 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 00:20:16.703521 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 00:20:16.703978 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 00:20:16.705582 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 00:20:16.710984 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 00:20:16.716510 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 9 00:20:16.727202 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 00:20:16.727434 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 00:20:16.727820 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 00:20:16.728050 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 00:20:16.728682 update_engine[1439]: I20250909 00:20:16.728607 1439 main.cc:92] Flatcar Update Engine starting Sep 9 00:20:16.729652 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 00:20:16.729902 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 00:20:16.731040 extend-filesystems[1432]: Found loop3 Sep 9 00:20:16.732101 extend-filesystems[1432]: Found loop4 Sep 9 00:20:16.732101 extend-filesystems[1432]: Found loop5 Sep 9 00:20:16.732101 extend-filesystems[1432]: Found sr0 Sep 9 00:20:16.732101 extend-filesystems[1432]: Found vda Sep 9 00:20:16.736638 jq[1441]: true Sep 9 00:20:16.736888 extend-filesystems[1432]: Found vda1 Sep 9 00:20:16.736888 extend-filesystems[1432]: Found vda2 Sep 9 00:20:16.736888 extend-filesystems[1432]: Found vda3 Sep 9 00:20:16.736888 extend-filesystems[1432]: Found usr Sep 9 00:20:16.736888 extend-filesystems[1432]: Found vda4 Sep 9 00:20:16.736888 extend-filesystems[1432]: Found vda6 Sep 9 00:20:16.736888 extend-filesystems[1432]: Found vda7 Sep 9 00:20:16.736888 extend-filesystems[1432]: Found vda9 Sep 9 00:20:16.736888 extend-filesystems[1432]: Checking size of /dev/vda9 Sep 9 00:20:16.746513 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 9 00:20:16.745235 dbus-daemon[1430]: [system] SELinux support is enabled Sep 9 00:20:16.754820 update_engine[1439]: I20250909 00:20:16.749857 1439 update_check_scheduler.cc:74] Next update check in 5m33s Sep 9 00:20:16.755455 (ntainerd)[1453]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 00:20:16.758565 extend-filesystems[1432]: Resized partition /dev/vda9 Sep 9 00:20:16.759483 jq[1451]: true Sep 9 00:20:16.768575 extend-filesystems[1461]: resize2fs 1.47.1 (20-May-2024) Sep 9 00:20:16.777186 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1390) Sep 9 00:20:16.772640 systemd[1]: Started update-engine.service - Update Engine. Sep 9 00:20:16.779578 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 00:20:16.780527 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 00:20:16.780658 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 00:20:16.782704 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 00:20:16.782730 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 00:20:16.790618 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 00:20:16.794252 systemd-logind[1437]: Watching system buttons on /dev/input/event1 (Power Button) Sep 9 00:20:16.794290 systemd-logind[1437]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 9 00:20:16.799780 systemd-logind[1437]: New seat seat0. Sep 9 00:20:16.807306 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 00:20:16.808575 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 00:20:16.832070 extend-filesystems[1461]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 00:20:16.832070 extend-filesystems[1461]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 00:20:16.832070 extend-filesystems[1461]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 00:20:16.838858 extend-filesystems[1432]: Resized filesystem in /dev/vda9 Sep 9 00:20:16.834928 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 00:20:16.835255 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 00:20:16.967503 locksmithd[1465]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 00:20:16.993667 sshd_keygen[1450]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 00:20:17.026274 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 00:20:17.036024 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 00:20:17.046762 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 00:20:17.047145 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 00:20:17.057800 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 00:20:17.130780 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 00:20:17.174967 systemd[1]: Started getty@tty1.service - Getty on tty1. 
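Note: translating the on-line resize reported above into bytes, assuming the 4k block size stated in the EXT4 messages:

    BLOCK_SIZE = 4096  # "(4k) blocks" per the EXT4 resize message above

    def blocks_to_gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before resize: {blocks_to_gib(553_472):.2f} GiB")    # ~2.11 GiB
    print(f"after resize:  {blocks_to_gib(1_864_699):.2f} GiB")  # ~7.11 GiB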
Sep 9 00:20:17.272679 bash[1480]: Updated "/home/core/.ssh/authorized_keys" Sep 9 00:20:17.314571 containerd[1453]: time="2025-09-09T00:20:17.314452147Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 9 00:20:17.353285 containerd[1453]: time="2025-09-09T00:20:17.353202323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:20:17.355527 containerd[1453]: time="2025-09-09T00:20:17.355476551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:20:17.355527 containerd[1453]: time="2025-09-09T00:20:17.355507744Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 9 00:20:17.355527 containerd[1453]: time="2025-09-09T00:20:17.355523813Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 9 00:20:17.355963 containerd[1453]: time="2025-09-09T00:20:17.355933364Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 9 00:20:17.355963 containerd[1453]: time="2025-09-09T00:20:17.355959146Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 9 00:20:17.356080 containerd[1453]: time="2025-09-09T00:20:17.356054379Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:20:17.356080 containerd[1453]: time="2025-09-09T00:20:17.356073478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:20:17.356316 containerd[1453]: time="2025-09-09T00:20:17.356290499Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:20:17.356316 containerd[1453]: time="2025-09-09T00:20:17.356308114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 9 00:20:17.356368 containerd[1453]: time="2025-09-09T00:20:17.356334404Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:20:17.356368 containerd[1453]: time="2025-09-09T00:20:17.356345713Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 9 00:20:17.356471 containerd[1453]: time="2025-09-09T00:20:17.356454351Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:20:17.356787 containerd[1453]: time="2025-09-09T00:20:17.356760533Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:20:17.356929 containerd[1453]: time="2025-09-09T00:20:17.356902244Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:20:17.356929 containerd[1453]: time="2025-09-09T00:20:17.356921751Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 9 00:20:17.357122 containerd[1453]: time="2025-09-09T00:20:17.357094989Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 9 00:20:17.357188 containerd[1453]: time="2025-09-09T00:20:17.357171987Z" level=info msg="metadata content store policy set" policy=shared Sep 9 00:20:17.363057 containerd[1453]: time="2025-09-09T00:20:17.362784944Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 9 00:20:17.363265 containerd[1453]: time="2025-09-09T00:20:17.363222087Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 9 00:20:17.363646 containerd[1453]: time="2025-09-09T00:20:17.363265097Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 9 00:20:17.363646 containerd[1453]: time="2025-09-09T00:20:17.363411933Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 9 00:20:17.363646 containerd[1453]: time="2025-09-09T00:20:17.363443440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 9 00:20:17.365532 containerd[1453]: time="2025-09-09T00:20:17.365492940Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 9 00:20:17.365902 containerd[1453]: time="2025-09-09T00:20:17.365879558Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 9 00:20:17.366102 containerd[1453]: time="2025-09-09T00:20:17.366072465Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 9 00:20:17.366140 containerd[1453]: time="2025-09-09T00:20:17.366115953Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 9 00:20:17.366176 containerd[1453]: time="2025-09-09T00:20:17.366136659Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 9 00:20:17.366176 containerd[1453]: time="2025-09-09T00:20:17.366154427Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 9 00:20:17.366176 containerd[1453]: time="2025-09-09T00:20:17.366169662Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 9 00:20:17.366260 containerd[1453]: time="2025-09-09T00:20:17.366184592Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 9 00:20:17.366260 containerd[1453]: time="2025-09-09T00:20:17.366203955Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 9 00:20:17.366260 containerd[1453]: time="2025-09-09T00:20:17.366222282Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Sep 9 00:20:17.366260 containerd[1453]: time="2025-09-09T00:20:17.366239591Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 9 00:20:17.366260 containerd[1453]: time="2025-09-09T00:20:17.366255731Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 9 00:20:17.366381 containerd[1453]: time="2025-09-09T00:20:17.366271150Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 9 00:20:17.366381 containerd[1453]: time="2025-09-09T00:20:17.366316589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 9 00:20:17.366381 containerd[1453]: time="2025-09-09T00:20:17.366337408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 9 00:20:17.366381 containerd[1453]: time="2025-09-09T00:20:17.366355654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 9 00:20:17.366725 containerd[1453]: time="2025-09-09T00:20:17.366387791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 9 00:20:17.366725 containerd[1453]: time="2025-09-09T00:20:17.366406148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 9 00:20:17.366725 containerd[1453]: time="2025-09-09T00:20:17.366438062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 9 00:20:17.366725 containerd[1453]: time="2025-09-09T00:20:17.366455361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 9 00:20:17.366725 containerd[1453]: time="2025-09-09T00:20:17.366472213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 9 00:20:17.366725 containerd[1453]: time="2025-09-09T00:20:17.366502530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 9 00:20:17.366725 containerd[1453]: time="2025-09-09T00:20:17.366524193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 9 00:20:17.366725 containerd[1453]: time="2025-09-09T00:20:17.366572481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 9 00:20:17.366725 containerd[1453]: time="2025-09-09T00:20:17.366590503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 9 00:20:17.366725 containerd[1453]: time="2025-09-09T00:20:17.366605798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 9 00:20:17.366725 containerd[1453]: time="2025-09-09T00:20:17.366626932Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 9 00:20:17.366725 containerd[1453]: time="2025-09-09T00:20:17.366655052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 9 00:20:17.366725 containerd[1453]: time="2025-09-09T00:20:17.366686742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Sep 9 00:20:17.366725 containerd[1453]: time="2025-09-09T00:20:17.366702211Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 9 00:20:17.367074 containerd[1453]: time="2025-09-09T00:20:17.366758890Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 9 00:20:17.367074 containerd[1453]: time="2025-09-09T00:20:17.366781732Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 9 00:20:17.367074 containerd[1453]: time="2025-09-09T00:20:17.366795817Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 9 00:20:17.367074 containerd[1453]: time="2025-09-09T00:20:17.366809710Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 9 00:20:17.367074 containerd[1453]: time="2025-09-09T00:20:17.366821660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 9 00:20:17.367074 containerd[1453]: time="2025-09-09T00:20:17.366836325Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 9 00:20:17.367074 containerd[1453]: time="2025-09-09T00:20:17.366861435Z" level=info msg="NRI interface is disabled by configuration." Sep 9 00:20:17.367074 containerd[1453]: time="2025-09-09T00:20:17.366876161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 9 00:20:17.367378 containerd[1453]: time="2025-09-09T00:20:17.367307457Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: 
TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 9 00:20:17.367378 containerd[1453]: time="2025-09-09T00:20:17.367382798Z" level=info msg="Connect containerd service" Sep 9 00:20:17.367719 containerd[1453]: time="2025-09-09T00:20:17.367423295Z" level=info msg="using legacy CRI server" Sep 9 00:20:17.367719 containerd[1453]: time="2025-09-09T00:20:17.367433048Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 00:20:17.367719 containerd[1453]: time="2025-09-09T00:20:17.367628966Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 9 00:20:17.368543 containerd[1453]: time="2025-09-09T00:20:17.368505134Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:20:17.368747 containerd[1453]: time="2025-09-09T00:20:17.368696639Z" level=info msg="Start subscribing containerd event" Sep 9 00:20:17.368808 containerd[1453]: time="2025-09-09T00:20:17.368762551Z" level=info msg="Start recovering state" Sep 9 00:20:17.368863 containerd[1453]: time="2025-09-09T00:20:17.368844076Z" level=info msg="Start event monitor" Sep 9 00:20:17.368893 containerd[1453]: time="2025-09-09T00:20:17.368871607Z" level=info msg="Start snapshots syncer" Sep 9 00:20:17.368893 containerd[1453]: time="2025-09-09T00:20:17.368882946Z" level=info msg="Start cni network conf syncer for default" Sep 9 00:20:17.368948 containerd[1453]: time="2025-09-09T00:20:17.368893350Z" level=info msg="Start streaming server" Sep 9 00:20:17.369001 containerd[1453]: time="2025-09-09T00:20:17.368979613Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 00:20:17.369200 containerd[1453]: time="2025-09-09T00:20:17.369049066Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 00:20:17.369263 containerd[1453]: time="2025-09-09T00:20:17.369245359Z" level=info msg="containerd successfully booted in 0.056099s" Sep 9 00:20:17.385050 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 9 00:20:17.386502 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 00:20:17.388050 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 00:20:17.389826 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 00:20:17.396195 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 9 00:20:17.685723 systemd-networkd[1376]: eth0: Gained IPv6LL Sep 9 00:20:17.689533 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
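containerd starts above but notes that no CNI network config exists yet in /etc/cni/net.d; later entries show the node waiting for another component (Calico) to drop one in. Purely to illustrate the file shape that loader expects, and not something to install on this host, a minimal conflist could be staged somewhere harmless such as /tmp (the subnet below mirrors the 192.168.1.0/24 pod CIDR the kubelet is handed later in this log):

  # Illustrative only: the shape of a CNI config that /etc/cni/net.d loaders accept
  cat <<'EOF' > /tmp/10-example.conflist
  {
    "cniVersion": "0.4.0",
    "name": "example-net",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
      }
    ]
  }
  EOF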
Sep 9 00:20:17.695858 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 00:20:17.706954 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 00:20:17.710249 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:20:17.713214 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 00:20:17.739258 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 00:20:17.739607 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 00:20:17.741662 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 00:20:17.744022 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 00:20:19.538789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:20:19.540833 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 00:20:19.542081 systemd[1]: Startup finished in 1.130s (kernel) + 6.857s (initrd) + 6.001s (userspace) = 13.989s. Sep 9 00:20:19.549845 (kubelet)[1535]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:20:20.437931 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 00:20:20.448715 systemd[1]: Started sshd@0-10.0.0.26:22-10.0.0.1:52534.service - OpenSSH per-connection server daemon (10.0.0.1:52534). Sep 9 00:20:20.469658 kubelet[1535]: E0909 00:20:20.469481 1535 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:20:20.474245 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:20:20.474481 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:20:20.474865 systemd[1]: kubelet.service: Consumed 2.440s CPU time. Sep 9 00:20:20.496614 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 52534 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:20:20.498844 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:20.508677 systemd-logind[1437]: New session 1 of user core. Sep 9 00:20:20.510094 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 00:20:20.527929 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 00:20:20.541122 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 00:20:20.543959 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 00:20:20.552541 (systemd)[1552]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:20:20.669737 systemd[1552]: Queued start job for default target default.target. Sep 9 00:20:20.686239 systemd[1552]: Created slice app.slice - User Application Slice. Sep 9 00:20:20.686269 systemd[1552]: Reached target paths.target - Paths. Sep 9 00:20:20.686285 systemd[1552]: Reached target timers.target - Timers. Sep 9 00:20:20.688355 systemd[1552]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 00:20:20.703655 systemd[1552]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
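The kubelet exit above comes from the missing /var/lib/kubelet/config.yaml; on a node like this that file is normally written by kubeadm during init/join rather than by hand. As a hedged sketch of what the kubelet expects at that path (the field values are illustrative assumptions, not taken from this system):

  # Illustrative KubeletConfiguration; kubeadm normally generates this file
  cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  EOF
  sudo systemctl restart kubelet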
Sep 9 00:20:20.703839 systemd[1552]: Reached target sockets.target - Sockets. Sep 9 00:20:20.703858 systemd[1552]: Reached target basic.target - Basic System. Sep 9 00:20:20.703909 systemd[1552]: Reached target default.target - Main User Target. Sep 9 00:20:20.703961 systemd[1552]: Startup finished in 144ms. Sep 9 00:20:20.704202 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 00:20:20.714750 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 00:20:20.781316 systemd[1]: Started sshd@1-10.0.0.26:22-10.0.0.1:52540.service - OpenSSH per-connection server daemon (10.0.0.1:52540). Sep 9 00:20:20.822130 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 52540 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:20:20.823978 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:20.828692 systemd-logind[1437]: New session 2 of user core. Sep 9 00:20:20.837748 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 00:20:20.896209 sshd[1563]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:20.904081 systemd[1]: sshd@1-10.0.0.26:22-10.0.0.1:52540.service: Deactivated successfully. Sep 9 00:20:20.906455 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 00:20:20.908608 systemd-logind[1437]: Session 2 logged out. Waiting for processes to exit. Sep 9 00:20:20.921437 systemd[1]: Started sshd@2-10.0.0.26:22-10.0.0.1:52546.service - OpenSSH per-connection server daemon (10.0.0.1:52546). Sep 9 00:20:20.922619 systemd-logind[1437]: Removed session 2. Sep 9 00:20:20.957979 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 52546 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:20:20.960045 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:20.964703 systemd-logind[1437]: New session 3 of user core. Sep 9 00:20:20.978703 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 00:20:21.031284 sshd[1570]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:21.051498 systemd[1]: sshd@2-10.0.0.26:22-10.0.0.1:52546.service: Deactivated successfully. Sep 9 00:20:21.054026 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 00:20:21.055844 systemd-logind[1437]: Session 3 logged out. Waiting for processes to exit. Sep 9 00:20:21.057717 systemd[1]: Started sshd@3-10.0.0.26:22-10.0.0.1:52548.service - OpenSSH per-connection server daemon (10.0.0.1:52548). Sep 9 00:20:21.059023 systemd-logind[1437]: Removed session 3. Sep 9 00:20:21.095965 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 52548 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:20:21.097946 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:21.102602 systemd-logind[1437]: New session 4 of user core. Sep 9 00:20:21.113754 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 00:20:21.171521 sshd[1577]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:21.186641 systemd[1]: sshd@3-10.0.0.26:22-10.0.0.1:52548.service: Deactivated successfully. Sep 9 00:20:21.188506 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 00:20:21.190062 systemd-logind[1437]: Session 4 logged out. Waiting for processes to exit. Sep 9 00:20:21.196882 systemd[1]: Started sshd@4-10.0.0.26:22-10.0.0.1:52560.service - OpenSSH per-connection server daemon (10.0.0.1:52560). 
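Each "Accepted publickey" entry above records the SHA256 fingerprint of the client key that authenticated as core; the matching public key should be one of those written to /home/core/.ssh/authorized_keys earlier in this boot. A quick cross-check, assuming a shell as the core user:

  # List the fingerprints of every key authorized for this account
  ssh-keygen -lf ~/.ssh/authorized_keys
  # Compare against the SHA256:... value sshd logged for the session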
Sep 9 00:20:21.197865 systemd-logind[1437]: Removed session 4. Sep 9 00:20:21.230335 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 52560 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:20:21.232494 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:21.236837 systemd-logind[1437]: New session 5 of user core. Sep 9 00:20:21.247695 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 00:20:21.308805 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 00:20:21.309151 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:20:21.328476 sudo[1587]: pam_unix(sudo:session): session closed for user root Sep 9 00:20:21.330703 sshd[1584]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:21.342800 systemd[1]: sshd@4-10.0.0.26:22-10.0.0.1:52560.service: Deactivated successfully. Sep 9 00:20:21.344637 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 00:20:21.346433 systemd-logind[1437]: Session 5 logged out. Waiting for processes to exit. Sep 9 00:20:21.347974 systemd[1]: Started sshd@5-10.0.0.26:22-10.0.0.1:52576.service - OpenSSH per-connection server daemon (10.0.0.1:52576). Sep 9 00:20:21.348916 systemd-logind[1437]: Removed session 5. Sep 9 00:20:21.410365 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 52576 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:20:21.412661 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:21.417615 systemd-logind[1437]: New session 6 of user core. Sep 9 00:20:21.424708 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 00:20:21.483243 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 00:20:21.483694 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:20:21.489053 sudo[1596]: pam_unix(sudo:session): session closed for user root Sep 9 00:20:21.497753 sudo[1595]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 9 00:20:21.498203 sudo[1595]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:20:21.518081 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 9 00:20:21.520352 auditctl[1599]: No rules Sep 9 00:20:21.522122 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 00:20:21.522607 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 9 00:20:21.524995 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 9 00:20:21.562528 augenrules[1617]: No rules Sep 9 00:20:21.564691 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 9 00:20:21.566027 sudo[1595]: pam_unix(sudo:session): session closed for user root Sep 9 00:20:21.569314 sshd[1592]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:21.584144 systemd[1]: sshd@5-10.0.0.26:22-10.0.0.1:52576.service: Deactivated successfully. Sep 9 00:20:21.586816 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 00:20:21.588928 systemd-logind[1437]: Session 6 logged out. Waiting for processes to exit. Sep 9 00:20:21.599977 systemd[1]: Started sshd@6-10.0.0.26:22-10.0.0.1:52590.service - OpenSSH per-connection server daemon (10.0.0.1:52590). 
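The audit-rules restart above ends with auditctl and augenrules both reporting "No rules", which follows from the sudo command earlier in this session removing the rule fragments under /etc/audit/rules.d. A sketch of the equivalent manual sequence on a host that uses augenrules:

  # Flush loaded kernel audit rules, rebuild from /etc/audit/rules.d, then list
  sudo auditctl -D
  sudo augenrules --load
  sudo auditctl -l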
Sep 9 00:20:21.601119 systemd-logind[1437]: Removed session 6. Sep 9 00:20:21.635039 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 52590 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:20:21.637171 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:21.642294 systemd-logind[1437]: New session 7 of user core. Sep 9 00:20:21.651844 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 00:20:21.708753 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 00:20:21.709143 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:20:21.741164 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 00:20:21.764835 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 00:20:21.765166 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 00:20:22.291808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:20:22.292147 systemd[1]: kubelet.service: Consumed 2.440s CPU time. Sep 9 00:20:22.307015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:20:22.338244 systemd[1]: Reloading requested from client PID 1672 ('systemctl') (unit session-7.scope)... Sep 9 00:20:22.338267 systemd[1]: Reloading... Sep 9 00:20:22.687597 zram_generator::config[1708]: No configuration found. Sep 9 00:20:22.861061 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:20:22.984519 systemd[1]: Reloading finished in 645 ms. Sep 9 00:20:23.074073 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 00:20:23.074179 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 00:20:23.074495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:20:23.076573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:20:23.343909 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:20:23.350844 (kubelet)[1758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:20:23.432485 kubelet[1758]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:20:23.432485 kubelet[1758]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 00:20:23.432485 kubelet[1758]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
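The reload above ("Reloading requested from client PID 1672 ('systemctl')") followed by the kubelet stop/start is the usual daemon-reload-then-restart pattern, and the warnings note that flags such as --container-runtime-endpoint now belong in the kubelet config file instead. A sketch of that sequence, plus a way to see where the flagged options come from:

  sudo systemctl daemon-reload
  sudo systemctl restart kubelet
  # Show the unit file plus drop-ins that supply the deprecated command-line flags
  systemctl cat kubelet.service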
Sep 9 00:20:23.432485 kubelet[1758]: I0909 00:20:23.431959 1758 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:20:24.222197 kubelet[1758]: I0909 00:20:24.222126 1758 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 00:20:24.222197 kubelet[1758]: I0909 00:20:24.222167 1758 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:20:24.222468 kubelet[1758]: I0909 00:20:24.222435 1758 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 00:20:24.249170 kubelet[1758]: I0909 00:20:24.249113 1758 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:20:24.260660 kubelet[1758]: E0909 00:20:24.260603 1758 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 00:20:24.260660 kubelet[1758]: I0909 00:20:24.260648 1758 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 00:20:24.635698 kubelet[1758]: I0909 00:20:24.635481 1758 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 00:20:24.636310 kubelet[1758]: I0909 00:20:24.636114 1758 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:20:24.636510 kubelet[1758]: I0909 00:20:24.636219 1758 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.26","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:20:24.636725 kubelet[1758]: I0909 00:20:24.636522 1758 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:20:24.636725 kubelet[1758]: I0909 00:20:24.636539 1758 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 00:20:24.636847 kubelet[1758]: I0909 
00:20:24.636829 1758 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:20:24.639901 kubelet[1758]: I0909 00:20:24.639830 1758 kubelet.go:480] "Attempting to sync node with API server" Sep 9 00:20:24.639901 kubelet[1758]: I0909 00:20:24.639858 1758 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:20:24.639901 kubelet[1758]: I0909 00:20:24.639902 1758 kubelet.go:386] "Adding apiserver pod source" Sep 9 00:20:24.641618 kubelet[1758]: I0909 00:20:24.641349 1758 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:20:24.641618 kubelet[1758]: E0909 00:20:24.641384 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:24.641618 kubelet[1758]: E0909 00:20:24.641410 1758 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:24.645750 kubelet[1758]: I0909 00:20:24.645716 1758 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 9 00:20:24.647208 kubelet[1758]: I0909 00:20:24.646807 1758 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 00:20:24.648361 kubelet[1758]: W0909 00:20:24.648322 1758 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 00:20:24.651884 kubelet[1758]: I0909 00:20:24.651854 1758 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:20:24.651957 kubelet[1758]: I0909 00:20:24.651937 1758 server.go:1289] "Started kubelet" Sep 9 00:20:24.654395 kubelet[1758]: I0909 00:20:24.653279 1758 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:20:24.654395 kubelet[1758]: I0909 00:20:24.653506 1758 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:20:24.654395 kubelet[1758]: I0909 00:20:24.654133 1758 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:20:24.654499 kubelet[1758]: I0909 00:20:24.654450 1758 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:20:24.654676 kubelet[1758]: I0909 00:20:24.654648 1758 server.go:317] "Adding debug handlers to kubelet server" Sep 9 00:20:24.656741 kubelet[1758]: I0909 00:20:24.656467 1758 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:20:24.658443 kubelet[1758]: E0909 00:20:24.658424 1758 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.26\" not found" Sep 9 00:20:24.659077 kubelet[1758]: I0909 00:20:24.658838 1758 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:20:24.659262 kubelet[1758]: I0909 00:20:24.658862 1758 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:20:24.659386 kubelet[1758]: I0909 00:20:24.659371 1758 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:20:24.659602 kubelet[1758]: I0909 00:20:24.659571 1758 factory.go:223] Registration of the systemd container factory successfully Sep 9 00:20:24.659836 kubelet[1758]: I0909 00:20:24.659724 1758 factory.go:221] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:20:24.659836 kubelet[1758]: E0909 00:20:24.659798 1758 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:20:24.663600 kubelet[1758]: I0909 00:20:24.662903 1758 factory.go:223] Registration of the containerd container factory successfully Sep 9 00:20:24.709451 kubelet[1758]: I0909 00:20:24.709404 1758 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:20:24.709451 kubelet[1758]: I0909 00:20:24.709433 1758 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:20:24.709451 kubelet[1758]: I0909 00:20:24.709455 1758 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:20:24.760074 kubelet[1758]: E0909 00:20:24.760034 1758 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.26\" not found" Sep 9 00:20:24.861079 kubelet[1758]: E0909 00:20:24.861025 1758 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.26\" not found" Sep 9 00:20:24.882354 kubelet[1758]: E0909 00:20:24.882307 1758 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.26\" not found" node="10.0.0.26" Sep 9 00:20:24.961260 kubelet[1758]: E0909 00:20:24.961195 1758 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.26\" not found" Sep 9 00:20:25.061858 kubelet[1758]: E0909 00:20:25.061798 1758 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.26\" not found" Sep 9 00:20:25.162664 kubelet[1758]: E0909 00:20:25.162591 1758 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.26\" not found" Sep 9 00:20:25.224379 kubelet[1758]: I0909 00:20:25.224167 1758 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Sep 9 00:20:25.224560 kubelet[1758]: I0909 00:20:25.224507 1758 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Sep 9 00:20:25.224630 kubelet[1758]: I0909 00:20:25.224528 1758 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Sep 9 00:20:25.224731 kubelet[1758]: I0909 00:20:25.224609 1758 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Sep 9 00:20:25.262846 kubelet[1758]: E0909 00:20:25.262818 1758 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.26\" not found" Sep 9 00:20:25.363778 kubelet[1758]: E0909 00:20:25.363714 1758 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.26\" not found" Sep 9 00:20:25.464401 kubelet[1758]: E0909 00:20:25.464341 1758 kubelet_node_status.go:466] "Error getting the current node from 
lister" err="node \"10.0.0.26\" not found" Sep 9 00:20:25.565320 kubelet[1758]: E0909 00:20:25.565136 1758 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.26\" not found" Sep 9 00:20:25.567730 kubelet[1758]: E0909 00:20:25.567691 1758 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.26" not found Sep 9 00:20:25.592469 kubelet[1758]: E0909 00:20:25.224660 1758 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": read tcp 10.0.0.26:41404->10.0.0.15:6443: use of closed network connection" event="&Event{ObjectMeta:{10.0.0.26.186375465f1eacca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.26,UID:10.0.0.26,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.26 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.26,},FirstTimestamp:2025-09-09 00:20:24.708631754 +0000 UTC m=+1.345547217,LastTimestamp:2025-09-09 00:20:24.708631754 +0000 UTC m=+1.345547217,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.26,}" Sep 9 00:20:25.642526 kubelet[1758]: I0909 00:20:25.642479 1758 apiserver.go:52] "Watching apiserver" Sep 9 00:20:25.643113 kubelet[1758]: E0909 00:20:25.642585 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:25.665677 kubelet[1758]: E0909 00:20:25.665645 1758 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.26\" not found" Sep 9 00:20:25.666047 kubelet[1758]: I0909 00:20:25.666015 1758 policy_none.go:49] "None policy: Start" Sep 9 00:20:25.666047 kubelet[1758]: I0909 00:20:25.666048 1758 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:20:25.666111 kubelet[1758]: I0909 00:20:25.666072 1758 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:20:25.676978 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 00:20:25.687961 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 00:20:25.700142 kubelet[1758]: I0909 00:20:25.700085 1758 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 00:20:25.701701 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 00:20:25.702524 kubelet[1758]: I0909 00:20:25.702466 1758 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 00:20:25.702599 kubelet[1758]: I0909 00:20:25.702564 1758 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 00:20:25.702645 kubelet[1758]: I0909 00:20:25.702621 1758 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 9 00:20:25.702645 kubelet[1758]: I0909 00:20:25.702642 1758 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 00:20:25.702836 kubelet[1758]: E0909 00:20:25.702793 1758 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:20:25.704131 kubelet[1758]: E0909 00:20:25.703552 1758 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 00:20:25.704131 kubelet[1758]: I0909 00:20:25.703781 1758 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:20:25.704131 kubelet[1758]: I0909 00:20:25.703800 1758 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:20:25.704284 kubelet[1758]: I0909 00:20:25.704271 1758 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:20:25.705403 kubelet[1758]: E0909 00:20:25.705381 1758 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 00:20:25.705586 kubelet[1758]: E0909 00:20:25.705535 1758 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.26\" not found" Sep 9 00:20:25.806027 kubelet[1758]: I0909 00:20:25.805982 1758 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.26" Sep 9 00:20:25.860430 kubelet[1758]: I0909 00:20:25.860267 1758 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:20:26.088245 kubelet[1758]: I0909 00:20:26.088197 1758 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.26" Sep 9 00:20:26.088245 kubelet[1758]: E0909 00:20:26.088240 1758 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.26\": node \"10.0.0.26\" not found" Sep 9 00:20:26.214974 kubelet[1758]: E0909 00:20:26.214912 1758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sttrq" podUID="e396a1a1-1baa-4688-8782-5ce8aaab6921" Sep 9 00:20:26.218116 sudo[1628]: pam_unix(sudo:session): session closed for user root Sep 9 00:20:26.220388 sshd[1625]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:26.223442 kubelet[1758]: E0909 00:20:26.223404 1758 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.26\" not found" Sep 9 00:20:26.226437 systemd-logind[1437]: Session 7 logged out. Waiting for processes to exit. Sep 9 00:20:26.227177 systemd[1]: sshd@6-10.0.0.26:22-10.0.0.1:52590.service: Deactivated successfully. Sep 9 00:20:26.230952 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 00:20:26.232302 systemd-logind[1437]: Removed session 7. Sep 9 00:20:26.239216 systemd[1]: Created slice kubepods-besteffort-podbbc0d169_3de1_4b71_ae49_2ab044be019d.slice - libcontainer container kubepods-besteffort-podbbc0d169_3de1_4b71_ae49_2ab044be019d.slice. Sep 9 00:20:26.253240 systemd[1]: Created slice kubepods-besteffort-pod51e3b28a_7154_4c18_962c_1e6d5c34bd74.slice - libcontainer container kubepods-besteffort-pod51e3b28a_7154_4c18_962c_1e6d5c34bd74.slice. 
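At this point the kubelet has admitted the kube-proxy and calico-node pods (the kubepods-besteffort slices above) while csi-node-driver-sttrq is held back until the CNI plugin initializes. The sandbox state can also be inspected on the node through containerd's CRI socket; a sketch assuming crictl is installed:

  # List pod sandboxes and containers via the CRI socket used in this log
  sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
  sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a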
Sep 9 00:20:26.269748 kubelet[1758]: I0909 00:20:26.269672 1758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e396a1a1-1baa-4688-8782-5ce8aaab6921-varrun\") pod \"csi-node-driver-sttrq\" (UID: \"e396a1a1-1baa-4688-8782-5ce8aaab6921\") " pod="calico-system/csi-node-driver-sttrq" Sep 9 00:20:26.269748 kubelet[1758]: I0909 00:20:26.269732 1758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/51e3b28a-7154-4c18-962c-1e6d5c34bd74-flexvol-driver-host\") pod \"calico-node-f9rr7\" (UID: \"51e3b28a-7154-4c18-962c-1e6d5c34bd74\") " pod="calico-system/calico-node-f9rr7" Sep 9 00:20:26.269748 kubelet[1758]: I0909 00:20:26.269755 1758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/51e3b28a-7154-4c18-962c-1e6d5c34bd74-policysync\") pod \"calico-node-f9rr7\" (UID: \"51e3b28a-7154-4c18-962c-1e6d5c34bd74\") " pod="calico-system/calico-node-f9rr7" Sep 9 00:20:26.269993 kubelet[1758]: I0909 00:20:26.269775 1758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51e3b28a-7154-4c18-962c-1e6d5c34bd74-tigera-ca-bundle\") pod \"calico-node-f9rr7\" (UID: \"51e3b28a-7154-4c18-962c-1e6d5c34bd74\") " pod="calico-system/calico-node-f9rr7" Sep 9 00:20:26.269993 kubelet[1758]: I0909 00:20:26.269795 1758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/51e3b28a-7154-4c18-962c-1e6d5c34bd74-var-lib-calico\") pod \"calico-node-f9rr7\" (UID: \"51e3b28a-7154-4c18-962c-1e6d5c34bd74\") " pod="calico-system/calico-node-f9rr7" Sep 9 00:20:26.269993 kubelet[1758]: I0909 00:20:26.269845 1758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51e3b28a-7154-4c18-962c-1e6d5c34bd74-xtables-lock\") pod \"calico-node-f9rr7\" (UID: \"51e3b28a-7154-4c18-962c-1e6d5c34bd74\") " pod="calico-system/calico-node-f9rr7" Sep 9 00:20:26.269993 kubelet[1758]: I0909 00:20:26.269920 1758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbc0d169-3de1-4b71-ae49-2ab044be019d-lib-modules\") pod \"kube-proxy-9lrz9\" (UID: \"bbc0d169-3de1-4b71-ae49-2ab044be019d\") " pod="kube-system/kube-proxy-9lrz9" Sep 9 00:20:26.269993 kubelet[1758]: I0909 00:20:26.269961 1758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7655\" (UniqueName: \"kubernetes.io/projected/bbc0d169-3de1-4b71-ae49-2ab044be019d-kube-api-access-r7655\") pod \"kube-proxy-9lrz9\" (UID: \"bbc0d169-3de1-4b71-ae49-2ab044be019d\") " pod="kube-system/kube-proxy-9lrz9" Sep 9 00:20:26.270156 kubelet[1758]: I0909 00:20:26.270049 1758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51e3b28a-7154-4c18-962c-1e6d5c34bd74-lib-modules\") pod \"calico-node-f9rr7\" (UID: \"51e3b28a-7154-4c18-962c-1e6d5c34bd74\") " pod="calico-system/calico-node-f9rr7" Sep 9 00:20:26.270156 kubelet[1758]: I0909 00:20:26.270138 1758 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/51e3b28a-7154-4c18-962c-1e6d5c34bd74-node-certs\") pod \"calico-node-f9rr7\" (UID: \"51e3b28a-7154-4c18-962c-1e6d5c34bd74\") " pod="calico-system/calico-node-f9rr7" Sep 9 00:20:26.270230 kubelet[1758]: I0909 00:20:26.270187 1758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-475wx\" (UniqueName: \"kubernetes.io/projected/51e3b28a-7154-4c18-962c-1e6d5c34bd74-kube-api-access-475wx\") pod \"calico-node-f9rr7\" (UID: \"51e3b28a-7154-4c18-962c-1e6d5c34bd74\") " pod="calico-system/calico-node-f9rr7" Sep 9 00:20:26.270230 kubelet[1758]: I0909 00:20:26.270213 1758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bbc0d169-3de1-4b71-ae49-2ab044be019d-kube-proxy\") pod \"kube-proxy-9lrz9\" (UID: \"bbc0d169-3de1-4b71-ae49-2ab044be019d\") " pod="kube-system/kube-proxy-9lrz9" Sep 9 00:20:26.270320 kubelet[1758]: I0909 00:20:26.270232 1758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/51e3b28a-7154-4c18-962c-1e6d5c34bd74-var-run-calico\") pod \"calico-node-f9rr7\" (UID: \"51e3b28a-7154-4c18-962c-1e6d5c34bd74\") " pod="calico-system/calico-node-f9rr7" Sep 9 00:20:26.270320 kubelet[1758]: I0909 00:20:26.270248 1758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e396a1a1-1baa-4688-8782-5ce8aaab6921-registration-dir\") pod \"csi-node-driver-sttrq\" (UID: \"e396a1a1-1baa-4688-8782-5ce8aaab6921\") " pod="calico-system/csi-node-driver-sttrq" Sep 9 00:20:26.270320 kubelet[1758]: I0909 00:20:26.270261 1758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e396a1a1-1baa-4688-8782-5ce8aaab6921-socket-dir\") pod \"csi-node-driver-sttrq\" (UID: \"e396a1a1-1baa-4688-8782-5ce8aaab6921\") " pod="calico-system/csi-node-driver-sttrq" Sep 9 00:20:26.270320 kubelet[1758]: I0909 00:20:26.270286 1758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqdh9\" (UniqueName: \"kubernetes.io/projected/e396a1a1-1baa-4688-8782-5ce8aaab6921-kube-api-access-kqdh9\") pod \"csi-node-driver-sttrq\" (UID: \"e396a1a1-1baa-4688-8782-5ce8aaab6921\") " pod="calico-system/csi-node-driver-sttrq" Sep 9 00:20:26.270457 kubelet[1758]: I0909 00:20:26.270320 1758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbc0d169-3de1-4b71-ae49-2ab044be019d-xtables-lock\") pod \"kube-proxy-9lrz9\" (UID: \"bbc0d169-3de1-4b71-ae49-2ab044be019d\") " pod="kube-system/kube-proxy-9lrz9" Sep 9 00:20:26.270457 kubelet[1758]: I0909 00:20:26.270351 1758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/51e3b28a-7154-4c18-962c-1e6d5c34bd74-cni-bin-dir\") pod \"calico-node-f9rr7\" (UID: \"51e3b28a-7154-4c18-962c-1e6d5c34bd74\") " pod="calico-system/calico-node-f9rr7" Sep 9 00:20:26.270457 kubelet[1758]: I0909 00:20:26.270372 1758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/51e3b28a-7154-4c18-962c-1e6d5c34bd74-cni-log-dir\") pod \"calico-node-f9rr7\" (UID: \"51e3b28a-7154-4c18-962c-1e6d5c34bd74\") " pod="calico-system/calico-node-f9rr7" Sep 9 00:20:26.270457 kubelet[1758]: I0909 00:20:26.270393 1758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/51e3b28a-7154-4c18-962c-1e6d5c34bd74-cni-net-dir\") pod \"calico-node-f9rr7\" (UID: \"51e3b28a-7154-4c18-962c-1e6d5c34bd74\") " pod="calico-system/calico-node-f9rr7" Sep 9 00:20:26.270457 kubelet[1758]: I0909 00:20:26.270440 1758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e396a1a1-1baa-4688-8782-5ce8aaab6921-kubelet-dir\") pod \"csi-node-driver-sttrq\" (UID: \"e396a1a1-1baa-4688-8782-5ce8aaab6921\") " pod="calico-system/csi-node-driver-sttrq" Sep 9 00:20:26.324150 kubelet[1758]: E0909 00:20:26.324102 1758 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.26\" not found" Sep 9 00:20:26.373240 kubelet[1758]: E0909 00:20:26.373103 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:26.373240 kubelet[1758]: W0909 00:20:26.373133 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:26.373240 kubelet[1758]: E0909 00:20:26.373157 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:26.377046 kubelet[1758]: E0909 00:20:26.377021 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:26.377046 kubelet[1758]: W0909 00:20:26.377037 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:26.377146 kubelet[1758]: E0909 00:20:26.377051 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:26.381871 kubelet[1758]: E0909 00:20:26.381848 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:26.381871 kubelet[1758]: W0909 00:20:26.381864 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:26.381871 kubelet[1758]: E0909 00:20:26.381877 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:26.383422 kubelet[1758]: E0909 00:20:26.383357 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:26.383422 kubelet[1758]: W0909 00:20:26.383387 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:26.383422 kubelet[1758]: E0909 00:20:26.383400 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:26.385505 kubelet[1758]: E0909 00:20:26.385487 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:26.385505 kubelet[1758]: W0909 00:20:26.385499 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:26.386611 kubelet[1758]: E0909 00:20:26.385510 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:26.424670 kubelet[1758]: E0909 00:20:26.424594 1758 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.26\" not found" Sep 9 00:20:26.526164 kubelet[1758]: I0909 00:20:26.526020 1758 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Sep 9 00:20:26.527090 containerd[1453]: time="2025-09-09T00:20:26.526929756Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 00:20:26.527657 kubelet[1758]: I0909 00:20:26.527290 1758 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Sep 9 00:20:26.550632 kubelet[1758]: E0909 00:20:26.550581 1758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:26.551532 containerd[1453]: time="2025-09-09T00:20:26.551467432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9lrz9,Uid:bbc0d169-3de1-4b71-ae49-2ab044be019d,Namespace:kube-system,Attempt:0,}" Sep 9 00:20:26.557347 containerd[1453]: time="2025-09-09T00:20:26.557276435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f9rr7,Uid:51e3b28a-7154-4c18-962c-1e6d5c34bd74,Namespace:calico-system,Attempt:0,}" Sep 9 00:20:26.643753 kubelet[1758]: E0909 00:20:26.643677 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:27.526192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3338793761.mount: Deactivated successfully. 
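Note on the repeated FlexVolume errors above: the kubelet's FlexVolume prober walks /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ and, for the nodeagent~uds directory, tries to run a driver binary named uds with the single argument init, expecting a JSON status object on stdout. No such binary exists on this node, so the call returns empty output, JSON unmarshalling fails with "unexpected end of JSON input", and the same three-line warning block is emitted again on every probe cycle; the noise is harmless unless a workload actually depends on that driver. For illustration only, a minimal stub that would satisfy the init handshake is sketched below; it follows the generic FlexVolume call convention and reuses the install path shown in the log, but it is an assumption-laden sketch, not part of this system.

// uds: minimal FlexVolume driver stub (illustrative sketch).
// If built and installed as
//   /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds
// (the path the kubelet is probing above), it answers the "init" call with the
// JSON the prober expects, which stops the "unexpected end of JSON input" spam.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the status object a FlexVolume driver prints on stdout.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println(`{"status":"Failure","message":"no command given"}`)
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Report success and declare that attach/detach is not implemented,
		// so the kubelet handles mounts itself.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		// Leave every other call to a real node agent; "Not supported" tells
		// the kubelet to fall back to its default handling.
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
	}
}

Simply removing the empty nodeagent~uds plugin directory would silence the prober just as well.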
Sep 9 00:20:27.536988 containerd[1453]: time="2025-09-09T00:20:27.536931945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:20:27.538105 containerd[1453]: time="2025-09-09T00:20:27.538073889Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:20:27.538664 containerd[1453]: time="2025-09-09T00:20:27.538616692Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 9 00:20:27.539584 containerd[1453]: time="2025-09-09T00:20:27.539525085Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 9 00:20:27.540673 containerd[1453]: time="2025-09-09T00:20:27.540633031Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:20:27.544780 containerd[1453]: time="2025-09-09T00:20:27.543904193Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 992.205908ms" Sep 9 00:20:27.545785 containerd[1453]: time="2025-09-09T00:20:27.545754302Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 988.379866ms" Sep 9 00:20:27.546401 containerd[1453]: time="2025-09-09T00:20:27.546361853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:20:27.644647 kubelet[1758]: E0909 00:20:27.644573 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:27.704111 kubelet[1758]: E0909 00:20:27.704038 1758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sttrq" podUID="e396a1a1-1baa-4688-8782-5ce8aaab6921" Sep 9 00:20:28.100277 containerd[1453]: time="2025-09-09T00:20:28.099964818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:20:28.100277 containerd[1453]: time="2025-09-09T00:20:28.100051832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:20:28.100277 containerd[1453]: time="2025-09-09T00:20:28.100066581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:20:28.100277 containerd[1453]: time="2025-09-09T00:20:28.100175612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:20:28.100277 containerd[1453]: time="2025-09-09T00:20:28.100235271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:20:28.100615 containerd[1453]: time="2025-09-09T00:20:28.100283046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:20:28.100615 containerd[1453]: time="2025-09-09T00:20:28.100303193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:20:28.100615 containerd[1453]: time="2025-09-09T00:20:28.100379038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:20:28.186686 systemd[1]: Started cri-containerd-b3c10cb639cf36a60b549730067521000eb2b07ab023525ed0cd4e5a26d42bc1.scope - libcontainer container b3c10cb639cf36a60b549730067521000eb2b07ab023525ed0cd4e5a26d42bc1. Sep 9 00:20:28.191349 systemd[1]: Started cri-containerd-c2bdb6a89980c74072bea601dae5560bb9b2fcf3ef5276506f300103506d3777.scope - libcontainer container c2bdb6a89980c74072bea601dae5560bb9b2fcf3ef5276506f300103506d3777. Sep 9 00:20:28.244084 containerd[1453]: time="2025-09-09T00:20:28.244034867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9lrz9,Uid:bbc0d169-3de1-4b71-ae49-2ab044be019d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2bdb6a89980c74072bea601dae5560bb9b2fcf3ef5276506f300103506d3777\"" Sep 9 00:20:28.244084 containerd[1453]: time="2025-09-09T00:20:28.244067933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f9rr7,Uid:51e3b28a-7154-4c18-962c-1e6d5c34bd74,Namespace:calico-system,Attempt:0,} returns sandbox id \"b3c10cb639cf36a60b549730067521000eb2b07ab023525ed0cd4e5a26d42bc1\"" Sep 9 00:20:28.247136 kubelet[1758]: E0909 00:20:28.246513 1758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:28.249944 containerd[1453]: time="2025-09-09T00:20:28.249915770Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 9 00:20:28.645381 kubelet[1758]: E0909 00:20:28.645323 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:29.646080 kubelet[1758]: E0909 00:20:29.646020 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:29.703834 kubelet[1758]: E0909 00:20:29.703767 1758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sttrq" podUID="e396a1a1-1baa-4688-8782-5ce8aaab6921" Sep 9 00:20:30.646216 kubelet[1758]: E0909 00:20:30.646164 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:31.236830 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3103778623.mount: Deactivated successfully. Sep 9 00:20:31.647522 kubelet[1758]: E0909 00:20:31.647339 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:31.704036 kubelet[1758]: E0909 00:20:31.703968 1758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sttrq" podUID="e396a1a1-1baa-4688-8782-5ce8aaab6921" Sep 9 00:20:31.894912 containerd[1453]: time="2025-09-09T00:20:31.894839510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:31.895626 containerd[1453]: time="2025-09-09T00:20:31.895568257Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626" Sep 9 00:20:31.897470 containerd[1453]: time="2025-09-09T00:20:31.897417918Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:31.900179 containerd[1453]: time="2025-09-09T00:20:31.900035992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:31.900844 containerd[1453]: time="2025-09-09T00:20:31.900811857Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 3.650864909s" Sep 9 00:20:31.900844 containerd[1453]: time="2025-09-09T00:20:31.900846743Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\"" Sep 9 00:20:31.902153 containerd[1453]: time="2025-09-09T00:20:31.902115045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 00:20:31.906866 containerd[1453]: time="2025-09-09T00:20:31.906806880Z" level=info msg="CreateContainer within sandbox \"c2bdb6a89980c74072bea601dae5560bb9b2fcf3ef5276506f300103506d3777\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:20:31.927381 containerd[1453]: time="2025-09-09T00:20:31.927317626Z" level=info msg="CreateContainer within sandbox \"c2bdb6a89980c74072bea601dae5560bb9b2fcf3ef5276506f300103506d3777\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a6cd23ca06df36c19aeaf4b25e9780460eb7dfaf608711d3e850d54f38c0f437\"" Sep 9 00:20:31.928247 containerd[1453]: time="2025-09-09T00:20:31.928204426Z" level=info msg="StartContainer for \"a6cd23ca06df36c19aeaf4b25e9780460eb7dfaf608711d3e850d54f38c0f437\"" Sep 9 00:20:31.992713 systemd[1]: Started cri-containerd-a6cd23ca06df36c19aeaf4b25e9780460eb7dfaf608711d3e850d54f38c0f437.scope - libcontainer container a6cd23ca06df36c19aeaf4b25e9780460eb7dfaf608711d3e850d54f38c0f437. 
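Note on the kube-proxy image pull above: the "in 3.650864909s" reported by containerd is simply the interval between the PullImage request (logged at 2025-09-09T00:20:28.249915770Z) and the Pulled event (logged at 2025-09-09T00:20:31.900811857Z); once the image is present, the kubelet immediately issues CreateContainer and StartContainer against the sandbox returned earlier, and systemd starts the matching cri-containerd-<id>.scope unit. A small, purely illustrative Go check of that arithmetic, using the two timestamps copied from the log entries above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken verbatim from the containerd entries above.
	requested, _ := time.Parse(time.RFC3339Nano, "2025-09-09T00:20:28.249915770Z") // PullImage request
	pulled, _ := time.Parse(time.RFC3339Nano, "2025-09-09T00:20:31.900811857Z")    // Pulled event

	// Prints ~3.650896087s; the ~30µs excess over the reported 3.650864909s is
	// presumably just the delay between measuring the duration and writing the log line.
	fmt.Println(pulled.Sub(requested))
}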
Sep 9 00:20:32.144481 containerd[1453]: time="2025-09-09T00:20:32.144424383Z" level=info msg="StartContainer for \"a6cd23ca06df36c19aeaf4b25e9780460eb7dfaf608711d3e850d54f38c0f437\" returns successfully" Sep 9 00:20:32.648033 kubelet[1758]: E0909 00:20:32.647974 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:32.846064 kubelet[1758]: E0909 00:20:32.846015 1758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:32.917506 kubelet[1758]: E0909 00:20:32.917331 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.917506 kubelet[1758]: W0909 00:20:32.917368 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.917506 kubelet[1758]: E0909 00:20:32.917398 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:32.917931 kubelet[1758]: E0909 00:20:32.917904 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.917931 kubelet[1758]: W0909 00:20:32.917918 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.917931 kubelet[1758]: E0909 00:20:32.917929 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:32.918287 kubelet[1758]: E0909 00:20:32.918273 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.918287 kubelet[1758]: W0909 00:20:32.918285 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.918366 kubelet[1758]: E0909 00:20:32.918294 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:32.918615 kubelet[1758]: E0909 00:20:32.918599 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.918615 kubelet[1758]: W0909 00:20:32.918611 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.918707 kubelet[1758]: E0909 00:20:32.918621 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:32.919046 kubelet[1758]: E0909 00:20:32.919021 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.919046 kubelet[1758]: W0909 00:20:32.919033 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.919046 kubelet[1758]: E0909 00:20:32.919044 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:32.919308 kubelet[1758]: E0909 00:20:32.919287 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.919308 kubelet[1758]: W0909 00:20:32.919298 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.919308 kubelet[1758]: E0909 00:20:32.919308 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:32.919575 kubelet[1758]: E0909 00:20:32.919561 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.919575 kubelet[1758]: W0909 00:20:32.919572 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.919644 kubelet[1758]: E0909 00:20:32.919581 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:32.919885 kubelet[1758]: E0909 00:20:32.919859 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.919885 kubelet[1758]: W0909 00:20:32.919871 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.919885 kubelet[1758]: E0909 00:20:32.919881 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:32.920156 kubelet[1758]: E0909 00:20:32.920141 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.920156 kubelet[1758]: W0909 00:20:32.920152 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.920221 kubelet[1758]: E0909 00:20:32.920163 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:32.920426 kubelet[1758]: E0909 00:20:32.920406 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.920426 kubelet[1758]: W0909 00:20:32.920417 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.920426 kubelet[1758]: E0909 00:20:32.920427 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:32.920712 kubelet[1758]: E0909 00:20:32.920697 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.920712 kubelet[1758]: W0909 00:20:32.920707 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.920805 kubelet[1758]: E0909 00:20:32.920716 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:32.920973 kubelet[1758]: E0909 00:20:32.920956 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.920973 kubelet[1758]: W0909 00:20:32.920967 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.921054 kubelet[1758]: E0909 00:20:32.920978 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:32.921295 kubelet[1758]: E0909 00:20:32.921280 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.921295 kubelet[1758]: W0909 00:20:32.921291 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.921359 kubelet[1758]: E0909 00:20:32.921302 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:32.921571 kubelet[1758]: E0909 00:20:32.921556 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.921571 kubelet[1758]: W0909 00:20:32.921568 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.921644 kubelet[1758]: E0909 00:20:32.921578 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:32.921834 kubelet[1758]: E0909 00:20:32.921818 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.921834 kubelet[1758]: W0909 00:20:32.921829 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.921892 kubelet[1758]: E0909 00:20:32.921838 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:32.922068 kubelet[1758]: E0909 00:20:32.922054 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.922068 kubelet[1758]: W0909 00:20:32.922064 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.922143 kubelet[1758]: E0909 00:20:32.922073 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:32.922433 kubelet[1758]: E0909 00:20:32.922385 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.922433 kubelet[1758]: W0909 00:20:32.922419 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.922518 kubelet[1758]: E0909 00:20:32.922450 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:32.922820 kubelet[1758]: E0909 00:20:32.922773 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.922820 kubelet[1758]: W0909 00:20:32.922788 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.922820 kubelet[1758]: E0909 00:20:32.922797 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:32.923063 kubelet[1758]: E0909 00:20:32.923044 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.923063 kubelet[1758]: W0909 00:20:32.923058 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.923126 kubelet[1758]: E0909 00:20:32.923092 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:32.923392 kubelet[1758]: E0909 00:20:32.923362 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.923392 kubelet[1758]: W0909 00:20:32.923381 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.923392 kubelet[1758]: E0909 00:20:32.923392 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:32.923807 kubelet[1758]: E0909 00:20:32.923777 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.923807 kubelet[1758]: W0909 00:20:32.923794 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.923807 kubelet[1758]: E0909 00:20:32.923805 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:32.924372 kubelet[1758]: E0909 00:20:32.924341 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.924372 kubelet[1758]: W0909 00:20:32.924355 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.924372 kubelet[1758]: E0909 00:20:32.924367 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:32.924945 kubelet[1758]: E0909 00:20:32.924896 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.924945 kubelet[1758]: W0909 00:20:32.924931 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.925024 kubelet[1758]: E0909 00:20:32.924960 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:32.925261 kubelet[1758]: E0909 00:20:32.925238 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.925261 kubelet[1758]: W0909 00:20:32.925253 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.925354 kubelet[1758]: E0909 00:20:32.925265 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:32.925595 kubelet[1758]: E0909 00:20:32.925573 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.925595 kubelet[1758]: W0909 00:20:32.925587 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.925595 kubelet[1758]: E0909 00:20:32.925599 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:32.925925 kubelet[1758]: E0909 00:20:32.925905 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.925925 kubelet[1758]: W0909 00:20:32.925920 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.925992 kubelet[1758]: E0909 00:20:32.925932 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:32.926202 kubelet[1758]: E0909 00:20:32.926184 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.926202 kubelet[1758]: W0909 00:20:32.926199 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.926255 kubelet[1758]: E0909 00:20:32.926210 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:32.926478 kubelet[1758]: E0909 00:20:32.926461 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.926478 kubelet[1758]: W0909 00:20:32.926475 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.926531 kubelet[1758]: E0909 00:20:32.926490 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:32.926860 kubelet[1758]: E0909 00:20:32.926834 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.926860 kubelet[1758]: W0909 00:20:32.926850 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.926998 kubelet[1758]: E0909 00:20:32.926862 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:32.927179 kubelet[1758]: E0909 00:20:32.927151 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.927179 kubelet[1758]: W0909 00:20:32.927167 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.927179 kubelet[1758]: E0909 00:20:32.927179 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:32.927562 kubelet[1758]: E0909 00:20:32.927522 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.927589 kubelet[1758]: W0909 00:20:32.927572 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.927615 kubelet[1758]: E0909 00:20:32.927587 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:32.927887 kubelet[1758]: E0909 00:20:32.927860 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:32.927887 kubelet[1758]: W0909 00:20:32.927877 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:32.927943 kubelet[1758]: E0909 00:20:32.927890 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:33.648460 kubelet[1758]: E0909 00:20:33.648370 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:33.703300 kubelet[1758]: E0909 00:20:33.703196 1758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sttrq" podUID="e396a1a1-1baa-4688-8782-5ce8aaab6921" Sep 9 00:20:33.847908 kubelet[1758]: E0909 00:20:33.847858 1758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:33.929532 kubelet[1758]: E0909 00:20:33.929365 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.929532 kubelet[1758]: W0909 00:20:33.929391 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.929532 kubelet[1758]: E0909 00:20:33.929413 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:33.929884 kubelet[1758]: E0909 00:20:33.929852 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.929884 kubelet[1758]: W0909 00:20:33.929864 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.929884 kubelet[1758]: E0909 00:20:33.929873 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:33.930407 kubelet[1758]: E0909 00:20:33.930336 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.930407 kubelet[1758]: W0909 00:20:33.930377 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.930640 kubelet[1758]: E0909 00:20:33.930415 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:33.930818 kubelet[1758]: E0909 00:20:33.930803 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.930818 kubelet[1758]: W0909 00:20:33.930814 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.930888 kubelet[1758]: E0909 00:20:33.930825 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:33.931141 kubelet[1758]: E0909 00:20:33.931103 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.931141 kubelet[1758]: W0909 00:20:33.931124 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.931141 kubelet[1758]: E0909 00:20:33.931139 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:33.931454 kubelet[1758]: E0909 00:20:33.931437 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.931454 kubelet[1758]: W0909 00:20:33.931451 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.931514 kubelet[1758]: E0909 00:20:33.931462 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:33.931753 kubelet[1758]: E0909 00:20:33.931737 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.931753 kubelet[1758]: W0909 00:20:33.931750 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.931853 kubelet[1758]: E0909 00:20:33.931760 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:33.932017 kubelet[1758]: E0909 00:20:33.931999 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.932017 kubelet[1758]: W0909 00:20:33.932011 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.932086 kubelet[1758]: E0909 00:20:33.932021 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:33.932325 kubelet[1758]: E0909 00:20:33.932307 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.932325 kubelet[1758]: W0909 00:20:33.932322 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.932392 kubelet[1758]: E0909 00:20:33.932334 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:33.932611 kubelet[1758]: E0909 00:20:33.932595 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.932611 kubelet[1758]: W0909 00:20:33.932608 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.932678 kubelet[1758]: E0909 00:20:33.932618 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:33.932856 kubelet[1758]: E0909 00:20:33.932840 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.932856 kubelet[1758]: W0909 00:20:33.932853 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.932937 kubelet[1758]: E0909 00:20:33.932863 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:33.933127 kubelet[1758]: E0909 00:20:33.933110 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.933166 kubelet[1758]: W0909 00:20:33.933130 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.933166 kubelet[1758]: E0909 00:20:33.933140 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:33.933395 kubelet[1758]: E0909 00:20:33.933380 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.933395 kubelet[1758]: W0909 00:20:33.933392 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.933469 kubelet[1758]: E0909 00:20:33.933402 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:33.934040 kubelet[1758]: E0909 00:20:33.934013 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.934040 kubelet[1758]: W0909 00:20:33.934026 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.934040 kubelet[1758]: E0909 00:20:33.934037 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:33.934575 kubelet[1758]: E0909 00:20:33.934371 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.934575 kubelet[1758]: W0909 00:20:33.934573 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.934673 kubelet[1758]: E0909 00:20:33.934591 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:33.934878 kubelet[1758]: E0909 00:20:33.934861 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.934878 kubelet[1758]: W0909 00:20:33.934875 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.934931 kubelet[1758]: E0909 00:20:33.934887 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:33.935146 kubelet[1758]: E0909 00:20:33.935118 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.935146 kubelet[1758]: W0909 00:20:33.935134 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.935146 kubelet[1758]: E0909 00:20:33.935144 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:33.935372 kubelet[1758]: E0909 00:20:33.935356 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.935372 kubelet[1758]: W0909 00:20:33.935367 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.935424 kubelet[1758]: E0909 00:20:33.935375 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:33.935670 kubelet[1758]: E0909 00:20:33.935653 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.935670 kubelet[1758]: W0909 00:20:33.935667 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.935722 kubelet[1758]: E0909 00:20:33.935680 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:33.935916 kubelet[1758]: E0909 00:20:33.935899 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.935916 kubelet[1758]: W0909 00:20:33.935912 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.935966 kubelet[1758]: E0909 00:20:33.935923 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:33.936227 kubelet[1758]: E0909 00:20:33.936211 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.936227 kubelet[1758]: W0909 00:20:33.936225 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.936298 kubelet[1758]: E0909 00:20:33.936236 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:33.936504 kubelet[1758]: E0909 00:20:33.936481 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.936504 kubelet[1758]: W0909 00:20:33.936494 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.936504 kubelet[1758]: E0909 00:20:33.936504 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:33.936826 kubelet[1758]: E0909 00:20:33.936799 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.936826 kubelet[1758]: W0909 00:20:33.936814 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.936826 kubelet[1758]: E0909 00:20:33.936824 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:33.937054 kubelet[1758]: E0909 00:20:33.937038 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.937054 kubelet[1758]: W0909 00:20:33.937050 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.937182 kubelet[1758]: E0909 00:20:33.937060 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:33.937313 kubelet[1758]: E0909 00:20:33.937298 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.937313 kubelet[1758]: W0909 00:20:33.937310 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.937378 kubelet[1758]: E0909 00:20:33.937319 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:33.937653 kubelet[1758]: E0909 00:20:33.937633 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.937683 kubelet[1758]: W0909 00:20:33.937676 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.937705 kubelet[1758]: E0909 00:20:33.937689 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:33.938040 kubelet[1758]: E0909 00:20:33.938020 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.938040 kubelet[1758]: W0909 00:20:33.938037 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.938111 kubelet[1758]: E0909 00:20:33.938049 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:33.938311 kubelet[1758]: E0909 00:20:33.938294 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.938311 kubelet[1758]: W0909 00:20:33.938309 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.938368 kubelet[1758]: E0909 00:20:33.938320 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:33.938612 kubelet[1758]: E0909 00:20:33.938584 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.938612 kubelet[1758]: W0909 00:20:33.938599 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.938612 kubelet[1758]: E0909 00:20:33.938610 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:33.938889 kubelet[1758]: E0909 00:20:33.938866 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.938889 kubelet[1758]: W0909 00:20:33.938880 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.938889 kubelet[1758]: E0909 00:20:33.938891 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:33.939248 kubelet[1758]: E0909 00:20:33.939220 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.939248 kubelet[1758]: W0909 00:20:33.939236 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.939248 kubelet[1758]: E0909 00:20:33.939247 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:33.939475 kubelet[1758]: E0909 00:20:33.939459 1758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:33.939475 kubelet[1758]: W0909 00:20:33.939472 1758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:33.939590 kubelet[1758]: E0909 00:20:33.939482 1758 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:34.150665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3566670577.mount: Deactivated successfully. Sep 9 00:20:34.242124 containerd[1453]: time="2025-09-09T00:20:34.242033535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:34.243655 containerd[1453]: time="2025-09-09T00:20:34.243567287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5939501" Sep 9 00:20:34.245110 containerd[1453]: time="2025-09-09T00:20:34.245067524Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:34.248534 containerd[1453]: time="2025-09-09T00:20:34.248463125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:34.249457 containerd[1453]: time="2025-09-09T00:20:34.249363734Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 2.347191185s" Sep 9 00:20:34.249457 containerd[1453]: time="2025-09-09T00:20:34.249429671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 9 00:20:34.261394 containerd[1453]: time="2025-09-09T00:20:34.261304966Z" level=info msg="CreateContainer within sandbox \"b3c10cb639cf36a60b549730067521000eb2b07ab023525ed0cd4e5a26d42bc1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 00:20:34.348259 containerd[1453]: time="2025-09-09T00:20:34.348132409Z" level=info msg="CreateContainer within sandbox \"b3c10cb639cf36a60b549730067521000eb2b07ab023525ed0cd4e5a26d42bc1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6f98763caf46af9260abd00f8d546ecf64cc10a1dc9547a8771a8849c57624b2\"" Sep 9 00:20:34.349374 containerd[1453]: time="2025-09-09T00:20:34.349313532Z" level=info msg="StartContainer for \"6f98763caf46af9260abd00f8d546ecf64cc10a1dc9547a8771a8849c57624b2\"" Sep 9 00:20:34.402721 systemd[1]: Started cri-containerd-6f98763caf46af9260abd00f8d546ecf64cc10a1dc9547a8771a8849c57624b2.scope - libcontainer container 
6f98763caf46af9260abd00f8d546ecf64cc10a1dc9547a8771a8849c57624b2. Sep 9 00:20:34.588226 systemd[1]: cri-containerd-6f98763caf46af9260abd00f8d546ecf64cc10a1dc9547a8771a8849c57624b2.scope: Deactivated successfully. Sep 9 00:20:34.634016 containerd[1453]: time="2025-09-09T00:20:34.633914693Z" level=info msg="StartContainer for \"6f98763caf46af9260abd00f8d546ecf64cc10a1dc9547a8771a8849c57624b2\" returns successfully" Sep 9 00:20:34.649071 kubelet[1758]: E0909 00:20:34.649024 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:34.885852 kubelet[1758]: I0909 00:20:34.885630 1758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9lrz9" podStartSLOduration=5.23291199 podStartE2EDuration="8.885601221s" podCreationTimestamp="2025-09-09 00:20:26 +0000 UTC" firstStartedPulling="2025-09-09 00:20:28.249186057 +0000 UTC m=+4.886101521" lastFinishedPulling="2025-09-09 00:20:31.901875289 +0000 UTC m=+8.538790752" observedRunningTime="2025-09-09 00:20:32.857629272 +0000 UTC m=+9.494544745" watchObservedRunningTime="2025-09-09 00:20:34.885601221 +0000 UTC m=+11.522516695" Sep 9 00:20:35.048361 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f98763caf46af9260abd00f8d546ecf64cc10a1dc9547a8771a8849c57624b2-rootfs.mount: Deactivated successfully. Sep 9 00:20:35.250501 containerd[1453]: time="2025-09-09T00:20:35.250392243Z" level=info msg="shim disconnected" id=6f98763caf46af9260abd00f8d546ecf64cc10a1dc9547a8771a8849c57624b2 namespace=k8s.io Sep 9 00:20:35.250501 containerd[1453]: time="2025-09-09T00:20:35.250490080Z" level=warning msg="cleaning up after shim disconnected" id=6f98763caf46af9260abd00f8d546ecf64cc10a1dc9547a8771a8849c57624b2 namespace=k8s.io Sep 9 00:20:35.250501 containerd[1453]: time="2025-09-09T00:20:35.250506995Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:20:35.650135 kubelet[1758]: E0909 00:20:35.649907 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:35.703708 kubelet[1758]: E0909 00:20:35.703603 1758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sttrq" podUID="e396a1a1-1baa-4688-8782-5ce8aaab6921" Sep 9 00:20:35.855273 containerd[1453]: time="2025-09-09T00:20:35.855219897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 00:20:36.650962 kubelet[1758]: E0909 00:20:36.650863 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:37.651564 kubelet[1758]: E0909 00:20:37.651488 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:37.703617 kubelet[1758]: E0909 00:20:37.703515 1758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sttrq" podUID="e396a1a1-1baa-4688-8782-5ce8aaab6921" Sep 9 00:20:38.652427 kubelet[1758]: E0909 00:20:38.652331 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Sep 9 00:20:39.653166 kubelet[1758]: E0909 00:20:39.653108 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:39.703901 kubelet[1758]: E0909 00:20:39.703813 1758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sttrq" podUID="e396a1a1-1baa-4688-8782-5ce8aaab6921" Sep 9 00:20:40.430838 containerd[1453]: time="2025-09-09T00:20:40.430680346Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:40.431659 containerd[1453]: time="2025-09-09T00:20:40.431590459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 9 00:20:40.433318 containerd[1453]: time="2025-09-09T00:20:40.433289703Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:40.437033 containerd[1453]: time="2025-09-09T00:20:40.436988265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:40.437888 containerd[1453]: time="2025-09-09T00:20:40.437836699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.582571441s" Sep 9 00:20:40.437888 containerd[1453]: time="2025-09-09T00:20:40.437889625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 9 00:20:40.444846 containerd[1453]: time="2025-09-09T00:20:40.444736721Z" level=info msg="CreateContainer within sandbox \"b3c10cb639cf36a60b549730067521000eb2b07ab023525ed0cd4e5a26d42bc1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 00:20:40.464637 containerd[1453]: time="2025-09-09T00:20:40.464567353Z" level=info msg="CreateContainer within sandbox \"b3c10cb639cf36a60b549730067521000eb2b07ab023525ed0cd4e5a26d42bc1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3f2a6f64f6c5706b68dd43b2eb0cb98e8b77b6b656c273661649eec2655de369\"" Sep 9 00:20:40.465427 containerd[1453]: time="2025-09-09T00:20:40.465378401Z" level=info msg="StartContainer for \"3f2a6f64f6c5706b68dd43b2eb0cb98e8b77b6b656c273661649eec2655de369\"" Sep 9 00:20:40.527962 systemd[1]: Started cri-containerd-3f2a6f64f6c5706b68dd43b2eb0cb98e8b77b6b656c273661649eec2655de369.scope - libcontainer container 3f2a6f64f6c5706b68dd43b2eb0cb98e8b77b6b656c273661649eec2655de369. 
Sep 9 00:20:40.570304 containerd[1453]: time="2025-09-09T00:20:40.570095861Z" level=info msg="StartContainer for \"3f2a6f64f6c5706b68dd43b2eb0cb98e8b77b6b656c273661649eec2655de369\" returns successfully" Sep 9 00:20:40.653333 kubelet[1758]: E0909 00:20:40.653255 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:41.654249 kubelet[1758]: E0909 00:20:41.654162 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:41.703999 kubelet[1758]: E0909 00:20:41.703900 1758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sttrq" podUID="e396a1a1-1baa-4688-8782-5ce8aaab6921" Sep 9 00:20:42.655018 kubelet[1758]: E0909 00:20:42.654933 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:43.470746 containerd[1453]: time="2025-09-09T00:20:43.470691499Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:20:43.474646 systemd[1]: cri-containerd-3f2a6f64f6c5706b68dd43b2eb0cb98e8b77b6b656c273661649eec2655de369.scope: Deactivated successfully. Sep 9 00:20:43.474954 systemd[1]: cri-containerd-3f2a6f64f6c5706b68dd43b2eb0cb98e8b77b6b656c273661649eec2655de369.scope: Consumed 2.214s CPU time. Sep 9 00:20:43.499792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f2a6f64f6c5706b68dd43b2eb0cb98e8b77b6b656c273661649eec2655de369-rootfs.mount: Deactivated successfully. Sep 9 00:20:43.563121 kubelet[1758]: I0909 00:20:43.563081 1758 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 00:20:43.655522 kubelet[1758]: E0909 00:20:43.655460 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:43.708798 systemd[1]: Created slice kubepods-besteffort-pode396a1a1_1baa_4688_8782_5ce8aaab6921.slice - libcontainer container kubepods-besteffort-pode396a1a1_1baa_4688_8782_5ce8aaab6921.slice. 
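
The csi-node-driver-sttrq pod stays pending because the runtime keeps reporting "cni plugin not initialized" until a network configuration shows up in /etc/cni/net.d (the install-cni container writes calico-kubeconfig and the config, and calico-node later makes it usable). A rough standalone sketch of that readiness check, assuming the standard config directory and file extensions rather than containerd's actual implementation:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        entries, err := os.ReadDir("/etc/cni/net.d")
        if err != nil {
            fmt.Println("cni config dir unreadable:", err)
            return
        }
        var confs []string
        for _, e := range entries {
            // CNI loaders look for .conf, .conflist, and .json files.
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                confs = append(confs, e.Name())
            }
        }
        if len(confs) == 0 {
            fmt.Println("no network config found in /etc/cni/net.d: cni plugin not initialized")
            return
        }
        fmt.Println("CNI configs:", confs)
    }
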
Sep 9 00:20:43.895392 containerd[1453]: time="2025-09-09T00:20:43.895181375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sttrq,Uid:e396a1a1-1baa-4688-8782-5ce8aaab6921,Namespace:calico-system,Attempt:0,}" Sep 9 00:20:44.188813 containerd[1453]: time="2025-09-09T00:20:44.188615090Z" level=info msg="shim disconnected" id=3f2a6f64f6c5706b68dd43b2eb0cb98e8b77b6b656c273661649eec2655de369 namespace=k8s.io Sep 9 00:20:44.188813 containerd[1453]: time="2025-09-09T00:20:44.188690773Z" level=warning msg="cleaning up after shim disconnected" id=3f2a6f64f6c5706b68dd43b2eb0cb98e8b77b6b656c273661649eec2655de369 namespace=k8s.io Sep 9 00:20:44.188813 containerd[1453]: time="2025-09-09T00:20:44.188700896Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:20:44.272752 containerd[1453]: time="2025-09-09T00:20:44.272654131Z" level=error msg="Failed to destroy network for sandbox \"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:44.273292 containerd[1453]: time="2025-09-09T00:20:44.273239108Z" level=error msg="encountered an error cleaning up failed sandbox \"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:44.273359 containerd[1453]: time="2025-09-09T00:20:44.273327549Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sttrq,Uid:e396a1a1-1baa-4688-8782-5ce8aaab6921,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:44.273739 kubelet[1758]: E0909 00:20:44.273672 1758 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:44.273950 kubelet[1758]: E0909 00:20:44.273770 1758 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sttrq" Sep 9 00:20:44.273950 kubelet[1758]: E0909 00:20:44.273800 1758 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-sttrq" Sep 9 00:20:44.273950 kubelet[1758]: E0909 00:20:44.273873 1758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sttrq_calico-system(e396a1a1-1baa-4688-8782-5ce8aaab6921)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sttrq_calico-system(e396a1a1-1baa-4688-8782-5ce8aaab6921)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sttrq" podUID="e396a1a1-1baa-4688-8782-5ce8aaab6921" Sep 9 00:20:44.274485 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9-shm.mount: Deactivated successfully. Sep 9 00:20:44.640646 kubelet[1758]: E0909 00:20:44.640536 1758 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:44.657415 kubelet[1758]: E0909 00:20:44.657317 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:44.876221 kubelet[1758]: I0909 00:20:44.876179 1758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" Sep 9 00:20:44.877050 containerd[1453]: time="2025-09-09T00:20:44.877009584Z" level=info msg="StopPodSandbox for \"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\"" Sep 9 00:20:44.877506 containerd[1453]: time="2025-09-09T00:20:44.877244972Z" level=info msg="Ensure that sandbox 1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9 in task-service has been cleanup successfully" Sep 9 00:20:44.880389 containerd[1453]: time="2025-09-09T00:20:44.880352423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 00:20:44.912173 containerd[1453]: time="2025-09-09T00:20:44.911975143Z" level=error msg="StopPodSandbox for \"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\" failed" error="failed to destroy network for sandbox \"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:44.912371 kubelet[1758]: E0909 00:20:44.912304 1758 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" Sep 9 00:20:44.912666 kubelet[1758]: E0909 00:20:44.912386 1758 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9"} Sep 9 00:20:44.912666 kubelet[1758]: E0909 00:20:44.912453 1758 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"e396a1a1-1baa-4688-8782-5ce8aaab6921\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:20:44.912666 kubelet[1758]: E0909 00:20:44.912481 1758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e396a1a1-1baa-4688-8782-5ce8aaab6921\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sttrq" podUID="e396a1a1-1baa-4688-8782-5ce8aaab6921" Sep 9 00:20:45.658313 kubelet[1758]: E0909 00:20:45.658225 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:46.658763 kubelet[1758]: E0909 00:20:46.658683 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:47.659728 kubelet[1758]: E0909 00:20:47.659639 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:48.660379 kubelet[1758]: E0909 00:20:48.660299 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:48.854478 systemd[1]: Created slice kubepods-besteffort-pode7384407_c026_4eac_951c_aa486b39e26e.slice - libcontainer container kubepods-besteffort-pode7384407_c026_4eac_951c_aa486b39e26e.slice. 
Sep 9 00:20:48.953570 kubelet[1758]: I0909 00:20:48.953417 1758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmq52\" (UniqueName: \"kubernetes.io/projected/e7384407-c026-4eac-951c-aa486b39e26e-kube-api-access-xmq52\") pod \"nginx-deployment-7fcdb87857-pczfw\" (UID: \"e7384407-c026-4eac-951c-aa486b39e26e\") " pod="default/nginx-deployment-7fcdb87857-pczfw" Sep 9 00:20:49.173067 containerd[1453]: time="2025-09-09T00:20:49.172886723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-pczfw,Uid:e7384407-c026-4eac-951c-aa486b39e26e,Namespace:default,Attempt:0,}" Sep 9 00:20:49.662729 kubelet[1758]: E0909 00:20:49.662372 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:50.599582 containerd[1453]: time="2025-09-09T00:20:50.597146424Z" level=error msg="Failed to destroy network for sandbox \"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:50.600105 containerd[1453]: time="2025-09-09T00:20:50.600046095Z" level=error msg="encountered an error cleaning up failed sandbox \"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:50.600184 containerd[1453]: time="2025-09-09T00:20:50.600146566Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-pczfw,Uid:e7384407-c026-4eac-951c-aa486b39e26e,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:50.604435 kubelet[1758]: E0909 00:20:50.600443 1758 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:50.604435 kubelet[1758]: E0909 00:20:50.600533 1758 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-pczfw" Sep 9 00:20:50.604435 kubelet[1758]: E0909 00:20:50.600580 1758 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-pczfw" Sep 9 00:20:50.602882 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55-shm.mount: Deactivated successfully. Sep 9 00:20:50.605331 kubelet[1758]: E0909 00:20:50.600648 1758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-pczfw_default(e7384407-c026-4eac-951c-aa486b39e26e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-pczfw_default(e7384407-c026-4eac-951c-aa486b39e26e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-pczfw" podUID="e7384407-c026-4eac-951c-aa486b39e26e" Sep 9 00:20:50.663191 kubelet[1758]: E0909 00:20:50.663133 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:50.957874 kubelet[1758]: I0909 00:20:50.957755 1758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" Sep 9 00:20:50.958742 containerd[1453]: time="2025-09-09T00:20:50.958686504Z" level=info msg="StopPodSandbox for \"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\"" Sep 9 00:20:50.959014 containerd[1453]: time="2025-09-09T00:20:50.958974291Z" level=info msg="Ensure that sandbox 541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55 in task-service has been cleanup successfully" Sep 9 00:20:51.129767 containerd[1453]: time="2025-09-09T00:20:51.128325458Z" level=error msg="StopPodSandbox for \"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\" failed" error="failed to destroy network for sandbox \"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:51.130033 kubelet[1758]: E0909 00:20:51.129413 1758 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" Sep 9 00:20:51.130033 kubelet[1758]: E0909 00:20:51.129918 1758 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55"} Sep 9 00:20:51.130033 kubelet[1758]: E0909 00:20:51.130009 1758 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e7384407-c026-4eac-951c-aa486b39e26e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:20:51.130290 kubelet[1758]: E0909 00:20:51.130048 1758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e7384407-c026-4eac-951c-aa486b39e26e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-pczfw" podUID="e7384407-c026-4eac-951c-aa486b39e26e" Sep 9 00:20:51.665099 kubelet[1758]: E0909 00:20:51.664727 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:52.668798 kubelet[1758]: E0909 00:20:52.666698 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:53.667147 kubelet[1758]: E0909 00:20:53.667094 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:54.668235 kubelet[1758]: E0909 00:20:54.668167 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:55.669374 kubelet[1758]: E0909 00:20:55.669282 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:56.670196 kubelet[1758]: E0909 00:20:56.670117 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:56.798706 kernel: hrtimer: interrupt took 8453379 ns Sep 9 00:20:57.723050 kubelet[1758]: E0909 00:20:57.722967 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:57.728620 containerd[1453]: time="2025-09-09T00:20:57.728508200Z" level=info msg="StopPodSandbox for \"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\"" Sep 9 00:20:57.934960 containerd[1453]: time="2025-09-09T00:20:57.934786231Z" level=error msg="StopPodSandbox for \"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\" failed" error="failed to destroy network for sandbox \"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:57.935442 kubelet[1758]: E0909 00:20:57.935339 1758 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" Sep 9 00:20:57.935520 kubelet[1758]: E0909 00:20:57.935460 1758 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9"} Sep 9 00:20:57.935570 kubelet[1758]: E0909 00:20:57.935520 1758 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e396a1a1-1baa-4688-8782-5ce8aaab6921\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:20:57.935664 kubelet[1758]: E0909 00:20:57.935578 1758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e396a1a1-1baa-4688-8782-5ce8aaab6921\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sttrq" podUID="e396a1a1-1baa-4688-8782-5ce8aaab6921" Sep 9 00:20:58.769313 kubelet[1758]: E0909 00:20:58.769226 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:20:59.770080 kubelet[1758]: E0909 00:20:59.769988 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:00.778050 kubelet[1758]: E0909 00:21:00.775185 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:01.563233 update_engine[1439]: I20250909 00:21:01.560161 1439 update_attempter.cc:509] Updating boot flags... 
Sep 9 00:21:01.777505 kubelet[1758]: E0909 00:21:01.777393 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:02.236792 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2437) Sep 9 00:21:02.704524 containerd[1453]: time="2025-09-09T00:21:02.704451056Z" level=info msg="StopPodSandbox for \"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\"" Sep 9 00:21:02.854715 kubelet[1758]: E0909 00:21:02.854330 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:02.944737 containerd[1453]: time="2025-09-09T00:21:02.940280828Z" level=error msg="StopPodSandbox for \"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\" failed" error="failed to destroy network for sandbox \"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:21:02.946235 kubelet[1758]: E0909 00:21:02.946163 1758 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" Sep 9 00:21:02.946327 kubelet[1758]: E0909 00:21:02.946254 1758 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55"} Sep 9 00:21:02.946327 kubelet[1758]: E0909 00:21:02.946305 1758 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e7384407-c026-4eac-951c-aa486b39e26e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:21:02.946466 kubelet[1758]: E0909 00:21:02.946340 1758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e7384407-c026-4eac-951c-aa486b39e26e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-pczfw" podUID="e7384407-c026-4eac-951c-aa486b39e26e" Sep 9 00:21:03.856896 kubelet[1758]: E0909 00:21:03.856814 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:04.280615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1099451596.mount: Deactivated successfully. 
Sep 9 00:21:04.503411 containerd[1453]: time="2025-09-09T00:21:04.501789914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:04.506355 containerd[1453]: time="2025-09-09T00:21:04.506008492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 9 00:21:04.509026 containerd[1453]: time="2025-09-09T00:21:04.508895845Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:04.517796 containerd[1453]: time="2025-09-09T00:21:04.517707602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:04.518860 containerd[1453]: time="2025-09-09T00:21:04.518652258Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 19.63824776s" Sep 9 00:21:04.518860 containerd[1453]: time="2025-09-09T00:21:04.518714221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 9 00:21:04.589586 containerd[1453]: time="2025-09-09T00:21:04.589282450Z" level=info msg="CreateContainer within sandbox \"b3c10cb639cf36a60b549730067521000eb2b07ab023525ed0cd4e5a26d42bc1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 00:21:04.649333 kubelet[1758]: E0909 00:21:04.644619 1758 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:04.659443 containerd[1453]: time="2025-09-09T00:21:04.656484065Z" level=info msg="CreateContainer within sandbox \"b3c10cb639cf36a60b549730067521000eb2b07ab023525ed0cd4e5a26d42bc1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e637c304833d9b0f7ae4c963e9c3f7141dd8b37874620bee7fb3144aec8150b8\"" Sep 9 00:21:04.659443 containerd[1453]: time="2025-09-09T00:21:04.657582696Z" level=info msg="StartContainer for \"e637c304833d9b0f7ae4c963e9c3f7141dd8b37874620bee7fb3144aec8150b8\"" Sep 9 00:21:04.857350 kubelet[1758]: E0909 00:21:04.856947 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:04.861899 systemd[1]: Started cri-containerd-e637c304833d9b0f7ae4c963e9c3f7141dd8b37874620bee7fb3144aec8150b8.scope - libcontainer container e637c304833d9b0f7ae4c963e9c3f7141dd8b37874620bee7fb3144aec8150b8. 
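
The calico/node image above was fetched by containerd's CRI plugin; the 19.6s pull reported in the PullImage line can be reproduced outside kubelet with the containerd Go client. A sketch assuming the v1 client module path and the k8s.io namespace that CRI uses:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.3", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }

        size, err := img.Size(ctx)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
    }
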
Sep 9 00:21:05.041256 containerd[1453]: time="2025-09-09T00:21:05.041166709Z" level=info msg="StartContainer for \"e637c304833d9b0f7ae4c963e9c3f7141dd8b37874620bee7fb3144aec8150b8\" returns successfully" Sep 9 00:21:05.176028 kubelet[1758]: I0909 00:21:05.173263 1758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-f9rr7" podStartSLOduration=2.896620712 podStartE2EDuration="39.173238475s" podCreationTimestamp="2025-09-09 00:20:26 +0000 UTC" firstStartedPulling="2025-09-09 00:20:28.249400793 +0000 UTC m=+4.886316256" lastFinishedPulling="2025-09-09 00:21:04.526018556 +0000 UTC m=+41.162934019" observedRunningTime="2025-09-09 00:21:05.162256614 +0000 UTC m=+41.799172077" watchObservedRunningTime="2025-09-09 00:21:05.173238475 +0000 UTC m=+41.810153938" Sep 9 00:21:05.510636 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 00:21:05.510821 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 9 00:21:05.860685 kubelet[1758]: E0909 00:21:05.860489 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:06.861815 kubelet[1758]: E0909 00:21:06.861644 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:07.967056 kubelet[1758]: E0909 00:21:07.903586 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:08.535871 kernel: bpftool[2709]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 9 00:21:08.706610 containerd[1453]: time="2025-09-09T00:21:08.705132375Z" level=info msg="StopPodSandbox for \"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\"" Sep 9 00:21:08.905928 kubelet[1758]: E0909 00:21:08.905760 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:09.054097 systemd-networkd[1376]: vxlan.calico: Link UP Sep 9 00:21:09.054111 systemd-networkd[1376]: vxlan.calico: Gained carrier Sep 9 00:21:09.231012 containerd[1453]: 2025-09-09 00:21:08.970 [INFO][2722] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" Sep 9 00:21:09.231012 containerd[1453]: 2025-09-09 00:21:08.970 [INFO][2722] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" iface="eth0" netns="/var/run/netns/cni-da63845f-b92b-ad7b-8497-96e402533e1e" Sep 9 00:21:09.231012 containerd[1453]: 2025-09-09 00:21:08.970 [INFO][2722] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" iface="eth0" netns="/var/run/netns/cni-da63845f-b92b-ad7b-8497-96e402533e1e" Sep 9 00:21:09.231012 containerd[1453]: 2025-09-09 00:21:08.971 [INFO][2722] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" iface="eth0" netns="/var/run/netns/cni-da63845f-b92b-ad7b-8497-96e402533e1e" Sep 9 00:21:09.231012 containerd[1453]: 2025-09-09 00:21:08.971 [INFO][2722] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" Sep 9 00:21:09.231012 containerd[1453]: 2025-09-09 00:21:08.971 [INFO][2722] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" Sep 9 00:21:09.231012 containerd[1453]: 2025-09-09 00:21:09.108 [INFO][2747] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" HandleID="k8s-pod-network.1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" Workload="10.0.0.26-k8s-csi--node--driver--sttrq-eth0" Sep 9 00:21:09.231012 containerd[1453]: 2025-09-09 00:21:09.110 [INFO][2747] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:21:09.231012 containerd[1453]: 2025-09-09 00:21:09.110 [INFO][2747] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:21:09.231012 containerd[1453]: 2025-09-09 00:21:09.158 [WARNING][2747] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" HandleID="k8s-pod-network.1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" Workload="10.0.0.26-k8s-csi--node--driver--sttrq-eth0" Sep 9 00:21:09.231012 containerd[1453]: 2025-09-09 00:21:09.158 [INFO][2747] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" HandleID="k8s-pod-network.1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" Workload="10.0.0.26-k8s-csi--node--driver--sttrq-eth0" Sep 9 00:21:09.231012 containerd[1453]: 2025-09-09 00:21:09.175 [INFO][2747] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:21:09.231012 containerd[1453]: 2025-09-09 00:21:09.205 [INFO][2722] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" Sep 9 00:21:09.233956 systemd[1]: run-netns-cni\x2dda63845f\x2db92b\x2dad7b\x2d8497\x2d96e402533e1e.mount: Deactivated successfully. 
Sep 9 00:21:09.249227 containerd[1453]: time="2025-09-09T00:21:09.248845506Z" level=info msg="TearDown network for sandbox \"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\" successfully" Sep 9 00:21:09.249227 containerd[1453]: time="2025-09-09T00:21:09.248903550Z" level=info msg="StopPodSandbox for \"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\" returns successfully" Sep 9 00:21:09.252515 containerd[1453]: time="2025-09-09T00:21:09.251174228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sttrq,Uid:e396a1a1-1baa-4688-8782-5ce8aaab6921,Namespace:calico-system,Attempt:1,}" Sep 9 00:21:09.821784 systemd-networkd[1376]: cali1923d5027d8: Link UP Sep 9 00:21:09.826841 systemd-networkd[1376]: cali1923d5027d8: Gained carrier Sep 9 00:21:09.891676 containerd[1453]: 2025-09-09 00:21:09.457 [INFO][2779] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.26-k8s-csi--node--driver--sttrq-eth0 csi-node-driver- calico-system e396a1a1-1baa-4688-8782-5ce8aaab6921 1348 0 2025-09-09 00:20:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.26 csi-node-driver-sttrq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1923d5027d8 [] [] }} ContainerID="d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534" Namespace="calico-system" Pod="csi-node-driver-sttrq" WorkloadEndpoint="10.0.0.26-k8s-csi--node--driver--sttrq-" Sep 9 00:21:09.891676 containerd[1453]: 2025-09-09 00:21:09.457 [INFO][2779] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534" Namespace="calico-system" Pod="csi-node-driver-sttrq" WorkloadEndpoint="10.0.0.26-k8s-csi--node--driver--sttrq-eth0" Sep 9 00:21:09.891676 containerd[1453]: 2025-09-09 00:21:09.580 [INFO][2794] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534" HandleID="k8s-pod-network.d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534" Workload="10.0.0.26-k8s-csi--node--driver--sttrq-eth0" Sep 9 00:21:09.891676 containerd[1453]: 2025-09-09 00:21:09.580 [INFO][2794] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534" HandleID="k8s-pod-network.d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534" Workload="10.0.0.26-k8s-csi--node--driver--sttrq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000af450), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.26", "pod":"csi-node-driver-sttrq", "timestamp":"2025-09-09 00:21:09.580068605 +0000 UTC"}, Hostname:"10.0.0.26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:21:09.891676 containerd[1453]: 2025-09-09 00:21:09.580 [INFO][2794] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:21:09.891676 containerd[1453]: 2025-09-09 00:21:09.580 [INFO][2794] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:21:09.891676 containerd[1453]: 2025-09-09 00:21:09.584 [INFO][2794] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.26' Sep 9 00:21:09.891676 containerd[1453]: 2025-09-09 00:21:09.609 [INFO][2794] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534" host="10.0.0.26" Sep 9 00:21:09.891676 containerd[1453]: 2025-09-09 00:21:09.647 [INFO][2794] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.26" Sep 9 00:21:09.891676 containerd[1453]: 2025-09-09 00:21:09.686 [INFO][2794] ipam/ipam.go 511: Trying affinity for 192.168.103.128/26 host="10.0.0.26" Sep 9 00:21:09.891676 containerd[1453]: 2025-09-09 00:21:09.704 [INFO][2794] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.128/26 host="10.0.0.26" Sep 9 00:21:09.891676 containerd[1453]: 2025-09-09 00:21:09.722 [INFO][2794] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.128/26 host="10.0.0.26" Sep 9 00:21:09.891676 containerd[1453]: 2025-09-09 00:21:09.722 [INFO][2794] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.103.128/26 handle="k8s-pod-network.d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534" host="10.0.0.26" Sep 9 00:21:09.891676 containerd[1453]: 2025-09-09 00:21:09.732 [INFO][2794] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534 Sep 9 00:21:09.891676 containerd[1453]: 2025-09-09 00:21:09.759 [INFO][2794] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.103.128/26 handle="k8s-pod-network.d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534" host="10.0.0.26" Sep 9 00:21:09.891676 containerd[1453]: 2025-09-09 00:21:09.793 [INFO][2794] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.103.129/26] block=192.168.103.128/26 handle="k8s-pod-network.d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534" host="10.0.0.26" Sep 9 00:21:09.891676 containerd[1453]: 2025-09-09 00:21:09.794 [INFO][2794] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.129/26] handle="k8s-pod-network.d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534" host="10.0.0.26" Sep 9 00:21:09.891676 containerd[1453]: 2025-09-09 00:21:09.794 [INFO][2794] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
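
Calico's IPAM confirms an affinity for block 192.168.103.128/26 on node 10.0.0.26 and claims 192.168.103.129 for the csi-node-driver pod. A quick net/netip check of that containment (a /26 spans 64 addresses, 192.168.103.128 through 192.168.103.191):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.103.128/26") // the IPAM block affine to node 10.0.0.26
        addr := netip.MustParseAddr("192.168.103.129")       // the address handed to csi-node-driver-sttrq
        fmt.Println(block.Contains(addr))                    // true
    }
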
Sep 9 00:21:09.891676 containerd[1453]: 2025-09-09 00:21:09.794 [INFO][2794] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.129/26] IPv6=[] ContainerID="d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534" HandleID="k8s-pod-network.d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534" Workload="10.0.0.26-k8s-csi--node--driver--sttrq-eth0" Sep 9 00:21:09.893106 containerd[1453]: 2025-09-09 00:21:09.809 [INFO][2779] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534" Namespace="calico-system" Pod="csi-node-driver-sttrq" WorkloadEndpoint="10.0.0.26-k8s-csi--node--driver--sttrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.26-k8s-csi--node--driver--sttrq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e396a1a1-1baa-4688-8782-5ce8aaab6921", ResourceVersion:"1348", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.26", ContainerID:"", Pod:"csi-node-driver-sttrq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1923d5027d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:21:09.893106 containerd[1453]: 2025-09-09 00:21:09.809 [INFO][2779] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.129/32] ContainerID="d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534" Namespace="calico-system" Pod="csi-node-driver-sttrq" WorkloadEndpoint="10.0.0.26-k8s-csi--node--driver--sttrq-eth0" Sep 9 00:21:09.893106 containerd[1453]: 2025-09-09 00:21:09.809 [INFO][2779] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1923d5027d8 ContainerID="d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534" Namespace="calico-system" Pod="csi-node-driver-sttrq" WorkloadEndpoint="10.0.0.26-k8s-csi--node--driver--sttrq-eth0" Sep 9 00:21:09.893106 containerd[1453]: 2025-09-09 00:21:09.823 [INFO][2779] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534" Namespace="calico-system" Pod="csi-node-driver-sttrq" WorkloadEndpoint="10.0.0.26-k8s-csi--node--driver--sttrq-eth0" Sep 9 00:21:09.893106 containerd[1453]: 2025-09-09 00:21:09.828 [INFO][2779] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534" Namespace="calico-system" Pod="csi-node-driver-sttrq" 
WorkloadEndpoint="10.0.0.26-k8s-csi--node--driver--sttrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.26-k8s-csi--node--driver--sttrq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e396a1a1-1baa-4688-8782-5ce8aaab6921", ResourceVersion:"1348", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.26", ContainerID:"d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534", Pod:"csi-node-driver-sttrq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1923d5027d8", MAC:"fa:ba:18:b2:86:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:21:09.893106 containerd[1453]: 2025-09-09 00:21:09.877 [INFO][2779] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534" Namespace="calico-system" Pod="csi-node-driver-sttrq" WorkloadEndpoint="10.0.0.26-k8s-csi--node--driver--sttrq-eth0" Sep 9 00:21:09.907512 kubelet[1758]: E0909 00:21:09.907389 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:09.987612 containerd[1453]: time="2025-09-09T00:21:09.986987867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:21:09.987612 containerd[1453]: time="2025-09-09T00:21:09.987092061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:21:09.987612 containerd[1453]: time="2025-09-09T00:21:09.987109345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:21:09.987612 containerd[1453]: time="2025-09-09T00:21:09.987246485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:21:10.053092 systemd[1]: Started cri-containerd-d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534.scope - libcontainer container d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534. 
Sep 9 00:21:10.088980 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:21:10.130402 containerd[1453]: time="2025-09-09T00:21:10.128494852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sttrq,Uid:e396a1a1-1baa-4688-8782-5ce8aaab6921,Namespace:calico-system,Attempt:1,} returns sandbox id \"d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534\"" Sep 9 00:21:10.139724 containerd[1453]: time="2025-09-09T00:21:10.139477268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 00:21:10.421628 systemd-networkd[1376]: vxlan.calico: Gained IPv6LL Sep 9 00:21:10.908579 kubelet[1758]: E0909 00:21:10.908384 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:11.322367 systemd-networkd[1376]: cali1923d5027d8: Gained IPv6LL Sep 9 00:21:11.908854 kubelet[1758]: E0909 00:21:11.908727 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:12.874667 containerd[1453]: time="2025-09-09T00:21:12.874570500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:12.886577 containerd[1453]: time="2025-09-09T00:21:12.884388679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 9 00:21:12.891027 containerd[1453]: time="2025-09-09T00:21:12.888273914Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:12.898283 containerd[1453]: time="2025-09-09T00:21:12.898078616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:12.899845 containerd[1453]: time="2025-09-09T00:21:12.899754919Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.76023081s" Sep 9 00:21:12.899845 containerd[1453]: time="2025-09-09T00:21:12.899829816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 9 00:21:12.909313 kubelet[1758]: E0909 00:21:12.909244 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:12.913207 containerd[1453]: time="2025-09-09T00:21:12.912979899Z" level=info msg="CreateContainer within sandbox \"d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 00:21:12.977826 containerd[1453]: time="2025-09-09T00:21:12.977724633Z" level=info msg="CreateContainer within sandbox \"d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"134ade23b86732a1e4162b4ca3a9a8baa4f7a3741f9af742c56fbb1d99f85a60\"" Sep 9 00:21:12.979855 containerd[1453]: 
time="2025-09-09T00:21:12.978817577Z" level=info msg="StartContainer for \"134ade23b86732a1e4162b4ca3a9a8baa4f7a3741f9af742c56fbb1d99f85a60\"" Sep 9 00:21:13.083006 systemd[1]: Started cri-containerd-134ade23b86732a1e4162b4ca3a9a8baa4f7a3741f9af742c56fbb1d99f85a60.scope - libcontainer container 134ade23b86732a1e4162b4ca3a9a8baa4f7a3741f9af742c56fbb1d99f85a60. Sep 9 00:21:13.242854 containerd[1453]: time="2025-09-09T00:21:13.242738310Z" level=info msg="StartContainer for \"134ade23b86732a1e4162b4ca3a9a8baa4f7a3741f9af742c56fbb1d99f85a60\" returns successfully" Sep 9 00:21:13.247988 containerd[1453]: time="2025-09-09T00:21:13.247473160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 9 00:21:13.910296 kubelet[1758]: E0909 00:21:13.909953 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:14.914163 kubelet[1758]: E0909 00:21:14.914066 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:15.708577 containerd[1453]: time="2025-09-09T00:21:15.707701940Z" level=info msg="StopPodSandbox for \"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\"" Sep 9 00:21:15.915678 kubelet[1758]: E0909 00:21:15.915603 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:15.987312 containerd[1453]: time="2025-09-09T00:21:15.986899585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:15.992919 containerd[1453]: time="2025-09-09T00:21:15.992813680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 9 00:21:15.996986 containerd[1453]: time="2025-09-09T00:21:15.996387356Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:16.004304 containerd[1453]: time="2025-09-09T00:21:16.000887232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:16.004304 containerd[1453]: time="2025-09-09T00:21:16.001894239Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.754357837s" Sep 9 00:21:16.005213 containerd[1453]: time="2025-09-09T00:21:16.005160778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 9 00:21:16.019709 containerd[1453]: time="2025-09-09T00:21:16.019642349Z" level=info msg="CreateContainer within sandbox \"d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 9 00:21:16.061462 containerd[1453]: 2025-09-09 00:21:15.897 [INFO][2948] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" Sep 9 00:21:16.061462 containerd[1453]: 2025-09-09 00:21:15.897 [INFO][2948] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" iface="eth0" netns="/var/run/netns/cni-28f55b0f-41c2-4f07-23f9-6724a1f05332" Sep 9 00:21:16.061462 containerd[1453]: 2025-09-09 00:21:15.899 [INFO][2948] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" iface="eth0" netns="/var/run/netns/cni-28f55b0f-41c2-4f07-23f9-6724a1f05332" Sep 9 00:21:16.061462 containerd[1453]: 2025-09-09 00:21:15.903 [INFO][2948] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" iface="eth0" netns="/var/run/netns/cni-28f55b0f-41c2-4f07-23f9-6724a1f05332" Sep 9 00:21:16.061462 containerd[1453]: 2025-09-09 00:21:15.904 [INFO][2948] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" Sep 9 00:21:16.061462 containerd[1453]: 2025-09-09 00:21:15.904 [INFO][2948] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" Sep 9 00:21:16.061462 containerd[1453]: 2025-09-09 00:21:15.979 [INFO][2958] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" HandleID="k8s-pod-network.541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" Workload="10.0.0.26-k8s-nginx--deployment--7fcdb87857--pczfw-eth0" Sep 9 00:21:16.061462 containerd[1453]: 2025-09-09 00:21:15.981 [INFO][2958] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:21:16.061462 containerd[1453]: 2025-09-09 00:21:15.982 [INFO][2958] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:21:16.061462 containerd[1453]: 2025-09-09 00:21:16.039 [WARNING][2958] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" HandleID="k8s-pod-network.541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" Workload="10.0.0.26-k8s-nginx--deployment--7fcdb87857--pczfw-eth0" Sep 9 00:21:16.061462 containerd[1453]: 2025-09-09 00:21:16.040 [INFO][2958] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" HandleID="k8s-pod-network.541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" Workload="10.0.0.26-k8s-nginx--deployment--7fcdb87857--pczfw-eth0" Sep 9 00:21:16.061462 containerd[1453]: 2025-09-09 00:21:16.049 [INFO][2958] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:21:16.061462 containerd[1453]: 2025-09-09 00:21:16.057 [INFO][2948] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" Sep 9 00:21:16.062185 containerd[1453]: time="2025-09-09T00:21:16.061737370Z" level=info msg="TearDown network for sandbox \"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\" successfully" Sep 9 00:21:16.062185 containerd[1453]: time="2025-09-09T00:21:16.061781616Z" level=info msg="StopPodSandbox for \"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\" returns successfully" Sep 9 00:21:16.064822 containerd[1453]: time="2025-09-09T00:21:16.064759155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-pczfw,Uid:e7384407-c026-4eac-951c-aa486b39e26e,Namespace:default,Attempt:1,}" Sep 9 00:21:16.064802 systemd[1]: run-netns-cni\x2d28f55b0f\x2d41c2\x2d4f07\x2d23f9\x2d6724a1f05332.mount: Deactivated successfully. Sep 9 00:21:16.091184 containerd[1453]: time="2025-09-09T00:21:16.091056181Z" level=info msg="CreateContainer within sandbox \"d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7c70cdf48a2fa091749ae81b0af64fd82e505247bf07e7af4f03ede1d8bcbc79\"" Sep 9 00:21:16.093095 containerd[1453]: time="2025-09-09T00:21:16.092944927Z" level=info msg="StartContainer for \"7c70cdf48a2fa091749ae81b0af64fd82e505247bf07e7af4f03ede1d8bcbc79\"" Sep 9 00:21:16.262475 systemd[1]: Started cri-containerd-7c70cdf48a2fa091749ae81b0af64fd82e505247bf07e7af4f03ede1d8bcbc79.scope - libcontainer container 7c70cdf48a2fa091749ae81b0af64fd82e505247bf07e7af4f03ede1d8bcbc79. Sep 9 00:21:16.923112 kubelet[1758]: E0909 00:21:16.916764 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:16.923112 kubelet[1758]: I0909 00:21:16.918582 1758 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 9 00:21:16.923112 kubelet[1758]: I0909 00:21:16.918630 1758 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 9 00:21:17.091185 containerd[1453]: time="2025-09-09T00:21:17.089923596Z" level=info msg="StartContainer for \"7c70cdf48a2fa091749ae81b0af64fd82e505247bf07e7af4f03ede1d8bcbc79\" returns successfully" Sep 9 00:21:17.399346 systemd-networkd[1376]: cali03e372d961f: Link UP Sep 9 00:21:17.402252 systemd-networkd[1376]: cali03e372d961f: Gained carrier Sep 9 00:21:17.609077 kubelet[1758]: I0909 00:21:17.608768 1758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-sttrq" podStartSLOduration=45.740788186 podStartE2EDuration="51.608739007s" podCreationTimestamp="2025-09-09 00:20:26 +0000 UTC" firstStartedPulling="2025-09-09 00:21:10.138972309 +0000 UTC m=+46.775887782" lastFinishedPulling="2025-09-09 00:21:16.00692314 +0000 UTC m=+52.643838603" observedRunningTime="2025-09-09 00:21:17.573861199 +0000 UTC m=+54.210776682" watchObservedRunningTime="2025-09-09 00:21:17.608739007 +0000 UTC m=+54.245654470" Sep 9 00:21:17.620935 containerd[1453]: 2025-09-09 00:21:16.324 [INFO][2973] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.26-k8s-nginx--deployment--7fcdb87857--pczfw-eth0 nginx-deployment-7fcdb87857- default e7384407-c026-4eac-951c-aa486b39e26e 1378 0 2025-09-09 00:20:48 +0000 
UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.26 nginx-deployment-7fcdb87857-pczfw eth0 default [] [] [kns.default ksa.default.default] cali03e372d961f [] [] }} ContainerID="696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f" Namespace="default" Pod="nginx-deployment-7fcdb87857-pczfw" WorkloadEndpoint="10.0.0.26-k8s-nginx--deployment--7fcdb87857--pczfw-" Sep 9 00:21:17.620935 containerd[1453]: 2025-09-09 00:21:16.325 [INFO][2973] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f" Namespace="default" Pod="nginx-deployment-7fcdb87857-pczfw" WorkloadEndpoint="10.0.0.26-k8s-nginx--deployment--7fcdb87857--pczfw-eth0" Sep 9 00:21:17.620935 containerd[1453]: 2025-09-09 00:21:16.420 [INFO][3014] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f" HandleID="k8s-pod-network.696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f" Workload="10.0.0.26-k8s-nginx--deployment--7fcdb87857--pczfw-eth0" Sep 9 00:21:17.620935 containerd[1453]: 2025-09-09 00:21:16.420 [INFO][3014] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f" HandleID="k8s-pod-network.696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f" Workload="10.0.0.26-k8s-nginx--deployment--7fcdb87857--pczfw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139450), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.26", "pod":"nginx-deployment-7fcdb87857-pczfw", "timestamp":"2025-09-09 00:21:16.420472907 +0000 UTC"}, Hostname:"10.0.0.26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:21:17.620935 containerd[1453]: 2025-09-09 00:21:16.420 [INFO][3014] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:21:17.620935 containerd[1453]: 2025-09-09 00:21:16.420 [INFO][3014] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:21:17.620935 containerd[1453]: 2025-09-09 00:21:16.421 [INFO][3014] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.26' Sep 9 00:21:17.620935 containerd[1453]: 2025-09-09 00:21:16.642 [INFO][3014] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f" host="10.0.0.26" Sep 9 00:21:17.620935 containerd[1453]: 2025-09-09 00:21:16.678 [INFO][3014] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.26" Sep 9 00:21:17.620935 containerd[1453]: 2025-09-09 00:21:16.706 [INFO][3014] ipam/ipam.go 511: Trying affinity for 192.168.103.128/26 host="10.0.0.26" Sep 9 00:21:17.620935 containerd[1453]: 2025-09-09 00:21:16.714 [INFO][3014] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.128/26 host="10.0.0.26" Sep 9 00:21:17.620935 containerd[1453]: 2025-09-09 00:21:17.148 [INFO][3014] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.128/26 host="10.0.0.26" Sep 9 00:21:17.620935 containerd[1453]: 2025-09-09 00:21:17.148 [INFO][3014] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.103.128/26 handle="k8s-pod-network.696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f" host="10.0.0.26" Sep 9 00:21:17.620935 containerd[1453]: 2025-09-09 00:21:17.167 [INFO][3014] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f Sep 9 00:21:17.620935 containerd[1453]: 2025-09-09 00:21:17.243 [INFO][3014] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.103.128/26 handle="k8s-pod-network.696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f" host="10.0.0.26" Sep 9 00:21:17.620935 containerd[1453]: 2025-09-09 00:21:17.382 [INFO][3014] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.103.130/26] block=192.168.103.128/26 handle="k8s-pod-network.696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f" host="10.0.0.26" Sep 9 00:21:17.620935 containerd[1453]: 2025-09-09 00:21:17.382 [INFO][3014] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.130/26] handle="k8s-pod-network.696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f" host="10.0.0.26" Sep 9 00:21:17.620935 containerd[1453]: 2025-09-09 00:21:17.382 [INFO][3014] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:21:17.620935 containerd[1453]: 2025-09-09 00:21:17.382 [INFO][3014] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.130/26] IPv6=[] ContainerID="696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f" HandleID="k8s-pod-network.696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f" Workload="10.0.0.26-k8s-nginx--deployment--7fcdb87857--pczfw-eth0" Sep 9 00:21:17.623333 containerd[1453]: 2025-09-09 00:21:17.390 [INFO][2973] cni-plugin/k8s.go 418: Populated endpoint ContainerID="696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f" Namespace="default" Pod="nginx-deployment-7fcdb87857-pczfw" WorkloadEndpoint="10.0.0.26-k8s-nginx--deployment--7fcdb87857--pczfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.26-k8s-nginx--deployment--7fcdb87857--pczfw-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"e7384407-c026-4eac-951c-aa486b39e26e", ResourceVersion:"1378", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.26", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-pczfw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali03e372d961f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:21:17.623333 containerd[1453]: 2025-09-09 00:21:17.390 [INFO][2973] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.130/32] ContainerID="696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f" Namespace="default" Pod="nginx-deployment-7fcdb87857-pczfw" WorkloadEndpoint="10.0.0.26-k8s-nginx--deployment--7fcdb87857--pczfw-eth0" Sep 9 00:21:17.623333 containerd[1453]: 2025-09-09 00:21:17.390 [INFO][2973] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03e372d961f ContainerID="696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f" Namespace="default" Pod="nginx-deployment-7fcdb87857-pczfw" WorkloadEndpoint="10.0.0.26-k8s-nginx--deployment--7fcdb87857--pczfw-eth0" Sep 9 00:21:17.623333 containerd[1453]: 2025-09-09 00:21:17.402 [INFO][2973] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f" Namespace="default" Pod="nginx-deployment-7fcdb87857-pczfw" WorkloadEndpoint="10.0.0.26-k8s-nginx--deployment--7fcdb87857--pczfw-eth0" Sep 9 00:21:17.623333 containerd[1453]: 2025-09-09 00:21:17.403 [INFO][2973] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f" Namespace="default" Pod="nginx-deployment-7fcdb87857-pczfw" 
WorkloadEndpoint="10.0.0.26-k8s-nginx--deployment--7fcdb87857--pczfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.26-k8s-nginx--deployment--7fcdb87857--pczfw-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"e7384407-c026-4eac-951c-aa486b39e26e", ResourceVersion:"1378", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.26", ContainerID:"696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f", Pod:"nginx-deployment-7fcdb87857-pczfw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali03e372d961f", MAC:"7a:63:70:01:f6:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:21:17.623333 containerd[1453]: 2025-09-09 00:21:17.609 [INFO][2973] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f" Namespace="default" Pod="nginx-deployment-7fcdb87857-pczfw" WorkloadEndpoint="10.0.0.26-k8s-nginx--deployment--7fcdb87857--pczfw-eth0" Sep 9 00:21:17.805921 containerd[1453]: time="2025-09-09T00:21:17.805466859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:21:17.807381 containerd[1453]: time="2025-09-09T00:21:17.805852116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:21:17.807381 containerd[1453]: time="2025-09-09T00:21:17.805899748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:21:17.807381 containerd[1453]: time="2025-09-09T00:21:17.806224899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:21:17.873493 systemd[1]: Started cri-containerd-696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f.scope - libcontainer container 696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f. 
Sep 9 00:21:17.917420 kubelet[1758]: E0909 00:21:17.917340 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:17.938066 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:21:18.013036 containerd[1453]: time="2025-09-09T00:21:18.012877152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-pczfw,Uid:e7384407-c026-4eac-951c-aa486b39e26e,Namespace:default,Attempt:1,} returns sandbox id \"696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f\"" Sep 9 00:21:18.014924 containerd[1453]: time="2025-09-09T00:21:18.014776931Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 9 00:21:18.918786 kubelet[1758]: E0909 00:21:18.918674 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:19.322837 systemd-networkd[1376]: cali03e372d961f: Gained IPv6LL Sep 9 00:21:19.919364 kubelet[1758]: E0909 00:21:19.919195 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:20.921351 kubelet[1758]: E0909 00:21:20.921248 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:21.923182 kubelet[1758]: E0909 00:21:21.923085 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:22.924650 kubelet[1758]: E0909 00:21:22.924576 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:23.928918 kubelet[1758]: E0909 00:21:23.928838 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:24.023233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1956795662.mount: Deactivated successfully. Sep 9 00:21:24.640472 kubelet[1758]: E0909 00:21:24.640367 1758 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:24.666905 containerd[1453]: time="2025-09-09T00:21:24.666838500Z" level=info msg="StopPodSandbox for \"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\"" Sep 9 00:21:24.888481 containerd[1453]: 2025-09-09 00:21:24.790 [WARNING][3106] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.26-k8s-csi--node--driver--sttrq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e396a1a1-1baa-4688-8782-5ce8aaab6921", ResourceVersion:"1395", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.26", ContainerID:"d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534", Pod:"csi-node-driver-sttrq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1923d5027d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:21:24.888481 containerd[1453]: 2025-09-09 00:21:24.795 [INFO][3106] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" Sep 9 00:21:24.888481 containerd[1453]: 2025-09-09 00:21:24.795 [INFO][3106] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" iface="eth0" netns="" Sep 9 00:21:24.888481 containerd[1453]: 2025-09-09 00:21:24.795 [INFO][3106] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" Sep 9 00:21:24.888481 containerd[1453]: 2025-09-09 00:21:24.795 [INFO][3106] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" Sep 9 00:21:24.888481 containerd[1453]: 2025-09-09 00:21:24.847 [INFO][3115] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" HandleID="k8s-pod-network.1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" Workload="10.0.0.26-k8s-csi--node--driver--sttrq-eth0" Sep 9 00:21:24.888481 containerd[1453]: 2025-09-09 00:21:24.848 [INFO][3115] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:21:24.888481 containerd[1453]: 2025-09-09 00:21:24.848 [INFO][3115] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:21:24.888481 containerd[1453]: 2025-09-09 00:21:24.872 [WARNING][3115] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" HandleID="k8s-pod-network.1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" Workload="10.0.0.26-k8s-csi--node--driver--sttrq-eth0" Sep 9 00:21:24.888481 containerd[1453]: 2025-09-09 00:21:24.872 [INFO][3115] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" HandleID="k8s-pod-network.1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" Workload="10.0.0.26-k8s-csi--node--driver--sttrq-eth0" Sep 9 00:21:24.888481 containerd[1453]: 2025-09-09 00:21:24.878 [INFO][3115] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:21:24.888481 containerd[1453]: 2025-09-09 00:21:24.883 [INFO][3106] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" Sep 9 00:21:24.889118 containerd[1453]: time="2025-09-09T00:21:24.888555305Z" level=info msg="TearDown network for sandbox \"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\" successfully" Sep 9 00:21:24.889118 containerd[1453]: time="2025-09-09T00:21:24.888595583Z" level=info msg="StopPodSandbox for \"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\" returns successfully" Sep 9 00:21:24.895724 containerd[1453]: time="2025-09-09T00:21:24.895554314Z" level=info msg="RemovePodSandbox for \"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\"" Sep 9 00:21:24.895724 containerd[1453]: time="2025-09-09T00:21:24.895612134Z" level=info msg="Forcibly stopping sandbox \"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\"" Sep 9 00:21:24.932067 kubelet[1758]: E0909 00:21:24.931259 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:25.128908 containerd[1453]: 2025-09-09 00:21:24.986 [WARNING][3133] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.26-k8s-csi--node--driver--sttrq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e396a1a1-1baa-4688-8782-5ce8aaab6921", ResourceVersion:"1395", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.26", ContainerID:"d50bce177112bbafbd8bae7af878989e2afa86911d85e7ad400788899d1b2534", Pod:"csi-node-driver-sttrq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1923d5027d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:21:25.128908 containerd[1453]: 2025-09-09 00:21:24.987 [INFO][3133] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" Sep 9 00:21:25.128908 containerd[1453]: 2025-09-09 00:21:24.987 [INFO][3133] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" iface="eth0" netns="" Sep 9 00:21:25.128908 containerd[1453]: 2025-09-09 00:21:24.987 [INFO][3133] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" Sep 9 00:21:25.128908 containerd[1453]: 2025-09-09 00:21:24.987 [INFO][3133] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" Sep 9 00:21:25.128908 containerd[1453]: 2025-09-09 00:21:25.053 [INFO][3141] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" HandleID="k8s-pod-network.1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" Workload="10.0.0.26-k8s-csi--node--driver--sttrq-eth0" Sep 9 00:21:25.128908 containerd[1453]: 2025-09-09 00:21:25.054 [INFO][3141] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:21:25.128908 containerd[1453]: 2025-09-09 00:21:25.054 [INFO][3141] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:21:25.128908 containerd[1453]: 2025-09-09 00:21:25.100 [WARNING][3141] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" HandleID="k8s-pod-network.1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" Workload="10.0.0.26-k8s-csi--node--driver--sttrq-eth0" Sep 9 00:21:25.128908 containerd[1453]: 2025-09-09 00:21:25.100 [INFO][3141] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" HandleID="k8s-pod-network.1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" Workload="10.0.0.26-k8s-csi--node--driver--sttrq-eth0" Sep 9 00:21:25.128908 containerd[1453]: 2025-09-09 00:21:25.111 [INFO][3141] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:21:25.128908 containerd[1453]: 2025-09-09 00:21:25.121 [INFO][3133] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9" Sep 9 00:21:25.128908 containerd[1453]: time="2025-09-09T00:21:25.127523160Z" level=info msg="TearDown network for sandbox \"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\" successfully" Sep 9 00:21:25.736818 containerd[1453]: time="2025-09-09T00:21:25.735934689Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:21:25.736818 containerd[1453]: time="2025-09-09T00:21:25.736188687Z" level=info msg="RemovePodSandbox \"1a9ffa3604c4d338c83f301aec1fe3446cefd07da39021123f81c61c904946e9\" returns successfully" Sep 9 00:21:25.739140 containerd[1453]: time="2025-09-09T00:21:25.738772890Z" level=info msg="StopPodSandbox for \"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\"" Sep 9 00:21:25.930364 containerd[1453]: 2025-09-09 00:21:25.842 [WARNING][3161] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.26-k8s-nginx--deployment--7fcdb87857--pczfw-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"e7384407-c026-4eac-951c-aa486b39e26e", ResourceVersion:"1396", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.26", ContainerID:"696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f", Pod:"nginx-deployment-7fcdb87857-pczfw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali03e372d961f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:21:25.930364 containerd[1453]: 2025-09-09 00:21:25.842 [INFO][3161] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" Sep 9 00:21:25.930364 containerd[1453]: 2025-09-09 00:21:25.842 [INFO][3161] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" iface="eth0" netns="" Sep 9 00:21:25.930364 containerd[1453]: 2025-09-09 00:21:25.842 [INFO][3161] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" Sep 9 00:21:25.930364 containerd[1453]: 2025-09-09 00:21:25.842 [INFO][3161] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" Sep 9 00:21:25.930364 containerd[1453]: 2025-09-09 00:21:25.882 [INFO][3169] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" HandleID="k8s-pod-network.541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" Workload="10.0.0.26-k8s-nginx--deployment--7fcdb87857--pczfw-eth0" Sep 9 00:21:25.930364 containerd[1453]: 2025-09-09 00:21:25.882 [INFO][3169] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:21:25.930364 containerd[1453]: 2025-09-09 00:21:25.882 [INFO][3169] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:21:25.930364 containerd[1453]: 2025-09-09 00:21:25.902 [WARNING][3169] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" HandleID="k8s-pod-network.541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" Workload="10.0.0.26-k8s-nginx--deployment--7fcdb87857--pczfw-eth0" Sep 9 00:21:25.930364 containerd[1453]: 2025-09-09 00:21:25.902 [INFO][3169] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" HandleID="k8s-pod-network.541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" Workload="10.0.0.26-k8s-nginx--deployment--7fcdb87857--pczfw-eth0" Sep 9 00:21:25.930364 containerd[1453]: 2025-09-09 00:21:25.913 [INFO][3169] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:21:25.930364 containerd[1453]: 2025-09-09 00:21:25.921 [INFO][3161] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" Sep 9 00:21:25.930949 containerd[1453]: time="2025-09-09T00:21:25.930398977Z" level=info msg="TearDown network for sandbox \"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\" successfully" Sep 9 00:21:25.930949 containerd[1453]: time="2025-09-09T00:21:25.930442120Z" level=info msg="StopPodSandbox for \"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\" returns successfully" Sep 9 00:21:25.931738 kubelet[1758]: E0909 00:21:25.931563 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:25.931972 containerd[1453]: time="2025-09-09T00:21:25.931618484Z" level=info msg="RemovePodSandbox for \"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\"" Sep 9 00:21:25.931972 containerd[1453]: time="2025-09-09T00:21:25.931668349Z" level=info msg="Forcibly stopping sandbox \"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\"" Sep 9 00:21:26.163286 containerd[1453]: 2025-09-09 00:21:26.066 [WARNING][3186] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.26-k8s-nginx--deployment--7fcdb87857--pczfw-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"e7384407-c026-4eac-951c-aa486b39e26e", ResourceVersion:"1396", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.26", ContainerID:"696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f", Pod:"nginx-deployment-7fcdb87857-pczfw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali03e372d961f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:21:26.163286 containerd[1453]: 2025-09-09 00:21:26.067 [INFO][3186] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" Sep 9 00:21:26.163286 containerd[1453]: 2025-09-09 00:21:26.067 [INFO][3186] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" iface="eth0" netns="" Sep 9 00:21:26.163286 containerd[1453]: 2025-09-09 00:21:26.067 [INFO][3186] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" Sep 9 00:21:26.163286 containerd[1453]: 2025-09-09 00:21:26.067 [INFO][3186] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" Sep 9 00:21:26.163286 containerd[1453]: 2025-09-09 00:21:26.129 [INFO][3194] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" HandleID="k8s-pod-network.541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" Workload="10.0.0.26-k8s-nginx--deployment--7fcdb87857--pczfw-eth0" Sep 9 00:21:26.163286 containerd[1453]: 2025-09-09 00:21:26.131 [INFO][3194] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:21:26.163286 containerd[1453]: 2025-09-09 00:21:26.131 [INFO][3194] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:21:26.163286 containerd[1453]: 2025-09-09 00:21:26.145 [WARNING][3194] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" HandleID="k8s-pod-network.541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" Workload="10.0.0.26-k8s-nginx--deployment--7fcdb87857--pczfw-eth0" Sep 9 00:21:26.163286 containerd[1453]: 2025-09-09 00:21:26.145 [INFO][3194] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" HandleID="k8s-pod-network.541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" Workload="10.0.0.26-k8s-nginx--deployment--7fcdb87857--pczfw-eth0" Sep 9 00:21:26.163286 containerd[1453]: 2025-09-09 00:21:26.155 [INFO][3194] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:21:26.163286 containerd[1453]: 2025-09-09 00:21:26.159 [INFO][3186] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55" Sep 9 00:21:26.163286 containerd[1453]: time="2025-09-09T00:21:26.162375243Z" level=info msg="TearDown network for sandbox \"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\" successfully" Sep 9 00:21:26.182401 containerd[1453]: time="2025-09-09T00:21:26.182139429Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:21:26.182401 containerd[1453]: time="2025-09-09T00:21:26.182241947Z" level=info msg="RemovePodSandbox \"541cfd4eaca6667b5bb3384d3b0807093933cd8052acc69dcd23ac262f506f55\" returns successfully" Sep 9 00:21:26.933508 kubelet[1758]: E0909 00:21:26.931756 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:27.491648 containerd[1453]: time="2025-09-09T00:21:27.488283032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:27.495974 containerd[1453]: time="2025-09-09T00:21:27.495844152Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73307810" Sep 9 00:21:27.501799 containerd[1453]: time="2025-09-09T00:21:27.501690136Z" level=info msg="ImageCreate event name:\"sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:27.508489 containerd[1453]: time="2025-09-09T00:21:27.508317723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:27.510799 containerd[1453]: time="2025-09-09T00:21:27.509661916Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530\", size \"73307688\" in 9.494832433s" Sep 9 00:21:27.510799 containerd[1453]: time="2025-09-09T00:21:27.509730839Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48\"" Sep 9 00:21:27.521181 containerd[1453]: time="2025-09-09T00:21:27.520997279Z" level=info 
msg="CreateContainer within sandbox \"696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Sep 9 00:21:27.559137 containerd[1453]: time="2025-09-09T00:21:27.558148303Z" level=info msg="CreateContainer within sandbox \"696f1a4b9559487ddd8b3cf1e61dcbf837d9f62d45aebce5c3baf0a582fe230f\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"f35960fdc5bfe627ee529e6f48c029e550cc16ecbe5947a646034ce05ef9d3b1\"" Sep 9 00:21:27.564657 containerd[1453]: time="2025-09-09T00:21:27.560791913Z" level=info msg="StartContainer for \"f35960fdc5bfe627ee529e6f48c029e550cc16ecbe5947a646034ce05ef9d3b1\"" Sep 9 00:21:27.722914 systemd[1]: Started cri-containerd-f35960fdc5bfe627ee529e6f48c029e550cc16ecbe5947a646034ce05ef9d3b1.scope - libcontainer container f35960fdc5bfe627ee529e6f48c029e550cc16ecbe5947a646034ce05ef9d3b1. Sep 9 00:21:27.786788 containerd[1453]: time="2025-09-09T00:21:27.786622836Z" level=info msg="StartContainer for \"f35960fdc5bfe627ee529e6f48c029e550cc16ecbe5947a646034ce05ef9d3b1\" returns successfully" Sep 9 00:21:27.933108 kubelet[1758]: E0909 00:21:27.933012 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:28.377265 kubelet[1758]: I0909 00:21:28.377129 1758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-pczfw" podStartSLOduration=30.87768653 podStartE2EDuration="40.377103202s" podCreationTimestamp="2025-09-09 00:20:48 +0000 UTC" firstStartedPulling="2025-09-09 00:21:18.014095082 +0000 UTC m=+54.651010545" lastFinishedPulling="2025-09-09 00:21:27.513511754 +0000 UTC m=+64.150427217" observedRunningTime="2025-09-09 00:21:28.376931964 +0000 UTC m=+65.013847457" watchObservedRunningTime="2025-09-09 00:21:28.377103202 +0000 UTC m=+65.014018675" Sep 9 00:21:28.933718 kubelet[1758]: E0909 00:21:28.933488 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:29.935148 kubelet[1758]: E0909 00:21:29.934429 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:30.935631 kubelet[1758]: E0909 00:21:30.935412 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:31.936499 kubelet[1758]: E0909 00:21:31.936367 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:32.374533 systemd[1]: Created slice kubepods-besteffort-pod9cd9e196_3a6f_4fed_bfb0_a962a9533273.slice - libcontainer container kubepods-besteffort-pod9cd9e196_3a6f_4fed_bfb0_a962a9533273.slice. 
Sep 9 00:21:32.481000 kubelet[1758]: I0909 00:21:32.480849 1758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs6mv\" (UniqueName: \"kubernetes.io/projected/9cd9e196-3a6f-4fed-bfb0-a962a9533273-kube-api-access-bs6mv\") pod \"nfs-server-provisioner-0\" (UID: \"9cd9e196-3a6f-4fed-bfb0-a962a9533273\") " pod="default/nfs-server-provisioner-0" Sep 9 00:21:32.481000 kubelet[1758]: I0909 00:21:32.480913 1758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/9cd9e196-3a6f-4fed-bfb0-a962a9533273-data\") pod \"nfs-server-provisioner-0\" (UID: \"9cd9e196-3a6f-4fed-bfb0-a962a9533273\") " pod="default/nfs-server-provisioner-0" Sep 9 00:21:32.682619 containerd[1453]: time="2025-09-09T00:21:32.681524125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9cd9e196-3a6f-4fed-bfb0-a962a9533273,Namespace:default,Attempt:0,}" Sep 9 00:21:32.937730 kubelet[1758]: E0909 00:21:32.937322 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:33.589447 systemd-networkd[1376]: cali60e51b789ff: Link UP Sep 9 00:21:33.597938 systemd-networkd[1376]: cali60e51b789ff: Gained carrier Sep 9 00:21:33.640216 containerd[1453]: 2025-09-09 00:21:33.268 [INFO][3294] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.26-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 9cd9e196-3a6f-4fed-bfb0-a962a9533273 1464 0 2025-09-09 00:21:32 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.26 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.26-k8s-nfs--server--provisioner--0-" Sep 9 00:21:33.640216 containerd[1453]: 2025-09-09 00:21:33.269 [INFO][3294] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.26-k8s-nfs--server--provisioner--0-eth0" Sep 9 00:21:33.640216 containerd[1453]: 2025-09-09 00:21:33.368 [INFO][3308] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a" HandleID="k8s-pod-network.a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a" Workload="10.0.0.26-k8s-nfs--server--provisioner--0-eth0" Sep 9 00:21:33.640216 containerd[1453]: 2025-09-09 00:21:33.369 [INFO][3308] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a" HandleID="k8s-pod-network.a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a" Workload="10.0.0.26-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032d6e0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.26", "pod":"nfs-server-provisioner-0", "timestamp":"2025-09-09 00:21:33.368713262 +0000 UTC"}, Hostname:"10.0.0.26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:21:33.640216 containerd[1453]: 2025-09-09 00:21:33.370 [INFO][3308] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:21:33.640216 containerd[1453]: 2025-09-09 00:21:33.370 [INFO][3308] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:21:33.640216 containerd[1453]: 2025-09-09 00:21:33.370 [INFO][3308] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.26' Sep 9 00:21:33.640216 containerd[1453]: 2025-09-09 00:21:33.401 [INFO][3308] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a" host="10.0.0.26" Sep 9 00:21:33.640216 containerd[1453]: 2025-09-09 00:21:33.444 [INFO][3308] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.26" Sep 9 00:21:33.640216 containerd[1453]: 2025-09-09 00:21:33.490 [INFO][3308] ipam/ipam.go 511: Trying affinity for 192.168.103.128/26 host="10.0.0.26" Sep 9 00:21:33.640216 containerd[1453]: 2025-09-09 00:21:33.502 [INFO][3308] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.128/26 host="10.0.0.26" Sep 9 00:21:33.640216 containerd[1453]: 2025-09-09 00:21:33.510 [INFO][3308] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.128/26 host="10.0.0.26" Sep 9 00:21:33.640216 containerd[1453]: 2025-09-09 00:21:33.511 [INFO][3308] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.103.128/26 handle="k8s-pod-network.a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a" host="10.0.0.26" Sep 9 00:21:33.640216 containerd[1453]: 2025-09-09 00:21:33.520 [INFO][3308] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a Sep 9 00:21:33.640216 containerd[1453]: 2025-09-09 00:21:33.538 [INFO][3308] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.103.128/26 handle="k8s-pod-network.a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a" host="10.0.0.26" Sep 9 00:21:33.640216 containerd[1453]: 2025-09-09 00:21:33.557 [INFO][3308] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.103.131/26] block=192.168.103.128/26 handle="k8s-pod-network.a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a" host="10.0.0.26" Sep 9 00:21:33.640216 containerd[1453]: 2025-09-09 00:21:33.557 [INFO][3308] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.131/26] handle="k8s-pod-network.a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a" host="10.0.0.26" Sep 9 00:21:33.640216 containerd[1453]: 2025-09-09 00:21:33.558 [INFO][3308] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:21:33.640216 containerd[1453]: 2025-09-09 00:21:33.558 [INFO][3308] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.131/26] IPv6=[] ContainerID="a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a" HandleID="k8s-pod-network.a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a" Workload="10.0.0.26-k8s-nfs--server--provisioner--0-eth0" Sep 9 00:21:33.641779 containerd[1453]: 2025-09-09 00:21:33.574 [INFO][3294] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.26-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.26-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"9cd9e196-3a6f-4fed-bfb0-a962a9533273", ResourceVersion:"1464", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 21, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.26", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.103.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:21:33.641779 containerd[1453]: 2025-09-09 00:21:33.574 [INFO][3294] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.131/32] ContainerID="a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.26-k8s-nfs--server--provisioner--0-eth0" Sep 9 00:21:33.641779 containerd[1453]: 2025-09-09 00:21:33.574 [INFO][3294] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.26-k8s-nfs--server--provisioner--0-eth0" Sep 9 00:21:33.641779 containerd[1453]: 2025-09-09 00:21:33.592 [INFO][3294] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.26-k8s-nfs--server--provisioner--0-eth0" Sep 9 00:21:33.642168 containerd[1453]: 2025-09-09 00:21:33.594 [INFO][3294] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.26-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.26-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"9cd9e196-3a6f-4fed-bfb0-a962a9533273", ResourceVersion:"1464", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 21, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.26", ContainerID:"a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.103.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"c2:88:f9:c8:23:f3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:21:33.642168 containerd[1453]: 2025-09-09 00:21:33.626 [INFO][3294] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.26-k8s-nfs--server--provisioner--0-eth0" Sep 9 00:21:33.716571 containerd[1453]: time="2025-09-09T00:21:33.697215071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:21:33.716571 containerd[1453]: time="2025-09-09T00:21:33.698111528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:21:33.716571 containerd[1453]: time="2025-09-09T00:21:33.698129733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:21:33.716571 containerd[1453]: time="2025-09-09T00:21:33.699008056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:21:33.763961 systemd[1]: Started cri-containerd-a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a.scope - libcontainer container a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a. 
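
Note on the port values above: the Calico WorkloadEndpoint dumps list the provisioner's ports in hexadecimal (Port:0x801, 0x8023, 0x4e50, 0x36b, 0x6f, 0x296). A minimal Python sketch, checking them against the port names declared in the same endpoint (nfs, nlockmgr, mountd, rquotad, rpcbind, statd), confirms they are the expected NFS-related decimal ports:

    # Hex Port values copied from the WorkloadEndpoint dump above; each TCP/UDP
    # pair shares a number, so one entry per service is enough for the check.
    hex_ports = {
        "nfs": 0x801,
        "nlockmgr": 0x8023,
        "mountd": 0x4e50,
        "rquotad": 0x36b,
        "rpcbind": 0x6f,
        "statd": 0x296,
    }

    # Decimal ports named in the nfs-server-provisioner endpoint definition.
    expected = {
        "nfs": 2049,
        "nlockmgr": 32803,
        "mountd": 20048,
        "rquotad": 875,
        "rpcbind": 111,
        "statd": 662,
    }

    for name, value in hex_ports.items():
        assert value == expected[name], (name, hex(value), expected[name])
        print(f"{name}: 0x{value:x} == {value}")
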
Sep 9 00:21:33.798802 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:21:33.875708 containerd[1453]: time="2025-09-09T00:21:33.875350304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9cd9e196-3a6f-4fed-bfb0-a962a9533273,Namespace:default,Attempt:0,} returns sandbox id \"a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a\"" Sep 9 00:21:33.879735 containerd[1453]: time="2025-09-09T00:21:33.879587030Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Sep 9 00:21:33.938840 kubelet[1758]: E0909 00:21:33.938765 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:34.814947 systemd-networkd[1376]: cali60e51b789ff: Gained IPv6LL Sep 9 00:21:34.941231 kubelet[1758]: E0909 00:21:34.939148 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:35.941771 kubelet[1758]: E0909 00:21:35.941721 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:36.947294 kubelet[1758]: E0909 00:21:36.945148 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:37.948759 kubelet[1758]: E0909 00:21:37.948643 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:38.849482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2244650999.mount: Deactivated successfully. Sep 9 00:21:38.949521 kubelet[1758]: E0909 00:21:38.949361 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:39.957511 kubelet[1758]: E0909 00:21:39.951837 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:40.954257 kubelet[1758]: E0909 00:21:40.954163 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:41.958507 kubelet[1758]: E0909 00:21:41.956575 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:42.957126 kubelet[1758]: E0909 00:21:42.957041 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:43.959972 kubelet[1758]: E0909 00:21:43.959742 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:44.640708 kubelet[1758]: E0909 00:21:44.640625 1758 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:44.960756 kubelet[1758]: E0909 00:21:44.960672 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:45.160566 containerd[1453]: time="2025-09-09T00:21:45.158943742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:45.164230 containerd[1453]: time="2025-09-09T00:21:45.164142856Z" level=info msg="stop pulling image 
registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Sep 9 00:21:45.172563 containerd[1453]: time="2025-09-09T00:21:45.169082325Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:45.191635 containerd[1453]: time="2025-09-09T00:21:45.189138770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:45.197361 containerd[1453]: time="2025-09-09T00:21:45.192055426Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 11.312392391s" Sep 9 00:21:45.197361 containerd[1453]: time="2025-09-09T00:21:45.192117515Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Sep 9 00:21:45.218143 containerd[1453]: time="2025-09-09T00:21:45.217221324Z" level=info msg="CreateContainer within sandbox \"a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Sep 9 00:21:45.293628 containerd[1453]: time="2025-09-09T00:21:45.293498204Z" level=info msg="CreateContainer within sandbox \"a455b4ab7fc37fcf3fe95d774e0ede3f5a89cc3043645230172524e204d2b13a\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"36068f8eefd51346389f261ac00e8635b24f3074151fbeab15f432bc8858591f\"" Sep 9 00:21:45.295139 containerd[1453]: time="2025-09-09T00:21:45.295103147Z" level=info msg="StartContainer for \"36068f8eefd51346389f261ac00e8635b24f3074151fbeab15f432bc8858591f\"" Sep 9 00:21:45.366767 systemd[1]: Started cri-containerd-36068f8eefd51346389f261ac00e8635b24f3074151fbeab15f432bc8858591f.scope - libcontainer container 36068f8eefd51346389f261ac00e8635b24f3074151fbeab15f432bc8858591f. 
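
Note on the image pull above: the Pulled image line reports 91,036,984 bytes completed in 11.312392391 s. A quick arithmetic sketch (assuming the reported size is the amount actually transferred, which the log does not state explicitly) gives the effective pull rate:

    # Throughput of the nfs-provisioner:v4.0.8 pull reported above.
    size_bytes = 91_036_984       # size "91036984" from the Pulled image line
    duration_s = 11.312392391     # "in 11.312392391s"

    rate_mb_s  = size_bytes / duration_s / 1e6     # decimal MB/s
    rate_mib_s = size_bytes / duration_s / 2**20   # binary MiB/s
    print(f"~{rate_mb_s:.1f} MB/s (~{rate_mib_s:.1f} MiB/s)")
    # ~8.0 MB/s (~7.7 MiB/s)
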
Sep 9 00:21:45.447535 containerd[1453]: time="2025-09-09T00:21:45.445434032Z" level=info msg="StartContainer for \"36068f8eefd51346389f261ac00e8635b24f3074151fbeab15f432bc8858591f\" returns successfully" Sep 9 00:21:45.963597 kubelet[1758]: E0909 00:21:45.961723 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:46.964353 kubelet[1758]: E0909 00:21:46.964256 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:47.966504 kubelet[1758]: E0909 00:21:47.965729 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:48.966592 kubelet[1758]: E0909 00:21:48.966449 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:49.969826 kubelet[1758]: E0909 00:21:49.967458 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:50.974946 kubelet[1758]: E0909 00:21:50.970872 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:51.174136 kubelet[1758]: I0909 00:21:51.172769 1758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=7.855427311 podStartE2EDuration="19.172737104s" podCreationTimestamp="2025-09-09 00:21:32 +0000 UTC" firstStartedPulling="2025-09-09 00:21:33.877253941 +0000 UTC m=+70.514169404" lastFinishedPulling="2025-09-09 00:21:45.194563734 +0000 UTC m=+81.831479197" observedRunningTime="2025-09-09 00:21:46.508718885 +0000 UTC m=+83.145634358" watchObservedRunningTime="2025-09-09 00:21:51.172737104 +0000 UTC m=+87.809652567" Sep 9 00:21:51.238720 systemd[1]: Created slice kubepods-besteffort-pod623e3c18_c582_44a4_aad6_fd2e39c9cd33.slice - libcontainer container kubepods-besteffort-pod623e3c18_c582_44a4_aad6_fd2e39c9cd33.slice. Sep 9 00:21:51.342685 kubelet[1758]: I0909 00:21:51.342132 1758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw424\" (UniqueName: \"kubernetes.io/projected/623e3c18-c582-44a4-aad6-fd2e39c9cd33-kube-api-access-fw424\") pod \"test-pod-1\" (UID: \"623e3c18-c582-44a4-aad6-fd2e39c9cd33\") " pod="default/test-pod-1" Sep 9 00:21:51.342685 kubelet[1758]: I0909 00:21:51.342221 1758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0d26af49-f88f-4a19-8a35-a8d87c6be5d1\" (UniqueName: \"kubernetes.io/nfs/623e3c18-c582-44a4-aad6-fd2e39c9cd33-pvc-0d26af49-f88f-4a19-8a35-a8d87c6be5d1\") pod \"test-pod-1\" (UID: \"623e3c18-c582-44a4-aad6-fd2e39c9cd33\") " pod="default/test-pod-1" Sep 9 00:21:51.599604 kernel: FS-Cache: Loaded Sep 9 00:21:51.859974 kernel: RPC: Registered named UNIX socket transport module. Sep 9 00:21:51.860143 kernel: RPC: Registered udp transport module. Sep 9 00:21:51.860179 kernel: RPC: Registered tcp transport module. Sep 9 00:21:51.861387 kernel: RPC: Registered tcp-with-tls transport module. Sep 9 00:21:51.862229 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
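
Note on the recurring kubelet error: the "Unable to read config path" message from file_linux.go repeats roughly once per second throughout this window because the static-pod manifest directory kubelet polls (its staticPodPath) does not exist; it is informational and harmless on a node that runs no static pods. A minimal sketch of one way to quiet it, assuming the path /etc/kubernetes/manifests shown in the log matches this node's kubelet configuration and that creating an empty directory is acceptable here:

    import pathlib

    # Directory kubelet polls for static pod manifests; the path is taken from
    # the log line itself and assumed to be this node's configured staticPodPath.
    manifests = pathlib.Path("/etc/kubernetes/manifests")
    manifests.mkdir(parents=True, exist_ok=True)
    print(f"{manifests} exists: {manifests.is_dir()}")
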
Sep 9 00:21:51.976215 kubelet[1758]: E0909 00:21:51.975697 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:52.578744 kernel: NFS: Registering the id_resolver key type Sep 9 00:21:52.578916 kernel: Key type id_resolver registered Sep 9 00:21:52.578943 kernel: Key type id_legacy registered Sep 9 00:21:52.676809 nfsidmap[3528]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Sep 9 00:21:52.694312 nfsidmap[3531]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Sep 9 00:21:52.755185 containerd[1453]: time="2025-09-09T00:21:52.755123358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:623e3c18-c582-44a4-aad6-fd2e39c9cd33,Namespace:default,Attempt:0,}" Sep 9 00:21:52.976617 kubelet[1758]: E0909 00:21:52.976525 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:53.140284 systemd-networkd[1376]: cali5ec59c6bf6e: Link UP Sep 9 00:21:53.140655 systemd-networkd[1376]: cali5ec59c6bf6e: Gained carrier Sep 9 00:21:53.197799 containerd[1453]: 2025-09-09 00:21:52.910 [INFO][3539] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.26-k8s-test--pod--1-eth0 default 623e3c18-c582-44a4-aad6-fd2e39c9cd33 1542 0 2025-09-09 00:21:33 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.26 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.26-k8s-test--pod--1-" Sep 9 00:21:53.197799 containerd[1453]: 2025-09-09 00:21:52.910 [INFO][3539] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.26-k8s-test--pod--1-eth0" Sep 9 00:21:53.197799 containerd[1453]: 2025-09-09 00:21:52.960 [INFO][3549] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742" HandleID="k8s-pod-network.67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742" Workload="10.0.0.26-k8s-test--pod--1-eth0" Sep 9 00:21:53.197799 containerd[1453]: 2025-09-09 00:21:52.962 [INFO][3549] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742" HandleID="k8s-pod-network.67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742" Workload="10.0.0.26-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a54b0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.26", "pod":"test-pod-1", "timestamp":"2025-09-09 00:21:52.960031426 +0000 UTC"}, Hostname:"10.0.0.26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:21:53.197799 containerd[1453]: 2025-09-09 00:21:52.964 [INFO][3549] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 9 00:21:53.197799 containerd[1453]: 2025-09-09 00:21:52.964 [INFO][3549] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:21:53.197799 containerd[1453]: 2025-09-09 00:21:52.964 [INFO][3549] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.26' Sep 9 00:21:53.197799 containerd[1453]: 2025-09-09 00:21:52.989 [INFO][3549] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742" host="10.0.0.26" Sep 9 00:21:53.197799 containerd[1453]: 2025-09-09 00:21:53.015 [INFO][3549] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.26" Sep 9 00:21:53.197799 containerd[1453]: 2025-09-09 00:21:53.035 [INFO][3549] ipam/ipam.go 511: Trying affinity for 192.168.103.128/26 host="10.0.0.26" Sep 9 00:21:53.197799 containerd[1453]: 2025-09-09 00:21:53.044 [INFO][3549] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.128/26 host="10.0.0.26" Sep 9 00:21:53.197799 containerd[1453]: 2025-09-09 00:21:53.055 [INFO][3549] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.128/26 host="10.0.0.26" Sep 9 00:21:53.197799 containerd[1453]: 2025-09-09 00:21:53.055 [INFO][3549] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.103.128/26 handle="k8s-pod-network.67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742" host="10.0.0.26" Sep 9 00:21:53.197799 containerd[1453]: 2025-09-09 00:21:53.061 [INFO][3549] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742 Sep 9 00:21:53.197799 containerd[1453]: 2025-09-09 00:21:53.074 [INFO][3549] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.103.128/26 handle="k8s-pod-network.67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742" host="10.0.0.26" Sep 9 00:21:53.197799 containerd[1453]: 2025-09-09 00:21:53.094 [INFO][3549] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.103.132/26] block=192.168.103.128/26 handle="k8s-pod-network.67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742" host="10.0.0.26" Sep 9 00:21:53.197799 containerd[1453]: 2025-09-09 00:21:53.094 [INFO][3549] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.132/26] handle="k8s-pod-network.67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742" host="10.0.0.26" Sep 9 00:21:53.197799 containerd[1453]: 2025-09-09 00:21:53.094 [INFO][3549] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:21:53.197799 containerd[1453]: 2025-09-09 00:21:53.094 [INFO][3549] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.132/26] IPv6=[] ContainerID="67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742" HandleID="k8s-pod-network.67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742" Workload="10.0.0.26-k8s-test--pod--1-eth0" Sep 9 00:21:53.197799 containerd[1453]: 2025-09-09 00:21:53.115 [INFO][3539] cni-plugin/k8s.go 418: Populated endpoint ContainerID="67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.26-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.26-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"623e3c18-c582-44a4-aad6-fd2e39c9cd33", ResourceVersion:"1542", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.26", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:21:53.199106 containerd[1453]: 2025-09-09 00:21:53.121 [INFO][3539] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.132/32] ContainerID="67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.26-k8s-test--pod--1-eth0" Sep 9 00:21:53.199106 containerd[1453]: 2025-09-09 00:21:53.130 [INFO][3539] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.26-k8s-test--pod--1-eth0" Sep 9 00:21:53.199106 containerd[1453]: 2025-09-09 00:21:53.140 [INFO][3539] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.26-k8s-test--pod--1-eth0" Sep 9 00:21:53.199106 containerd[1453]: 2025-09-09 00:21:53.141 [INFO][3539] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.26-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.26-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"623e3c18-c582-44a4-aad6-fd2e39c9cd33", ResourceVersion:"1542", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 21, 33, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.26", ContainerID:"67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"92:99:f0:57:9f:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:21:53.199106 containerd[1453]: 2025-09-09 00:21:53.176 [INFO][3539] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.26-k8s-test--pod--1-eth0" Sep 9 00:21:53.274368 containerd[1453]: time="2025-09-09T00:21:53.272383017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:21:53.274368 containerd[1453]: time="2025-09-09T00:21:53.272533423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:21:53.274368 containerd[1453]: time="2025-09-09T00:21:53.272587976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:21:53.274368 containerd[1453]: time="2025-09-09T00:21:53.274123965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:21:53.348998 systemd[1]: Started cri-containerd-67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742.scope - libcontainer container 67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742. 
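
Note on the IPAM assignments: both pods on this node were allocated from the same block; the ipam/ipam.go lines above show 192.168.103.131/26 claimed for nfs-server-provisioner-0 and 192.168.103.132/26 for test-pod-1, each from block 192.168.103.128/26 for which host 10.0.0.26 holds an affinity. A small sketch confirming both addresses fall inside that 64-address /26:

    import ipaddress

    # Block and addresses as reported by ipam/ipam.go for host 10.0.0.26.
    block = ipaddress.ip_network("192.168.103.128/26")
    assigned = [
        ipaddress.ip_address("192.168.103.131"),  # nfs-server-provisioner-0
        ipaddress.ip_address("192.168.103.132"),  # test-pod-1
    ]

    print(f"block {block} holds {block.num_addresses} addresses")
    for ip in assigned:
        print(f"{ip} in {block}: {ip in block}")
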
Sep 9 00:21:53.388915 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:21:53.458222 containerd[1453]: time="2025-09-09T00:21:53.458079691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:623e3c18-c582-44a4-aad6-fd2e39c9cd33,Namespace:default,Attempt:0,} returns sandbox id \"67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742\"" Sep 9 00:21:53.466801 containerd[1453]: time="2025-09-09T00:21:53.466746771Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 9 00:21:53.930110 containerd[1453]: time="2025-09-09T00:21:53.930012696Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:21:53.932768 containerd[1453]: time="2025-09-09T00:21:53.932559095Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Sep 9 00:21:53.935788 containerd[1453]: time="2025-09-09T00:21:53.935698440Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530\", size \"73307688\" in 468.898829ms" Sep 9 00:21:53.935788 containerd[1453]: time="2025-09-09T00:21:53.935761260Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48\"" Sep 9 00:21:53.960765 containerd[1453]: time="2025-09-09T00:21:53.958719448Z" level=info msg="CreateContainer within sandbox \"67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742\" for container &ContainerMetadata{Name:test,Attempt:0,}" Sep 9 00:21:53.976988 kubelet[1758]: E0909 00:21:53.976873 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:54.012629 containerd[1453]: time="2025-09-09T00:21:54.012138392Z" level=info msg="CreateContainer within sandbox \"67bbca8b56460fab40c3063b5e2615c72731423852e49b657f9c1e4359cb2742\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"e239c8878ef41d393670ce9848b83f5c7aaaed059ceb24932dc3bc8c7e29bfa1\"" Sep 9 00:21:54.013336 containerd[1453]: time="2025-09-09T00:21:54.013295031Z" level=info msg="StartContainer for \"e239c8878ef41d393670ce9848b83f5c7aaaed059ceb24932dc3bc8c7e29bfa1\"" Sep 9 00:21:54.074486 systemd[1]: run-containerd-runc-k8s.io-e239c8878ef41d393670ce9848b83f5c7aaaed059ceb24932dc3bc8c7e29bfa1-runc.2NWVy7.mount: Deactivated successfully. Sep 9 00:21:54.092932 systemd[1]: Started cri-containerd-e239c8878ef41d393670ce9848b83f5c7aaaed059ceb24932dc3bc8c7e29bfa1.scope - libcontainer container e239c8878ef41d393670ce9848b83f5c7aaaed059ceb24932dc3bc8c7e29bfa1. 
Sep 9 00:21:54.158280 containerd[1453]: time="2025-09-09T00:21:54.157231427Z" level=info msg="StartContainer for \"e239c8878ef41d393670ce9848b83f5c7aaaed059ceb24932dc3bc8c7e29bfa1\" returns successfully" Sep 9 00:21:54.454457 systemd-networkd[1376]: cali5ec59c6bf6e: Gained IPv6LL Sep 9 00:21:54.578786 kubelet[1758]: I0909 00:21:54.578684 1758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=21.10368428 podStartE2EDuration="21.578658054s" podCreationTimestamp="2025-09-09 00:21:33 +0000 UTC" firstStartedPulling="2025-09-09 00:21:53.466258232 +0000 UTC m=+90.103173696" lastFinishedPulling="2025-09-09 00:21:53.941232007 +0000 UTC m=+90.578147470" observedRunningTime="2025-09-09 00:21:54.577778392 +0000 UTC m=+91.214693865" watchObservedRunningTime="2025-09-09 00:21:54.578658054 +0000 UTC m=+91.215573517" Sep 9 00:21:54.979740 kubelet[1758]: E0909 00:21:54.979584 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:55.979982 kubelet[1758]: E0909 00:21:55.979827 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:56.980780 kubelet[1758]: E0909 00:21:56.980682 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:57.981821 kubelet[1758]: E0909 00:21:57.981725 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:21:58.982344 kubelet[1758]: E0909 00:21:58.982216 1758 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
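
Note on the startup-latency lines: the durations in the pod_startup_latency_tracker entries can be re-derived from the timestamps they carry. For test-pod-1, podStartE2EDuration matches watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to equal that interval minus the image-pull window (lastFinishedPulling minus firstStartedPulling); the same relationship holds for the nfs-server-provisioner-0 entry earlier. A minimal sketch re-checking the test-pod-1 numbers (the pull-time subtraction is an inference from the values, not something the log states):

    from datetime import datetime, timezone

    def ts(s: str) -> datetime:
        # Truncate the log's nanosecond timestamps to microseconds for strptime.
        return datetime.strptime(s[:26], "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

    created   = datetime(2025, 9, 9, 0, 21, 33, tzinfo=timezone.utc)   # podCreationTimestamp
    running   = ts("2025-09-09 00:21:54.578658054")                    # watchObservedRunningTime
    pull_from = ts("2025-09-09 00:21:53.466258232")                    # firstStartedPulling
    pull_to   = ts("2025-09-09 00:21:53.941232007")                    # lastFinishedPulling

    e2e = (running - created).total_seconds()
    slo = e2e - (pull_to - pull_from).total_seconds()
    print(f"E2E ~{e2e:.6f}s, SLO ~{slo:.6f}s")
    # ~21.578658s and ~21.103684s, matching the logged 21.578658054s / 21.10368428s
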