May 9 00:36:34.973311 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu May 8 22:52:37 -00 2025 May 9 00:36:34.973342 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56b660b06ded103a15fe25ebfbdecb898a20f374e429fec465c69b1a75d59c4b May 9 00:36:34.973354 kernel: BIOS-provided physical RAM map: May 9 00:36:34.973361 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 9 00:36:34.973367 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable May 9 00:36:34.973373 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 9 00:36:34.973386 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable May 9 00:36:34.973392 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 9 00:36:34.973398 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable May 9 00:36:34.973405 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS May 9 00:36:34.973414 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable May 9 00:36:34.973420 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved May 9 00:36:34.973429 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 May 9 00:36:34.973436 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved May 9 00:36:34.973446 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data May 9 00:36:34.973453 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 9 00:36:34.973463 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable May 9 00:36:34.973470 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved May 9 00:36:34.973476 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 9 00:36:34.973483 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 9 00:36:34.973489 kernel: NX (Execute Disable) protection: active May 9 00:36:34.973497 kernel: APIC: Static calls initialized May 9 00:36:34.973503 kernel: efi: EFI v2.7 by EDK II May 9 00:36:34.973510 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 May 9 00:36:34.973517 kernel: SMBIOS 2.8 present. 
May 9 00:36:34.973537 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 May 9 00:36:34.973544 kernel: Hypervisor detected: KVM May 9 00:36:34.973554 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 9 00:36:34.973561 kernel: kvm-clock: using sched offset of 5126057399 cycles May 9 00:36:34.973568 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 9 00:36:34.973575 kernel: tsc: Detected 2794.748 MHz processor May 9 00:36:34.973582 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 9 00:36:34.973589 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 9 00:36:34.973596 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 May 9 00:36:34.973603 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs May 9 00:36:34.973610 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 9 00:36:34.973620 kernel: Using GB pages for direct mapping May 9 00:36:34.973627 kernel: Secure boot disabled May 9 00:36:34.973634 kernel: ACPI: Early table checksum verification disabled May 9 00:36:34.973641 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) May 9 00:36:34.973652 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) May 9 00:36:34.973659 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:36:34.973666 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:36:34.973676 kernel: ACPI: FACS 0x000000009CBDD000 000040 May 9 00:36:34.973684 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:36:34.973703 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:36:34.973711 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:36:34.973718 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:36:34.973726 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) May 9 00:36:34.973733 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] May 9 00:36:34.973743 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] May 9 00:36:34.973750 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] May 9 00:36:34.973758 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] May 9 00:36:34.973765 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] May 9 00:36:34.973772 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] May 9 00:36:34.973779 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] May 9 00:36:34.973786 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] May 9 00:36:34.973793 kernel: No NUMA configuration found May 9 00:36:34.973803 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] May 9 00:36:34.973813 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] May 9 00:36:34.973820 kernel: Zone ranges: May 9 00:36:34.973828 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 9 00:36:34.973835 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] May 9 00:36:34.973842 kernel: Normal empty May 9 00:36:34.973849 kernel: Movable zone start for each node May 9 00:36:34.973856 kernel: Early memory node ranges May 9 00:36:34.973863 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] May 9 00:36:34.973870 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] May 9 00:36:34.973878 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] May 9 00:36:34.973888 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] May 9 00:36:34.973895 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] May 9 00:36:34.973902 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] May 9 00:36:34.973911 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] May 9 00:36:34.973918 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 9 00:36:34.973925 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 9 00:36:34.973933 kernel: On node 0, zone DMA: 8 pages in unavailable ranges May 9 00:36:34.973940 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 9 00:36:34.973947 kernel: On node 0, zone DMA: 240 pages in unavailable ranges May 9 00:36:34.973957 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges May 9 00:36:34.973964 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges May 9 00:36:34.973972 kernel: ACPI: PM-Timer IO Port: 0x608 May 9 00:36:34.973979 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 9 00:36:34.973986 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 9 00:36:34.973993 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 9 00:36:34.974000 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 9 00:36:34.974007 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 9 00:36:34.974015 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 9 00:36:34.974025 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 9 00:36:34.974032 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 9 00:36:34.974039 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 9 00:36:34.974046 kernel: TSC deadline timer available May 9 00:36:34.974062 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 9 00:36:34.974069 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 9 00:36:34.974076 kernel: kvm-guest: KVM setup pv remote TLB flush May 9 00:36:34.974083 kernel: kvm-guest: setup PV sched yield May 9 00:36:34.974092 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices May 9 00:36:34.974104 kernel: Booting paravirtualized kernel on KVM May 9 00:36:34.974112 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 9 00:36:34.974120 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 May 9 00:36:34.974129 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 May 9 00:36:34.974137 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 May 9 00:36:34.974146 kernel: pcpu-alloc: [0] 0 1 2 3 May 9 00:36:34.974153 kernel: kvm-guest: PV spinlocks enabled May 9 00:36:34.974161 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 9 00:36:34.974176 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56b660b06ded103a15fe25ebfbdecb898a20f374e429fec465c69b1a75d59c4b May 9 00:36:34.974197 kernel: Unknown kernel command line 
parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 9 00:36:34.974214 kernel: random: crng init done May 9 00:36:34.974222 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 9 00:36:34.974229 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 9 00:36:34.974237 kernel: Fallback order for Node 0: 0 May 9 00:36:34.974244 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 May 9 00:36:34.974251 kernel: Policy zone: DMA32 May 9 00:36:34.974259 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 9 00:36:34.974270 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 166140K reserved, 0K cma-reserved) May 9 00:36:34.974277 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 9 00:36:34.974284 kernel: ftrace: allocating 37944 entries in 149 pages May 9 00:36:34.974292 kernel: ftrace: allocated 149 pages with 4 groups May 9 00:36:34.974299 kernel: Dynamic Preempt: voluntary May 9 00:36:34.974315 kernel: rcu: Preemptible hierarchical RCU implementation. May 9 00:36:34.974325 kernel: rcu: RCU event tracing is enabled. May 9 00:36:34.974333 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 9 00:36:34.974341 kernel: Trampoline variant of Tasks RCU enabled. May 9 00:36:34.974349 kernel: Rude variant of Tasks RCU enabled. May 9 00:36:34.974356 kernel: Tracing variant of Tasks RCU enabled. May 9 00:36:34.974364 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 9 00:36:34.974374 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 9 00:36:34.974382 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 9 00:36:34.974392 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 9 00:36:34.974400 kernel: Console: colour dummy device 80x25 May 9 00:36:34.974407 kernel: printk: console [ttyS0] enabled May 9 00:36:34.974418 kernel: ACPI: Core revision 20230628 May 9 00:36:34.974426 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 9 00:36:34.974433 kernel: APIC: Switch to symmetric I/O mode setup May 9 00:36:34.974441 kernel: x2apic enabled May 9 00:36:34.974449 kernel: APIC: Switched APIC routing to: physical x2apic May 9 00:36:34.974456 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 9 00:36:34.974464 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 9 00:36:34.974472 kernel: kvm-guest: setup PV IPIs May 9 00:36:34.974479 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 9 00:36:34.974490 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 9 00:36:34.974497 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) May 9 00:36:34.974505 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 9 00:36:34.974512 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 9 00:36:34.974520 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 9 00:36:34.974542 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 9 00:36:34.974549 kernel: Spectre V2 : Mitigation: Retpolines May 9 00:36:34.974557 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 9 00:36:34.974564 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 9 00:36:34.974576 kernel: RETBleed: Mitigation: untrained return thunk May 9 00:36:34.974584 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 9 00:36:34.974591 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 9 00:36:34.974599 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! May 9 00:36:34.974614 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 9 00:36:34.974629 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 9 00:36:34.974638 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 9 00:36:34.974645 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 9 00:36:34.974657 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 9 00:36:34.974664 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 9 00:36:34.974672 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 9 00:36:34.974680 kernel: Freeing SMP alternatives memory: 32K May 9 00:36:34.974695 kernel: pid_max: default: 32768 minimum: 301 May 9 00:36:34.974703 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 9 00:36:34.974711 kernel: landlock: Up and running. May 9 00:36:34.974719 kernel: SELinux: Initializing. May 9 00:36:34.974727 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 9 00:36:34.974737 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 9 00:36:34.974745 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 9 00:36:34.974752 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 9 00:36:34.974760 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 9 00:36:34.974768 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 9 00:36:34.974776 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 9 00:36:34.974783 kernel: ... version: 0 May 9 00:36:34.974791 kernel: ... bit width: 48 May 9 00:36:34.974798 kernel: ... generic registers: 6 May 9 00:36:34.974808 kernel: ... value mask: 0000ffffffffffff May 9 00:36:34.974816 kernel: ... max period: 00007fffffffffff May 9 00:36:34.974824 kernel: ... fixed-purpose events: 0 May 9 00:36:34.974831 kernel: ... event mask: 000000000000003f May 9 00:36:34.974839 kernel: signal: max sigframe size: 1776 May 9 00:36:34.974846 kernel: rcu: Hierarchical SRCU implementation. May 9 00:36:34.974854 kernel: rcu: Max phase no-delay instances is 400. 
May 9 00:36:34.974862 kernel: smp: Bringing up secondary CPUs ... May 9 00:36:34.974869 kernel: smpboot: x86: Booting SMP configuration: May 9 00:36:34.974880 kernel: .... node #0, CPUs: #1 #2 #3 May 9 00:36:34.974887 kernel: smp: Brought up 1 node, 4 CPUs May 9 00:36:34.974895 kernel: smpboot: Max logical packages: 1 May 9 00:36:34.974905 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 9 00:36:34.974913 kernel: devtmpfs: initialized May 9 00:36:34.974920 kernel: x86/mm: Memory block size: 128MB May 9 00:36:34.974928 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) May 9 00:36:34.974936 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) May 9 00:36:34.974943 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) May 9 00:36:34.974954 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) May 9 00:36:34.974961 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) May 9 00:36:34.974969 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 9 00:36:34.974977 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 9 00:36:34.974984 kernel: pinctrl core: initialized pinctrl subsystem May 9 00:36:34.974992 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 9 00:36:34.975000 kernel: audit: initializing netlink subsys (disabled) May 9 00:36:34.975007 kernel: audit: type=2000 audit(1746750993.294:1): state=initialized audit_enabled=0 res=1 May 9 00:36:34.975015 kernel: thermal_sys: Registered thermal governor 'step_wise' May 9 00:36:34.975025 kernel: thermal_sys: Registered thermal governor 'user_space' May 9 00:36:34.975035 kernel: cpuidle: using governor menu May 9 00:36:34.975043 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 9 00:36:34.975050 kernel: dca service started, version 1.12.1 May 9 00:36:34.975058 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 9 00:36:34.975066 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry May 9 00:36:34.975073 kernel: PCI: Using configuration type 1 for base access May 9 00:36:34.975081 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 9 00:36:34.975089 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 9 00:36:34.975101 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 9 00:36:34.975109 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 9 00:36:34.975116 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 9 00:36:34.975124 kernel: ACPI: Added _OSI(Module Device) May 9 00:36:34.975131 kernel: ACPI: Added _OSI(Processor Device) May 9 00:36:34.975139 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 9 00:36:34.975147 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 9 00:36:34.975154 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 9 00:36:34.975162 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 9 00:36:34.975172 kernel: ACPI: Interpreter enabled May 9 00:36:34.975179 kernel: ACPI: PM: (supports S0 S3 S5) May 9 00:36:34.975187 kernel: ACPI: Using IOAPIC for interrupt routing May 9 00:36:34.975195 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 9 00:36:34.975202 kernel: PCI: Using E820 reservations for host bridge windows May 9 00:36:34.975210 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 9 00:36:34.975217 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 9 00:36:34.975448 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 9 00:36:34.975622 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 9 00:36:34.975770 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 9 00:36:34.975784 kernel: PCI host bridge to bus 0000:00 May 9 00:36:34.975935 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 9 00:36:34.976056 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 9 00:36:34.976309 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 9 00:36:34.976429 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 9 00:36:34.976566 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 9 00:36:34.976768 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] May 9 00:36:34.976890 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 9 00:36:34.977129 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 9 00:36:34.977296 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 9 00:36:34.977425 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] May 9 00:36:34.977613 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] May 9 00:36:34.977763 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 9 00:36:34.977918 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb May 9 00:36:34.978052 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 9 00:36:34.978194 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 9 00:36:34.978324 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] May 9 00:36:34.978451 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] May 9 00:36:34.978605 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] May 9 00:36:34.978788 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 9 00:36:34.978919 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] May 9 00:36:34.979047 kernel: pci 0000:00:03.0: reg 0x14: 
[mem 0xc1042000-0xc1042fff] May 9 00:36:34.979175 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] May 9 00:36:34.979322 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 9 00:36:34.979451 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] May 9 00:36:34.979610 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] May 9 00:36:34.979792 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] May 9 00:36:34.979920 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] May 9 00:36:34.980064 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 9 00:36:34.980199 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 9 00:36:34.980333 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 9 00:36:34.980479 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] May 9 00:36:34.980634 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] May 9 00:36:34.980780 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 9 00:36:34.980919 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] May 9 00:36:34.980931 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 9 00:36:34.980939 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 9 00:36:34.980947 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 9 00:36:34.980955 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 9 00:36:34.980968 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 9 00:36:34.980975 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 9 00:36:34.980983 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 9 00:36:34.980991 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 9 00:36:34.980999 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 9 00:36:34.981006 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 9 00:36:34.981014 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 9 00:36:34.981022 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 9 00:36:34.981029 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 9 00:36:34.981040 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 9 00:36:34.981047 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 9 00:36:34.981055 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 9 00:36:34.981063 kernel: iommu: Default domain type: Translated May 9 00:36:34.981071 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 9 00:36:34.981078 kernel: efivars: Registered efivars operations May 9 00:36:34.981086 kernel: PCI: Using ACPI for IRQ routing May 9 00:36:34.981094 kernel: PCI: pci_cache_line_size set to 64 bytes May 9 00:36:34.981101 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] May 9 00:36:34.981112 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] May 9 00:36:34.981123 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] May 9 00:36:34.981131 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] May 9 00:36:34.981258 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 9 00:36:34.981382 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 9 00:36:34.981508 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 9 00:36:34.981518 kernel: vgaarb: loaded May 9 00:36:34.981540 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 
8, 0 May 9 00:36:34.981548 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 9 00:36:34.981560 kernel: clocksource: Switched to clocksource kvm-clock May 9 00:36:34.981568 kernel: VFS: Disk quotas dquot_6.6.0 May 9 00:36:34.981576 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 9 00:36:34.981584 kernel: pnp: PnP ACPI init May 9 00:36:34.981739 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 9 00:36:34.981751 kernel: pnp: PnP ACPI: found 6 devices May 9 00:36:34.981759 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 9 00:36:34.981767 kernel: NET: Registered PF_INET protocol family May 9 00:36:34.981779 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 9 00:36:34.981787 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 9 00:36:34.981795 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 9 00:36:34.981802 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 9 00:36:34.981810 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 9 00:36:34.981818 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 9 00:36:34.981825 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 9 00:36:34.983245 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 9 00:36:34.983254 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 9 00:36:34.983266 kernel: NET: Registered PF_XDP protocol family May 9 00:36:34.983405 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window May 9 00:36:34.983637 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] May 9 00:36:34.983767 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 9 00:36:34.983883 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 9 00:36:34.983995 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 9 00:36:34.984107 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 9 00:36:34.984227 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 9 00:36:34.984346 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] May 9 00:36:34.984356 kernel: PCI: CLS 0 bytes, default 64 May 9 00:36:34.984364 kernel: Initialise system trusted keyrings May 9 00:36:34.984372 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 9 00:36:34.984379 kernel: Key type asymmetric registered May 9 00:36:34.984387 kernel: Asymmetric key parser 'x509' registered May 9 00:36:34.984395 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 9 00:36:34.984403 kernel: io scheduler mq-deadline registered May 9 00:36:34.984411 kernel: io scheduler kyber registered May 9 00:36:34.984421 kernel: io scheduler bfq registered May 9 00:36:34.984429 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 9 00:36:34.984437 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 9 00:36:34.984445 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 9 00:36:34.984453 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 9 00:36:34.984461 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 9 00:36:34.984469 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 9 00:36:34.984476 kernel: i8042: PNP: PS/2 
Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 9 00:36:34.984484 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 9 00:36:34.984495 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 9 00:36:34.984673 kernel: rtc_cmos 00:04: RTC can wake from S4 May 9 00:36:34.984809 kernel: rtc_cmos 00:04: registered as rtc0 May 9 00:36:34.984819 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 9 00:36:34.984935 kernel: rtc_cmos 00:04: setting system clock to 2025-05-09T00:36:34 UTC (1746750994) May 9 00:36:34.985051 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 9 00:36:34.985062 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 9 00:36:34.985070 kernel: efifb: probing for efifb May 9 00:36:34.985084 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k May 9 00:36:34.985091 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 May 9 00:36:34.985099 kernel: efifb: scrolling: redraw May 9 00:36:34.985107 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 May 9 00:36:34.985115 kernel: Console: switching to colour frame buffer device 100x37 May 9 00:36:34.985123 kernel: fb0: EFI VGA frame buffer device May 9 00:36:34.985149 kernel: pstore: Using crash dump compression: deflate May 9 00:36:34.985160 kernel: pstore: Registered efi_pstore as persistent store backend May 9 00:36:34.985167 kernel: NET: Registered PF_INET6 protocol family May 9 00:36:34.985178 kernel: Segment Routing with IPv6 May 9 00:36:34.985186 kernel: In-situ OAM (IOAM) with IPv6 May 9 00:36:34.985194 kernel: NET: Registered PF_PACKET protocol family May 9 00:36:34.985202 kernel: Key type dns_resolver registered May 9 00:36:34.985210 kernel: IPI shorthand broadcast: enabled May 9 00:36:34.985218 kernel: sched_clock: Marking stable (1268003960, 143915519)->(1435837771, -23918292) May 9 00:36:34.985226 kernel: registered taskstats version 1 May 9 00:36:34.985234 kernel: Loading compiled-in X.509 certificates May 9 00:36:34.985242 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: fe5c896a3ca06bb89ebdfb7ed85f611806e4c1cc' May 9 00:36:34.985253 kernel: Key type .fscrypt registered May 9 00:36:34.985261 kernel: Key type fscrypt-provisioning registered May 9 00:36:34.985269 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 9 00:36:34.985277 kernel: ima: Allocated hash algorithm: sha1 May 9 00:36:34.985285 kernel: ima: No architecture policies found May 9 00:36:34.985293 kernel: clk: Disabling unused clocks May 9 00:36:34.985301 kernel: Freeing unused kernel image (initmem) memory: 42864K May 9 00:36:34.985309 kernel: Write protecting the kernel read-only data: 36864k May 9 00:36:34.985320 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 9 00:36:34.985328 kernel: Run /init as init process May 9 00:36:34.985336 kernel: with arguments: May 9 00:36:34.985343 kernel: /init May 9 00:36:34.985351 kernel: with environment: May 9 00:36:34.985359 kernel: HOME=/ May 9 00:36:34.985367 kernel: TERM=linux May 9 00:36:34.985375 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 9 00:36:34.985389 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 00:36:34.985402 systemd[1]: Detected virtualization kvm. May 9 00:36:34.985411 systemd[1]: Detected architecture x86-64. May 9 00:36:34.985419 systemd[1]: Running in initrd. May 9 00:36:34.985427 systemd[1]: No hostname configured, using default hostname. May 9 00:36:34.985441 systemd[1]: Hostname set to <localhost>. May 9 00:36:34.985450 systemd[1]: Initializing machine ID from VM UUID. May 9 00:36:34.985458 systemd[1]: Queued start job for default target initrd.target. May 9 00:36:34.985467 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 00:36:34.985476 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 00:36:34.985485 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 9 00:36:34.985493 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 9 00:36:34.985502 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 9 00:36:34.985513 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 9 00:36:34.985570 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 9 00:36:34.985581 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 9 00:36:34.985590 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 00:36:34.985599 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 9 00:36:34.985607 systemd[1]: Reached target paths.target - Path Units. May 9 00:36:34.985616 systemd[1]: Reached target slices.target - Slice Units. May 9 00:36:34.985629 systemd[1]: Reached target swap.target - Swaps. May 9 00:36:34.985637 systemd[1]: Reached target timers.target - Timer Units. May 9 00:36:34.985646 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 9 00:36:34.985654 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 00:36:34.985663 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 9 00:36:34.985671 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 9 00:36:34.985680 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 9 00:36:34.985695 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 9 00:36:34.985704 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 9 00:36:34.985716 systemd[1]: Reached target sockets.target - Socket Units. May 9 00:36:34.985725 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 9 00:36:34.985733 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 9 00:36:34.985742 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 9 00:36:34.985750 systemd[1]: Starting systemd-fsck-usr.service... May 9 00:36:34.985759 systemd[1]: Starting systemd-journald.service - Journal Service... May 9 00:36:34.985767 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 9 00:36:34.985776 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:36:34.985787 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 9 00:36:34.985796 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 9 00:36:34.985804 systemd[1]: Finished systemd-fsck-usr.service. May 9 00:36:34.985835 systemd-journald[192]: Collecting audit messages is disabled. May 9 00:36:34.985858 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 9 00:36:34.985866 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:36:34.985875 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 00:36:34.985884 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 9 00:36:34.985895 systemd-journald[192]: Journal started May 9 00:36:34.985916 systemd-journald[192]: Runtime Journal (/run/log/journal/6e356017e5eb4da2b6062fd4fb6f7ef9) is 6.0M, max 48.3M, 42.2M free. May 9 00:36:34.968589 systemd-modules-load[195]: Inserted module 'overlay' May 9 00:36:34.987478 systemd[1]: Started systemd-journald.service - Journal Service. May 9 00:36:34.992942 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 9 00:36:34.998109 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 9 00:36:35.002918 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 9 00:36:35.002952 kernel: Bridge firewalling registered May 9 00:36:35.003639 systemd-modules-load[195]: Inserted module 'br_netfilter' May 9 00:36:35.006465 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 9 00:36:35.008994 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:36:35.011124 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 9 00:36:35.014157 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 00:36:35.015397 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 00:36:35.025411 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 00:36:35.029678 dracut-cmdline[222]: dracut-dracut-053 May 9 00:36:35.030965 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 9 00:36:35.033566 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56b660b06ded103a15fe25ebfbdecb898a20f374e429fec465c69b1a75d59c4b May 9 00:36:35.042751 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 00:36:35.077136 systemd-resolved[241]: Positive Trust Anchors: May 9 00:36:35.077161 systemd-resolved[241]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 9 00:36:35.077203 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 9 00:36:35.080514 systemd-resolved[241]: Defaulting to hostname 'linux'. May 9 00:36:35.082252 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 00:36:35.087307 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 9 00:36:35.134563 kernel: SCSI subsystem initialized May 9 00:36:35.144548 kernel: Loading iSCSI transport class v2.0-870. May 9 00:36:35.155553 kernel: iscsi: registered transport (tcp) May 9 00:36:35.178760 kernel: iscsi: registered transport (qla4xxx) May 9 00:36:35.178824 kernel: QLogic iSCSI HBA Driver May 9 00:36:35.232123 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 9 00:36:35.261779 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 9 00:36:35.288048 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 9 00:36:35.288126 kernel: device-mapper: uevent: version 1.0.3 May 9 00:36:35.289268 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 9 00:36:35.332558 kernel: raid6: avx2x4 gen() 29768 MB/s May 9 00:36:35.349546 kernel: raid6: avx2x2 gen() 30399 MB/s May 9 00:36:35.366684 kernel: raid6: avx2x1 gen() 23472 MB/s May 9 00:36:35.366707 kernel: raid6: using algorithm avx2x2 gen() 30399 MB/s May 9 00:36:35.384669 kernel: raid6: .... xor() 19675 MB/s, rmw enabled May 9 00:36:35.384694 kernel: raid6: using avx2x2 recovery algorithm May 9 00:36:35.405575 kernel: xor: automatically using best checksumming function avx May 9 00:36:35.563588 kernel: Btrfs loaded, zoned=no, fsverity=no May 9 00:36:35.576977 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 9 00:36:35.587752 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 00:36:35.602049 systemd-udevd[413]: Using default interface naming scheme 'v255'. May 9 00:36:35.607103 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 00:36:35.624796 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
May 9 00:36:35.641023 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation May 9 00:36:35.676225 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 9 00:36:35.689812 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 00:36:35.760317 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 9 00:36:35.766767 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 9 00:36:35.782080 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 9 00:36:35.786033 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 9 00:36:35.791682 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 00:36:35.797039 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 9 00:36:35.793125 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 00:36:35.803754 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 9 00:36:35.814549 kernel: cryptd: max_cpu_qlen set to 1000 May 9 00:36:35.821448 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 9 00:36:35.827600 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 9 00:36:35.830545 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 00:36:35.830729 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:36:35.833587 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 00:36:35.835999 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 00:36:35.844602 kernel: AVX2 version of gcm_enc/dec engaged. May 9 00:36:35.836300 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:36:35.849693 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 9 00:36:35.849714 kernel: GPT:9289727 != 19775487 May 9 00:36:35.849729 kernel: GPT:Alternate GPT header not at the end of the disk. May 9 00:36:35.849743 kernel: GPT:9289727 != 19775487 May 9 00:36:35.849756 kernel: GPT: Use GNU Parted to correct GPT errors. May 9 00:36:35.849769 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:36:35.838997 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:36:35.851017 kernel: libata version 3.00 loaded. May 9 00:36:35.851918 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:36:35.854210 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 00:36:35.854346 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:36:35.864688 kernel: AES CTR mode by8 optimization enabled May 9 00:36:35.864718 kernel: ahci 0000:00:1f.2: version 3.0 May 9 00:36:35.864972 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 9 00:36:35.858768 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 9 00:36:35.868574 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 9 00:36:35.868833 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 9 00:36:35.878639 kernel: scsi host0: ahci May 9 00:36:35.879859 kernel: scsi host1: ahci May 9 00:36:35.880115 kernel: BTRFS: device fsid 8d57db23-a0fc-4362-9769-38fbda5747c1 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (475) May 9 00:36:35.881053 kernel: scsi host2: ahci May 9 00:36:35.887571 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (460) May 9 00:36:35.889584 kernel: scsi host3: ahci May 9 00:36:35.889835 kernel: scsi host4: ahci May 9 00:36:35.890876 kernel: scsi host5: ahci May 9 00:36:35.891444 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:36:35.899308 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 May 9 00:36:35.899330 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 May 9 00:36:35.899341 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 May 9 00:36:35.899351 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 May 9 00:36:35.899361 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 May 9 00:36:35.899371 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 May 9 00:36:35.912196 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 9 00:36:35.919342 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 9 00:36:35.925235 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 9 00:36:35.925573 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 9 00:36:35.931547 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 9 00:36:35.940782 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 9 00:36:35.942940 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 00:36:35.950479 disk-uuid[557]: Primary Header is updated. May 9 00:36:35.950479 disk-uuid[557]: Secondary Entries is updated. May 9 00:36:35.950479 disk-uuid[557]: Secondary Header is updated. May 9 00:36:35.954558 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:36:35.959565 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:36:35.974179 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 9 00:36:36.229565 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 9 00:36:36.229647 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 9 00:36:36.230560 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 9 00:36:36.231550 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 9 00:36:36.231566 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 9 00:36:36.232561 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 9 00:36:36.233550 kernel: ata3.00: applying bridge limits May 9 00:36:36.233564 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 9 00:36:36.234556 kernel: ata3.00: configured for UDMA/100 May 9 00:36:36.235555 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 9 00:36:36.298602 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 9 00:36:36.298876 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 9 00:36:36.312546 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 9 00:36:36.961451 disk-uuid[558]: The operation has completed successfully. May 9 00:36:36.963232 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:36:36.994763 systemd[1]: disk-uuid.service: Deactivated successfully. May 9 00:36:36.994960 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 9 00:36:37.029951 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 9 00:36:37.043337 sh[594]: Success May 9 00:36:37.102846 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 9 00:36:37.192698 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 9 00:36:37.211441 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 9 00:36:37.216773 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 9 00:36:37.235484 kernel: BTRFS info (device dm-0): first mount of filesystem 8d57db23-a0fc-4362-9769-38fbda5747c1 May 9 00:36:37.235588 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 9 00:36:37.235604 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 9 00:36:37.236808 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 9 00:36:37.237612 kernel: BTRFS info (device dm-0): using free space tree May 9 00:36:37.252368 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 9 00:36:37.253776 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 9 00:36:37.263964 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 9 00:36:37.267523 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 9 00:36:37.282239 kernel: BTRFS info (device vda6): first mount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605 May 9 00:36:37.282304 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 9 00:36:37.282316 kernel: BTRFS info (device vda6): using free space tree May 9 00:36:37.285817 kernel: BTRFS info (device vda6): auto enabling async discard May 9 00:36:37.297669 systemd[1]: mnt-oem.mount: Deactivated successfully. May 9 00:36:37.299882 kernel: BTRFS info (device vda6): last unmount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605 May 9 00:36:37.311174 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
May 9 00:36:37.316724 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 9 00:36:37.389244 ignition[682]: Ignition 2.19.0 May 9 00:36:37.389260 ignition[682]: Stage: fetch-offline May 9 00:36:37.389321 ignition[682]: no configs at "/usr/lib/ignition/base.d" May 9 00:36:37.389337 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:36:37.389478 ignition[682]: parsed url from cmdline: "" May 9 00:36:37.389483 ignition[682]: no config URL provided May 9 00:36:37.389490 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" May 9 00:36:37.389503 ignition[682]: no config at "/usr/lib/ignition/user.ign" May 9 00:36:37.389618 ignition[682]: op(1): [started] loading QEMU firmware config module May 9 00:36:37.389626 ignition[682]: op(1): executing: "modprobe" "qemu_fw_cfg" May 9 00:36:37.398260 ignition[682]: op(1): [finished] loading QEMU firmware config module May 9 00:36:37.398283 ignition[682]: QEMU firmware config was not found. Ignoring... May 9 00:36:37.399643 ignition[682]: parsing config with SHA512: 8951666e5d245a87755084bcc767b7b1a7f37ed6375fc481f1efbb2e12bc7b18a4e91c24a03359e658260dd71747421e1f93e6dea926ea3caee99ce452484fcd May 9 00:36:37.402101 unknown[682]: fetched base config from "system" May 9 00:36:37.402114 unknown[682]: fetched user config from "qemu" May 9 00:36:37.403116 ignition[682]: fetch-offline: fetch-offline passed May 9 00:36:37.403206 ignition[682]: Ignition finished successfully May 9 00:36:37.407595 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 9 00:36:37.425577 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 00:36:37.441746 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 9 00:36:37.476024 systemd-networkd[782]: lo: Link UP May 9 00:36:37.476041 systemd-networkd[782]: lo: Gained carrier May 9 00:36:37.478743 systemd-networkd[782]: Enumeration completed May 9 00:36:37.489621 systemd[1]: Started systemd-networkd.service - Network Configuration. May 9 00:36:37.491991 systemd[1]: Reached target network.target - Network. May 9 00:36:37.492106 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:36:37.492112 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 00:36:37.493941 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 9 00:36:37.494273 systemd-networkd[782]: eth0: Link UP May 9 00:36:37.494283 systemd-networkd[782]: eth0: Gained carrier May 9 00:36:37.494302 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:36:37.517542 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 9 00:36:37.532681 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 9 00:36:37.554734 ignition[784]: Ignition 2.19.0 May 9 00:36:37.554760 ignition[784]: Stage: kargs May 9 00:36:37.555005 ignition[784]: no configs at "/usr/lib/ignition/base.d" May 9 00:36:37.555023 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:36:37.555972 ignition[784]: kargs: kargs passed May 9 00:36:37.556039 ignition[784]: Ignition finished successfully May 9 00:36:37.563481 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 9 00:36:37.576088 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 9 00:36:37.599075 ignition[793]: Ignition 2.19.0 May 9 00:36:37.599089 ignition[793]: Stage: disks May 9 00:36:37.599349 ignition[793]: no configs at "/usr/lib/ignition/base.d" May 9 00:36:37.599364 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:36:37.600289 ignition[793]: disks: disks passed May 9 00:36:37.600355 ignition[793]: Ignition finished successfully May 9 00:36:37.608203 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 9 00:36:37.611089 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 9 00:36:37.611878 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 9 00:36:37.614035 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 00:36:37.617844 systemd[1]: Reached target sysinit.target - System Initialization. May 9 00:36:37.618267 systemd[1]: Reached target basic.target - Basic System. May 9 00:36:37.632727 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 9 00:36:37.647942 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 9 00:36:37.656884 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 9 00:36:37.673764 systemd[1]: Mounting sysroot.mount - /sysroot... May 9 00:36:37.765558 kernel: EXT4-fs (vda9): mounted filesystem 4cb03022-f5a4-4664-b5b4-bc39fcc2f946 r/w with ordered data mode. Quota mode: none. May 9 00:36:37.766289 systemd[1]: Mounted sysroot.mount - /sysroot. May 9 00:36:37.768966 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 9 00:36:37.781679 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 9 00:36:37.785017 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 9 00:36:37.787687 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 9 00:36:37.790755 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (812) May 9 00:36:37.790780 kernel: BTRFS info (device vda6): first mount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605 May 9 00:36:37.790761 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 9 00:36:37.796769 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 9 00:36:37.796786 kernel: BTRFS info (device vda6): using free space tree May 9 00:36:37.796796 kernel: BTRFS info (device vda6): auto enabling async discard May 9 00:36:37.790801 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 9 00:36:37.801076 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 9 00:36:37.805229 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 9 00:36:37.822846 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 9 00:36:37.891024 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory May 9 00:36:37.896411 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory May 9 00:36:37.901743 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory May 9 00:36:37.907041 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory May 9 00:36:38.035174 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 9 00:36:38.047787 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 9 00:36:38.051147 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 9 00:36:38.060574 kernel: BTRFS info (device vda6): last unmount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605 May 9 00:36:38.079297 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 9 00:36:38.122859 ignition[929]: INFO : Ignition 2.19.0 May 9 00:36:38.122859 ignition[929]: INFO : Stage: mount May 9 00:36:38.124964 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:36:38.124964 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:36:38.124964 ignition[929]: INFO : mount: mount passed May 9 00:36:38.124964 ignition[929]: INFO : Ignition finished successfully May 9 00:36:38.131645 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 9 00:36:38.144754 systemd[1]: Starting ignition-files.service - Ignition (files)... May 9 00:36:38.234833 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 9 00:36:38.243882 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 9 00:36:38.250555 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (943) May 9 00:36:38.250586 kernel: BTRFS info (device vda6): first mount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605 May 9 00:36:38.253086 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 9 00:36:38.253106 kernel: BTRFS info (device vda6): using free space tree May 9 00:36:38.255925 kernel: BTRFS info (device vda6): auto enabling async discard May 9 00:36:38.257048 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 9 00:36:38.306113 ignition[960]: INFO : Ignition 2.19.0 May 9 00:36:38.306113 ignition[960]: INFO : Stage: files May 9 00:36:38.307963 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:36:38.307963 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:36:38.307963 ignition[960]: DEBUG : files: compiled without relabeling support, skipping May 9 00:36:38.311767 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 9 00:36:38.311767 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 9 00:36:38.314468 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 9 00:36:38.315943 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 9 00:36:38.315943 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 9 00:36:38.315132 unknown[960]: wrote ssh authorized keys file for user: core May 9 00:36:38.320058 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" May 9 00:36:38.320058 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" May 9 00:36:38.320058 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" May 9 00:36:38.320058 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 9 00:36:38.320058 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 9 00:36:38.320058 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 9 00:36:38.320058 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 9 00:36:38.320058 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 9 00:36:38.699505 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK May 9 00:36:38.948340 systemd-networkd[782]: eth0: Gained IPv6LL May 9 00:36:39.235939 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 9 00:36:39.235939 ignition[960]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" May 9 00:36:39.240160 ignition[960]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 9 00:36:39.240160 ignition[960]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 9 00:36:39.240160 ignition[960]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" May 9 00:36:39.240160 ignition[960]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" 
May 9 00:36:39.264285 ignition[960]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" May 9 00:36:39.269421 ignition[960]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 9 00:36:39.271049 ignition[960]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" May 9 00:36:39.271049 ignition[960]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" May 9 00:36:39.271049 ignition[960]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" May 9 00:36:39.271049 ignition[960]: INFO : files: files passed May 9 00:36:39.271049 ignition[960]: INFO : Ignition finished successfully May 9 00:36:39.279494 systemd[1]: Finished ignition-files.service - Ignition (files). May 9 00:36:39.295752 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 9 00:36:39.297146 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 9 00:36:39.304601 systemd[1]: ignition-quench.service: Deactivated successfully. May 9 00:36:39.305669 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 9 00:36:39.307952 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory May 9 00:36:39.310220 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 00:36:39.311954 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 9 00:36:39.313510 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 00:36:39.317560 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 00:36:39.320313 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 9 00:36:39.332922 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 9 00:36:39.373108 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 9 00:36:39.373290 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 9 00:36:39.375880 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 9 00:36:39.377687 systemd[1]: Reached target initrd.target - Initrd Default Target. May 9 00:36:39.379606 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 9 00:36:39.382728 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 9 00:36:39.412550 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 00:36:39.440930 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 9 00:36:39.462945 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 9 00:36:39.466212 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 00:36:39.469264 systemd[1]: Stopped target timers.target - Timer Units. May 9 00:36:39.474550 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 9 00:36:39.475930 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
May 9 00:36:39.480756 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 9 00:36:39.483729 systemd[1]: Stopped target basic.target - Basic System. May 9 00:36:39.486314 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 9 00:36:39.488818 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 9 00:36:39.490304 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 9 00:36:39.496388 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 9 00:36:39.501219 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 9 00:36:39.507174 systemd[1]: Stopped target sysinit.target - System Initialization. May 9 00:36:39.512108 systemd[1]: Stopped target local-fs.target - Local File Systems. May 9 00:36:39.515643 systemd[1]: Stopped target swap.target - Swaps. May 9 00:36:39.517817 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 9 00:36:39.518997 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 9 00:36:39.528208 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 9 00:36:39.533330 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 00:36:39.537119 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 9 00:36:39.539368 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 00:36:39.542705 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 9 00:36:39.542958 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 9 00:36:39.548055 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 9 00:36:39.549939 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 9 00:36:39.551626 systemd[1]: Stopped target paths.target - Path Units. May 9 00:36:39.556927 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 9 00:36:39.557233 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 00:36:39.560357 systemd[1]: Stopped target slices.target - Slice Units. May 9 00:36:39.565281 systemd[1]: Stopped target sockets.target - Socket Units. May 9 00:36:39.570681 systemd[1]: iscsid.socket: Deactivated successfully. May 9 00:36:39.570890 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 9 00:36:39.573447 systemd[1]: iscsiuio.socket: Deactivated successfully. May 9 00:36:39.573660 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 00:36:39.579935 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 9 00:36:39.580131 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 00:36:39.582230 systemd[1]: ignition-files.service: Deactivated successfully. May 9 00:36:39.582389 systemd[1]: Stopped ignition-files.service - Ignition (files). May 9 00:36:39.604934 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 9 00:36:39.629350 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 9 00:36:39.633142 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 9 00:36:39.633501 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
May 9 00:36:39.640298 ignition[1014]: INFO : Ignition 2.19.0 May 9 00:36:39.640298 ignition[1014]: INFO : Stage: umount May 9 00:36:39.640298 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:36:39.637621 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 9 00:36:39.646714 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:36:39.637863 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 9 00:36:39.652032 ignition[1014]: INFO : umount: umount passed May 9 00:36:39.652032 ignition[1014]: INFO : Ignition finished successfully May 9 00:36:39.656628 systemd[1]: ignition-mount.service: Deactivated successfully. May 9 00:36:39.656816 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 9 00:36:39.659839 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 9 00:36:39.660018 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 9 00:36:39.665678 systemd[1]: Stopped target network.target - Network. May 9 00:36:39.668328 systemd[1]: ignition-disks.service: Deactivated successfully. May 9 00:36:39.668470 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 9 00:36:39.670133 systemd[1]: ignition-kargs.service: Deactivated successfully. May 9 00:36:39.670205 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 9 00:36:39.671178 systemd[1]: ignition-setup.service: Deactivated successfully. May 9 00:36:39.671243 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 9 00:36:39.671587 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 9 00:36:39.671645 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 9 00:36:39.672499 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 9 00:36:39.673260 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 9 00:36:39.682675 systemd-networkd[782]: eth0: DHCPv6 lease lost May 9 00:36:39.686138 systemd[1]: systemd-resolved.service: Deactivated successfully. May 9 00:36:39.686374 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 9 00:36:39.693960 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 9 00:36:39.695696 systemd[1]: systemd-networkd.service: Deactivated successfully. May 9 00:36:39.696597 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 9 00:36:39.700629 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 9 00:36:39.700733 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 9 00:36:39.715043 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 9 00:36:39.717721 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 9 00:36:39.717889 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 00:36:39.720241 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 00:36:39.720369 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 00:36:39.723377 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 9 00:36:39.723479 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 9 00:36:39.726810 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 9 00:36:39.726932 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
May 9 00:36:39.729793 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 00:36:39.756957 systemd[1]: network-cleanup.service: Deactivated successfully. May 9 00:36:39.757288 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 9 00:36:39.762317 systemd[1]: systemd-udevd.service: Deactivated successfully. May 9 00:36:39.762869 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 00:36:39.765496 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 9 00:36:39.765648 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 9 00:36:39.767334 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 9 00:36:39.767451 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 9 00:36:39.769508 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 9 00:36:39.769817 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 9 00:36:39.772308 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 9 00:36:39.772402 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 9 00:36:39.774363 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 00:36:39.774451 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:36:39.788879 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 9 00:36:39.790492 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 9 00:36:39.790640 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 00:36:39.793656 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 9 00:36:39.793739 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 9 00:36:39.796673 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 9 00:36:39.796755 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 9 00:36:39.799825 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 00:36:39.799897 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:36:39.804002 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 9 00:36:39.804202 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 9 00:36:39.856483 systemd[1]: sysroot-boot.service: Deactivated successfully. May 9 00:36:39.856712 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 9 00:36:39.859298 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 9 00:36:39.861035 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 9 00:36:39.861106 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 9 00:36:39.871848 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 9 00:36:39.883714 systemd[1]: Switching root. May 9 00:36:39.919267 systemd-journald[192]: Journal stopped May 9 00:36:41.287558 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). 
May 9 00:36:41.287657 kernel: SELinux: policy capability network_peer_controls=1 May 9 00:36:41.287680 kernel: SELinux: policy capability open_perms=1 May 9 00:36:41.287695 kernel: SELinux: policy capability extended_socket_class=1 May 9 00:36:41.287710 kernel: SELinux: policy capability always_check_network=0 May 9 00:36:41.287724 kernel: SELinux: policy capability cgroup_seclabel=1 May 9 00:36:41.287739 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 9 00:36:41.287753 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 9 00:36:41.287768 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 9 00:36:41.287790 kernel: audit: type=1403 audit(1746751000.283:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 9 00:36:41.287813 systemd[1]: Successfully loaded SELinux policy in 53.316ms. May 9 00:36:41.287844 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.660ms. May 9 00:36:41.287863 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 00:36:41.287880 systemd[1]: Detected virtualization kvm. May 9 00:36:41.287906 systemd[1]: Detected architecture x86-64. May 9 00:36:41.287927 systemd[1]: Detected first boot. May 9 00:36:41.287943 systemd[1]: Initializing machine ID from VM UUID. May 9 00:36:41.287958 zram_generator::config[1058]: No configuration found. May 9 00:36:41.287976 systemd[1]: Populated /etc with preset unit settings. May 9 00:36:41.287991 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 9 00:36:41.288012 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 9 00:36:41.288028 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 9 00:36:41.288044 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 9 00:36:41.288067 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 9 00:36:41.288083 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 9 00:36:41.288098 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 9 00:36:41.288114 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 9 00:36:41.288131 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 9 00:36:41.288145 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 9 00:36:41.288161 systemd[1]: Created slice user.slice - User and Session Slice. May 9 00:36:41.288177 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 00:36:41.288192 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 00:36:41.288215 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 9 00:36:41.288230 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 9 00:36:41.288246 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 9 00:36:41.288265 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
May 9 00:36:41.288281 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 9 00:36:41.288297 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 00:36:41.288312 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 9 00:36:41.288328 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 9 00:36:41.288343 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 9 00:36:41.288367 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 9 00:36:41.288382 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 00:36:41.288405 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 00:36:41.288421 systemd[1]: Reached target slices.target - Slice Units. May 9 00:36:41.288436 systemd[1]: Reached target swap.target - Swaps. May 9 00:36:41.288456 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 9 00:36:41.288472 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 9 00:36:41.288494 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 9 00:36:41.288510 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 9 00:36:41.288549 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 9 00:36:41.288571 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 9 00:36:41.288587 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 9 00:36:41.288603 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 9 00:36:41.288618 systemd[1]: Mounting media.mount - External Media Directory... May 9 00:36:41.288634 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:36:41.288652 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 9 00:36:41.288676 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 9 00:36:41.288693 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 9 00:36:41.288708 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 9 00:36:41.288723 systemd[1]: Reached target machines.target - Containers. May 9 00:36:41.288737 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 9 00:36:41.288752 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:36:41.288767 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 9 00:36:41.288782 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 9 00:36:41.288798 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:36:41.288821 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 00:36:41.288837 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 00:36:41.288853 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 9 00:36:41.288868 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
May 9 00:36:41.288884 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 9 00:36:41.288901 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 9 00:36:41.288917 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 9 00:36:41.288935 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 9 00:36:41.288960 systemd[1]: Stopped systemd-fsck-usr.service. May 9 00:36:41.288978 systemd[1]: Starting systemd-journald.service - Journal Service... May 9 00:36:41.288995 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 9 00:36:41.289012 kernel: fuse: init (API version 7.39) May 9 00:36:41.289028 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 9 00:36:41.289043 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 9 00:36:41.289059 kernel: loop: module loaded May 9 00:36:41.289075 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 00:36:41.289090 systemd[1]: verity-setup.service: Deactivated successfully. May 9 00:36:41.289114 systemd[1]: Stopped verity-setup.service. May 9 00:36:41.289130 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:36:41.289146 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 9 00:36:41.289162 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 9 00:36:41.289177 kernel: ACPI: bus type drm_connector registered May 9 00:36:41.289200 systemd[1]: Mounted media.mount - External Media Directory. May 9 00:36:41.289215 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 9 00:36:41.289263 systemd-journald[1128]: Collecting audit messages is disabled. May 9 00:36:41.289382 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 9 00:36:41.289482 systemd-journald[1128]: Journal started May 9 00:36:41.289516 systemd-journald[1128]: Runtime Journal (/run/log/journal/6e356017e5eb4da2b6062fd4fb6f7ef9) is 6.0M, max 48.3M, 42.2M free. May 9 00:36:40.861553 systemd[1]: Queued start job for default target multi-user.target. May 9 00:36:40.893696 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 9 00:36:40.895116 systemd[1]: systemd-journald.service: Deactivated successfully. May 9 00:36:41.294399 systemd[1]: Started systemd-journald.service - Journal Service. May 9 00:36:41.298141 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 9 00:36:41.301015 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 9 00:36:41.305123 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 9 00:36:41.306099 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 9 00:36:41.308400 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 00:36:41.308758 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 00:36:41.311849 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 00:36:41.312178 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 00:36:41.314410 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
May 9 00:36:41.315069 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 00:36:41.317844 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 9 00:36:41.318107 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 9 00:36:41.320269 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 00:36:41.320558 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 00:36:41.323360 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 9 00:36:41.326149 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 9 00:36:41.332671 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 9 00:36:41.340599 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 9 00:36:41.363973 systemd[1]: Reached target network-pre.target - Preparation for Network. May 9 00:36:41.383148 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 9 00:36:41.387296 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 9 00:36:41.389516 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 9 00:36:41.389599 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 00:36:41.393043 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 9 00:36:41.458513 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 9 00:36:41.467613 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 9 00:36:41.469346 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:36:41.475853 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 9 00:36:41.484922 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 9 00:36:41.486940 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 00:36:41.491154 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 9 00:36:41.495670 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 00:36:41.506322 systemd-journald[1128]: Time spent on flushing to /var/log/journal/6e356017e5eb4da2b6062fd4fb6f7ef9 is 178.905ms for 976 entries. May 9 00:36:41.506322 systemd-journald[1128]: System Journal (/var/log/journal/6e356017e5eb4da2b6062fd4fb6f7ef9) is 8.0M, max 195.6M, 187.6M free. May 9 00:36:41.721292 systemd-journald[1128]: Received client request to flush runtime journal. May 9 00:36:41.721351 kernel: loop0: detected capacity change from 0 to 210664 May 9 00:36:41.506851 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 00:36:41.517928 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 9 00:36:41.529156 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 9 00:36:41.535649 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
May 9 00:36:41.540610 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 9 00:36:41.682738 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 9 00:36:41.686618 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 9 00:36:41.698396 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 9 00:36:41.716805 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 9 00:36:41.726828 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 9 00:36:41.734815 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 9 00:36:41.737238 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 9 00:36:41.742449 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 00:36:41.759607 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 9 00:36:41.885473 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. May 9 00:36:41.885498 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. May 9 00:36:41.889905 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 9 00:36:41.891700 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 9 00:36:41.892520 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 9 00:36:41.896835 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 9 00:36:41.904727 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 9 00:36:41.913594 kernel: loop1: detected capacity change from 0 to 142488 May 9 00:36:42.028646 kernel: loop2: detected capacity change from 0 to 140768 May 9 00:36:42.058160 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 9 00:36:42.093251 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 9 00:36:42.110565 kernel: loop3: detected capacity change from 0 to 210664 May 9 00:36:42.156599 kernel: loop4: detected capacity change from 0 to 142488 May 9 00:36:42.166890 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. May 9 00:36:42.166932 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. May 9 00:36:42.168577 kernel: loop5: detected capacity change from 0 to 140768 May 9 00:36:42.176902 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 00:36:42.180767 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 9 00:36:42.181415 (sd-merge)[1198]: Merged extensions into '/usr'. May 9 00:36:42.188344 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... May 9 00:36:42.188366 systemd[1]: Reloading... May 9 00:36:42.262590 zram_generator::config[1227]: No configuration found. May 9 00:36:42.487024 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:36:42.547753 systemd[1]: Reloading finished in 358 ms. May 9 00:36:42.550078 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
May 9 00:36:42.592343 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 9 00:36:42.594237 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 9 00:36:42.611884 systemd[1]: Starting ensure-sysext.service... May 9 00:36:42.614727 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 9 00:36:42.621917 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)... May 9 00:36:42.621940 systemd[1]: Reloading... May 9 00:36:42.727568 zram_generator::config[1289]: No configuration found. May 9 00:36:42.743617 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 9 00:36:42.744025 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 9 00:36:42.745119 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 9 00:36:42.745443 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. May 9 00:36:42.745558 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. May 9 00:36:42.751611 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. May 9 00:36:42.751627 systemd-tmpfiles[1263]: Skipping /boot May 9 00:36:42.771046 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. May 9 00:36:42.771064 systemd-tmpfiles[1263]: Skipping /boot May 9 00:36:42.880614 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:36:42.938712 systemd[1]: Reloading finished in 316 ms. May 9 00:36:42.957728 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 9 00:36:42.977381 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 00:36:42.986972 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 9 00:36:42.989723 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 9 00:36:42.992341 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 9 00:36:42.997669 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 00:36:42.999761 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 00:36:43.011881 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 9 00:36:43.015772 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:36:43.015957 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:36:43.017948 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:36:43.023636 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 00:36:43.026184 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 00:36:43.028749 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:36:43.032609 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
May 9 00:36:43.034318 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:36:43.035390 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 00:36:43.035784 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 00:36:43.036753 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 00:36:43.037653 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 00:36:43.042361 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 00:36:43.042598 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 00:36:43.051205 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 9 00:36:43.054129 systemd-udevd[1333]: Using default interface naming scheme 'v255'. May 9 00:36:43.055938 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 9 00:36:43.058616 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:36:43.058929 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:36:43.064766 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:36:43.067927 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 00:36:43.072849 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 00:36:43.074065 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:36:43.080564 augenrules[1362]: No rules May 9 00:36:43.078597 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 9 00:36:43.079819 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:36:43.080852 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 9 00:36:43.082775 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 00:36:43.082956 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 00:36:43.084720 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 00:36:43.084918 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 00:36:43.089652 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 9 00:36:43.092200 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 00:36:43.092411 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 00:36:43.095399 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 9 00:36:43.098969 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 9 00:36:43.100397 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 00:36:43.117694 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:36:43.117852 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
May 9 00:36:43.124707 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:36:43.127211 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 00:36:43.130646 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 00:36:43.135814 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:36:43.144699 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 9 00:36:43.145925 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 00:36:43.145985 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 9 00:36:43.146004 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:36:43.146447 systemd[1]: Finished ensure-sysext.service. May 9 00:36:43.156805 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 9 00:36:43.184750 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 9 00:36:43.186889 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 00:36:43.188613 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 00:36:43.191275 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 00:36:43.191497 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 00:36:43.200905 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 00:36:43.201117 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 00:36:43.206063 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 00:36:43.210385 systemd-resolved[1332]: Positive Trust Anchors: May 9 00:36:43.210411 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 9 00:36:43.210446 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 9 00:36:43.217588 systemd-resolved[1332]: Defaulting to hostname 'linux'. May 9 00:36:43.263135 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 00:36:43.265246 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
May 9 00:36:43.274562 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1374) May 9 00:36:43.350586 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 9 00:36:43.352913 systemd-networkd[1399]: lo: Link UP May 9 00:36:43.352926 systemd-networkd[1399]: lo: Gained carrier May 9 00:36:43.354596 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 9 00:36:43.354841 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 9 00:36:43.354812 systemd-networkd[1399]: Enumeration completed May 9 00:36:43.355249 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:36:43.355260 systemd-networkd[1399]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 00:36:43.356237 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 9 00:36:43.356465 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 9 00:36:43.357245 systemd-networkd[1399]: eth0: Link UP May 9 00:36:43.357249 systemd-networkd[1399]: eth0: Gained carrier May 9 00:36:43.357262 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:36:43.357457 systemd[1]: Started systemd-networkd.service - Network Configuration. May 9 00:36:43.358704 systemd[1]: Reached target network.target - Network. May 9 00:36:43.370575 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 9 00:36:43.370652 kernel: ACPI: button: Power Button [PWRF] May 9 00:36:43.370670 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 9 00:36:43.372623 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 9 00:36:43.372648 systemd-networkd[1399]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 9 00:36:43.374037 systemd[1]: Reached target time-set.target - System Time Set. May 9 00:36:43.374384 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. May 9 00:36:43.376654 systemd-timesyncd[1402]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 9 00:36:43.376713 systemd-timesyncd[1402]: Initial clock synchronization to Fri 2025-05-09 00:36:43.404668 UTC. May 9 00:36:43.384593 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 9 00:36:43.393800 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 9 00:36:43.402039 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:36:43.409557 kernel: mousedev: PS/2 mouse device common for all mice May 9 00:36:43.415861 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 9 00:36:43.417908 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 00:36:43.418153 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:36:43.428695 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 9 00:36:43.572292 kernel: kvm_amd: TSC scaling supported May 9 00:36:43.572350 kernel: kvm_amd: Nested Virtualization enabled May 9 00:36:43.572380 kernel: kvm_amd: Nested Paging enabled May 9 00:36:43.572393 kernel: kvm_amd: LBR virtualization supported May 9 00:36:43.573028 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 9 00:36:43.573122 kernel: kvm_amd: Virtual GIF supported May 9 00:36:43.592585 kernel: EDAC MC: Ver: 3.0.0 May 9 00:36:43.619505 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 9 00:36:43.621500 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:36:43.634713 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 9 00:36:43.646281 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 9 00:36:43.685380 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 9 00:36:43.687930 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 9 00:36:43.689194 systemd[1]: Reached target sysinit.target - System Initialization. May 9 00:36:43.690642 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 9 00:36:43.692168 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 9 00:36:43.694071 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 9 00:36:43.695445 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 9 00:36:43.696866 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 9 00:36:43.698171 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 9 00:36:43.698202 systemd[1]: Reached target paths.target - Path Units. May 9 00:36:43.699177 systemd[1]: Reached target timers.target - Timer Units. May 9 00:36:43.701180 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 9 00:36:43.704168 systemd[1]: Starting docker.socket - Docker Socket for the API... May 9 00:36:43.718363 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 9 00:36:43.721166 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 9 00:36:43.722887 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 9 00:36:43.724093 systemd[1]: Reached target sockets.target - Socket Units. May 9 00:36:43.725103 systemd[1]: Reached target basic.target - Basic System. May 9 00:36:43.726109 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 9 00:36:43.726140 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 9 00:36:43.727223 systemd[1]: Starting containerd.service - containerd container runtime... May 9 00:36:43.729413 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 9 00:36:43.733651 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 9 00:36:43.734418 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 9 00:36:43.738842 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
May 9 00:36:43.740065 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 9 00:36:43.743572 jq[1439]: false May 9 00:36:43.743980 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 9 00:36:43.748729 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 9 00:36:43.754757 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 9 00:36:43.759297 systemd[1]: Starting systemd-logind.service - User Login Management... May 9 00:36:43.761000 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 9 00:36:43.761554 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 9 00:36:43.764126 systemd[1]: Starting update-engine.service - Update Engine... May 9 00:36:43.766883 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 9 00:36:43.770997 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 9 00:36:43.773570 dbus-daemon[1438]: [system] SELinux support is enabled May 9 00:36:43.773994 extend-filesystems[1440]: Found loop3 May 9 00:36:43.775349 extend-filesystems[1440]: Found loop4 May 9 00:36:43.775349 extend-filesystems[1440]: Found loop5 May 9 00:36:43.775349 extend-filesystems[1440]: Found sr0 May 9 00:36:43.775349 extend-filesystems[1440]: Found vda May 9 00:36:43.775349 extend-filesystems[1440]: Found vda1 May 9 00:36:43.775349 extend-filesystems[1440]: Found vda2 May 9 00:36:43.775349 extend-filesystems[1440]: Found vda3 May 9 00:36:43.775349 extend-filesystems[1440]: Found usr May 9 00:36:43.775349 extend-filesystems[1440]: Found vda4 May 9 00:36:43.775349 extend-filesystems[1440]: Found vda6 May 9 00:36:43.775349 extend-filesystems[1440]: Found vda7 May 9 00:36:43.775349 extend-filesystems[1440]: Found vda9 May 9 00:36:43.775349 extend-filesystems[1440]: Checking size of /dev/vda9 May 9 00:36:43.774600 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 9 00:36:43.775106 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 9 00:36:43.775380 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 9 00:36:43.776309 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 9 00:36:43.794407 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 9 00:36:43.796517 systemd[1]: motdgen.service: Deactivated successfully. May 9 00:36:43.796802 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 9 00:36:43.801724 jq[1449]: true May 9 00:36:43.802514 update_engine[1448]: I20250509 00:36:43.802415 1448 main.cc:92] Flatcar Update Engine starting May 9 00:36:43.804090 update_engine[1448]: I20250509 00:36:43.804045 1448 update_check_scheduler.cc:74] Next update check in 2m10s May 9 00:36:43.815466 systemd[1]: Started update-engine.service - Update Engine. 
May 9 00:36:43.816433 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 9 00:36:43.817025 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 9 00:36:43.817059 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 9 00:36:43.817625 jq[1464]: true May 9 00:36:43.818429 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 9 00:36:43.818446 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 9 00:36:43.821340 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 9 00:36:43.827594 extend-filesystems[1440]: Resized partition /dev/vda9 May 9 00:36:43.834196 extend-filesystems[1473]: resize2fs 1.47.1 (20-May-2024) May 9 00:36:43.842607 systemd-logind[1447]: Watching system buttons on /dev/input/event1 (Power Button) May 9 00:36:43.842635 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 9 00:36:43.843727 systemd-logind[1447]: New seat seat0. May 9 00:36:43.845563 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1392) May 9 00:36:43.849047 systemd[1]: Started systemd-logind.service - User Login Management. May 9 00:36:43.898300 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 9 00:36:43.915970 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 9 00:36:43.923739 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 9 00:36:43.932782 systemd[1]: Starting issuegen.service - Generate /run/issue... May 9 00:36:43.941327 systemd[1]: issuegen.service: Deactivated successfully. May 9 00:36:43.941717 systemd[1]: Finished issuegen.service - Generate /run/issue. May 9 00:36:43.945598 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 9 00:36:44.064925 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 9 00:36:44.070994 locksmithd[1469]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 9 00:36:44.075845 systemd[1]: Started getty@tty1.service - Getty on tty1. May 9 00:36:44.078811 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 9 00:36:44.080330 systemd[1]: Reached target getty.target - Login Prompts. May 9 00:36:44.087569 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 9 00:36:44.091873 bash[1487]: Updated "/home/core/.ssh/authorized_keys" May 9 00:36:44.094178 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 9 00:36:44.096407 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 9 00:36:44.109345 extend-filesystems[1473]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 9 00:36:44.109345 extend-filesystems[1473]: old_desc_blocks = 1, new_desc_blocks = 1 May 9 00:36:44.109345 extend-filesystems[1473]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
May 9 00:36:44.114070 extend-filesystems[1440]: Resized filesystem in /dev/vda9 May 9 00:36:44.122774 systemd[1]: extend-filesystems.service: Deactivated successfully. May 9 00:36:44.123195 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 9 00:36:44.314636 containerd[1465]: time="2025-05-09T00:36:44.314507939Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 9 00:36:44.350410 containerd[1465]: time="2025-05-09T00:36:44.350219731Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 9 00:36:44.352616 containerd[1465]: time="2025-05-09T00:36:44.352561096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 9 00:36:44.352616 containerd[1465]: time="2025-05-09T00:36:44.352607635Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 9 00:36:44.352676 containerd[1465]: time="2025-05-09T00:36:44.352633098Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 9 00:36:44.352936 containerd[1465]: time="2025-05-09T00:36:44.352903609Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 9 00:36:44.352960 containerd[1465]: time="2025-05-09T00:36:44.352934481Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 9 00:36:44.353070 containerd[1465]: time="2025-05-09T00:36:44.353037716Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:36:44.353070 containerd[1465]: time="2025-05-09T00:36:44.353063018Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 9 00:36:44.353414 containerd[1465]: time="2025-05-09T00:36:44.353374146Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:36:44.353414 containerd[1465]: time="2025-05-09T00:36:44.353403534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 9 00:36:44.353468 containerd[1465]: time="2025-05-09T00:36:44.353424198Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:36:44.353468 containerd[1465]: time="2025-05-09T00:36:44.353442394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 9 00:36:44.353703 containerd[1465]: time="2025-05-09T00:36:44.353666768Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 9 00:36:44.354034 containerd[1465]: time="2025-05-09T00:36:44.354000319Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 May 9 00:36:44.354248 containerd[1465]: time="2025-05-09T00:36:44.354211074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:36:44.354248 containerd[1465]: time="2025-05-09T00:36:44.354239215Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 9 00:36:44.354451 containerd[1465]: time="2025-05-09T00:36:44.354417944Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 9 00:36:44.354540 containerd[1465]: time="2025-05-09T00:36:44.354508643Z" level=info msg="metadata content store policy set" policy=shared May 9 00:36:44.359787 containerd[1465]: time="2025-05-09T00:36:44.359744311Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 9 00:36:44.359826 containerd[1465]: time="2025-05-09T00:36:44.359804921Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 9 00:36:44.359846 containerd[1465]: time="2025-05-09T00:36:44.359826549Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 9 00:36:44.359881 containerd[1465]: time="2025-05-09T00:36:44.359848861Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 9 00:36:44.359881 containerd[1465]: time="2025-05-09T00:36:44.359868833Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 9 00:36:44.360086 containerd[1465]: time="2025-05-09T00:36:44.360049860Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 9 00:36:44.360409 containerd[1465]: time="2025-05-09T00:36:44.360375542Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 9 00:36:44.360588 containerd[1465]: time="2025-05-09T00:36:44.360555906Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 9 00:36:44.360623 containerd[1465]: time="2025-05-09T00:36:44.360586637Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 9 00:36:44.360623 containerd[1465]: time="2025-05-09T00:36:44.360606419Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 9 00:36:44.360672 containerd[1465]: time="2025-05-09T00:36:44.360626030Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 9 00:36:44.360672 containerd[1465]: time="2025-05-09T00:36:44.360645822Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 9 00:36:44.360672 containerd[1465]: time="2025-05-09T00:36:44.360664822Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 9 00:36:44.360735 containerd[1465]: time="2025-05-09T00:36:44.360685747Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 May 9 00:36:44.360735 containerd[1465]: time="2025-05-09T00:36:44.360705810Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 9 00:36:44.360735 containerd[1465]: time="2025-05-09T00:36:44.360723153Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 9 00:36:44.360788 containerd[1465]: time="2025-05-09T00:36:44.360741309Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 9 00:36:44.360788 containerd[1465]: time="2025-05-09T00:36:44.360760849Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 9 00:36:44.360824 containerd[1465]: time="2025-05-09T00:36:44.360793117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 9 00:36:44.360868 containerd[1465]: time="2025-05-09T00:36:44.360834076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 9 00:36:44.360868 containerd[1465]: time="2025-05-09T00:36:44.360860431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 9 00:36:44.361113 containerd[1465]: time="2025-05-09T00:36:44.360878005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 9 00:36:44.361113 containerd[1465]: time="2025-05-09T00:36:44.360906810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 9 00:36:44.361113 containerd[1465]: time="2025-05-09T00:36:44.360926671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 9 00:36:44.361113 containerd[1465]: time="2025-05-09T00:36:44.360943001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 9 00:36:44.361113 containerd[1465]: time="2025-05-09T00:36:44.360960524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 9 00:36:44.361113 containerd[1465]: time="2025-05-09T00:36:44.360979915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 9 00:36:44.361113 containerd[1465]: time="2025-05-09T00:36:44.361000379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 9 00:36:44.361113 containerd[1465]: time="2025-05-09T00:36:44.361017220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 9 00:36:44.361113 containerd[1465]: time="2025-05-09T00:36:44.361034021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 9 00:36:44.361113 containerd[1465]: time="2025-05-09T00:36:44.361052367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 9 00:36:44.361113 containerd[1465]: time="2025-05-09T00:36:44.361076254Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 9 00:36:44.361314 containerd[1465]: time="2025-05-09T00:36:44.361119813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 May 9 00:36:44.361314 containerd[1465]: time="2025-05-09T00:36:44.361140598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 9 00:36:44.361314 containerd[1465]: time="2025-05-09T00:36:44.361156766Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 9 00:36:44.361314 containerd[1465]: time="2025-05-09T00:36:44.361223097Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 9 00:36:44.361314 containerd[1465]: time="2025-05-09T00:36:44.361246622Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 9 00:36:44.361314 containerd[1465]: time="2025-05-09T00:36:44.361262721Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 9 00:36:44.361314 containerd[1465]: time="2025-05-09T00:36:44.361279994Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 9 00:36:44.361314 containerd[1465]: time="2025-05-09T00:36:44.361295038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 9 00:36:44.361314 containerd[1465]: time="2025-05-09T00:36:44.361316416Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 9 00:36:44.361472 containerd[1465]: time="2025-05-09T00:36:44.361333378Z" level=info msg="NRI interface is disabled by configuration." May 9 00:36:44.361472 containerd[1465]: time="2025-05-09T00:36:44.361348773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 9 00:36:44.361903 containerd[1465]: time="2025-05-09T00:36:44.361806443Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 9 00:36:44.361903 containerd[1465]: time="2025-05-09T00:36:44.361897785Z" level=info msg="Connect containerd service" May 9 00:36:44.362174 containerd[1465]: time="2025-05-09T00:36:44.361946883Z" level=info msg="using legacy CRI server" May 9 00:36:44.362174 containerd[1465]: time="2025-05-09T00:36:44.361960743Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 9 00:36:44.362174 containerd[1465]: time="2025-05-09T00:36:44.362103532Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 9 00:36:44.363044 containerd[1465]: time="2025-05-09T00:36:44.363000185Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 00:36:44.363306 
containerd[1465]: time="2025-05-09T00:36:44.363220966Z" level=info msg="Start subscribing containerd event" May 9 00:36:44.363346 containerd[1465]: time="2025-05-09T00:36:44.363326580Z" level=info msg="Start recovering state" May 9 00:36:44.363557 containerd[1465]: time="2025-05-09T00:36:44.363523755Z" level=info msg="Start event monitor" May 9 00:36:44.363588 containerd[1465]: time="2025-05-09T00:36:44.363548184Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 9 00:36:44.363588 containerd[1465]: time="2025-05-09T00:36:44.363572221Z" level=info msg="Start snapshots syncer" May 9 00:36:44.363588 containerd[1465]: time="2025-05-09T00:36:44.363585128Z" level=info msg="Start cni network conf syncer for default" May 9 00:36:44.363640 containerd[1465]: time="2025-05-09T00:36:44.363594582Z" level=info msg="Start streaming server" May 9 00:36:44.363659 containerd[1465]: time="2025-05-09T00:36:44.363642386Z" level=info msg=serving... address=/run/containerd/containerd.sock May 9 00:36:44.363883 systemd[1]: Started containerd.service - containerd container runtime. May 9 00:36:44.365425 containerd[1465]: time="2025-05-09T00:36:44.365386785Z" level=info msg="containerd successfully booted in 0.053395s" May 9 00:36:45.347946 systemd-networkd[1399]: eth0: Gained IPv6LL May 9 00:36:45.352400 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 9 00:36:45.355586 systemd[1]: Reached target network-online.target - Network is Online. May 9 00:36:45.369863 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 9 00:36:45.373102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:36:45.375856 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 9 00:36:45.403328 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 9 00:36:45.430120 systemd[1]: coreos-metadata.service: Deactivated successfully. May 9 00:36:45.430400 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 9 00:36:45.432383 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 9 00:36:46.603244 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:36:46.605308 systemd[1]: Reached target multi-user.target - Multi-User System. May 9 00:36:46.608658 systemd[1]: Startup finished in 1.434s (kernel) + 5.528s (initrd) + 6.377s (userspace) = 13.340s. May 9 00:36:46.610780 (kubelet)[1543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:36:47.858159 kubelet[1543]: E0509 00:36:47.857974 1543 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:36:47.862983 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:36:47.863218 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:36:47.863604 systemd[1]: kubelet.service: Consumed 2.340s CPU time. May 9 00:36:48.228365 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 9 00:36:48.230095 systemd[1]: Started sshd@0-10.0.0.112:22-10.0.0.1:33372.service - OpenSSH per-connection server daemon (10.0.0.1:33372). 
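Earlier in this stretch of the log, the first kubelet start exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; the unit only comes up cleanly after the later install.sh run and restart. When reading a journal dump like this one, a quick way to surface such failures is to scan for systemd's "Failed with result" lines. A minimal Python sketch, assuming journal lines shaped like the ones above (the regex and sample string are ours, for illustration):

    import re

    # Matches lines such as:
    #   May 9 00:36:47.863218 systemd[1]: kubelet.service: Failed with result 'exit-code'.
    FAILED = re.compile(r"systemd\[1\]: (?P<unit>\S+): Failed with result '(?P<result>[^']+)'")

    def failed_units(log_text: str):
        """Yield (unit, result) pairs for every failed-unit line in a journal dump."""
        for line in log_text.splitlines():
            match = FAILED.search(line)
            if match:
                yield match.group("unit"), match.group("result")

    sample = "May 9 00:36:47.863218 systemd[1]: kubelet.service: Failed with result 'exit-code'."
    print(list(failed_units(sample)))  # [('kubelet.service', 'exit-code')]
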
May 9 00:36:48.273562 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 33372 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:36:48.276239 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:36:48.288554 systemd-logind[1447]: New session 1 of user core. May 9 00:36:48.290369 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 9 00:36:48.310958 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 9 00:36:48.325326 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 9 00:36:48.327613 systemd[1]: Starting user@500.service - User Manager for UID 500... May 9 00:36:48.350437 (systemd)[1562]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 9 00:36:48.486120 systemd[1562]: Queued start job for default target default.target. May 9 00:36:48.496243 systemd[1562]: Created slice app.slice - User Application Slice. May 9 00:36:48.496275 systemd[1562]: Reached target paths.target - Paths. May 9 00:36:48.496290 systemd[1562]: Reached target timers.target - Timers. May 9 00:36:48.498269 systemd[1562]: Starting dbus.socket - D-Bus User Message Bus Socket... May 9 00:36:48.511780 systemd[1562]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 9 00:36:48.511925 systemd[1562]: Reached target sockets.target - Sockets. May 9 00:36:48.511944 systemd[1562]: Reached target basic.target - Basic System. May 9 00:36:48.511986 systemd[1562]: Reached target default.target - Main User Target. May 9 00:36:48.512023 systemd[1562]: Startup finished in 151ms. May 9 00:36:48.512563 systemd[1]: Started user@500.service - User Manager for UID 500. May 9 00:36:48.514595 systemd[1]: Started session-1.scope - Session 1 of User core. May 9 00:36:48.581248 systemd[1]: Started sshd@1-10.0.0.112:22-10.0.0.1:33384.service - OpenSSH per-connection server daemon (10.0.0.1:33384). May 9 00:36:48.636490 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 33384 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:36:48.639056 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:36:48.644179 systemd-logind[1447]: New session 2 of user core. May 9 00:36:48.657869 systemd[1]: Started session-2.scope - Session 2 of User core. May 9 00:36:48.717302 sshd[1573]: pam_unix(sshd:session): session closed for user core May 9 00:36:48.730503 systemd[1]: sshd@1-10.0.0.112:22-10.0.0.1:33384.service: Deactivated successfully. May 9 00:36:48.732236 systemd[1]: session-2.scope: Deactivated successfully. May 9 00:36:48.734004 systemd-logind[1447]: Session 2 logged out. Waiting for processes to exit. May 9 00:36:48.743890 systemd[1]: Started sshd@2-10.0.0.112:22-10.0.0.1:33392.service - OpenSSH per-connection server daemon (10.0.0.1:33392). May 9 00:36:48.745464 systemd-logind[1447]: Removed session 2. May 9 00:36:48.771374 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 33392 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:36:48.773318 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:36:48.778081 systemd-logind[1447]: New session 3 of user core. May 9 00:36:48.787698 systemd[1]: Started session-3.scope - Session 3 of User core. 
May 9 00:36:48.838169 sshd[1580]: pam_unix(sshd:session): session closed for user core May 9 00:36:48.853616 systemd[1]: sshd@2-10.0.0.112:22-10.0.0.1:33392.service: Deactivated successfully. May 9 00:36:48.855480 systemd[1]: session-3.scope: Deactivated successfully. May 9 00:36:48.857051 systemd-logind[1447]: Session 3 logged out. Waiting for processes to exit. May 9 00:36:48.867001 systemd[1]: Started sshd@3-10.0.0.112:22-10.0.0.1:33398.service - OpenSSH per-connection server daemon (10.0.0.1:33398). May 9 00:36:48.868095 systemd-logind[1447]: Removed session 3. May 9 00:36:48.893864 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 33398 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:36:48.895712 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:36:48.900177 systemd-logind[1447]: New session 4 of user core. May 9 00:36:48.909692 systemd[1]: Started session-4.scope - Session 4 of User core. May 9 00:36:48.964958 sshd[1587]: pam_unix(sshd:session): session closed for user core May 9 00:36:48.981783 systemd[1]: sshd@3-10.0.0.112:22-10.0.0.1:33398.service: Deactivated successfully. May 9 00:36:48.983728 systemd[1]: session-4.scope: Deactivated successfully. May 9 00:36:48.985418 systemd-logind[1447]: Session 4 logged out. Waiting for processes to exit. May 9 00:36:48.994970 systemd[1]: Started sshd@4-10.0.0.112:22-10.0.0.1:33406.service - OpenSSH per-connection server daemon (10.0.0.1:33406). May 9 00:36:48.996148 systemd-logind[1447]: Removed session 4. May 9 00:36:49.022223 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 33406 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:36:49.024187 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:36:49.028600 systemd-logind[1447]: New session 5 of user core. May 9 00:36:49.040727 systemd[1]: Started session-5.scope - Session 5 of User core. May 9 00:36:49.106718 sudo[1597]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 9 00:36:49.107182 sudo[1597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:36:49.129744 sudo[1597]: pam_unix(sudo:session): session closed for user root May 9 00:36:49.131966 sshd[1594]: pam_unix(sshd:session): session closed for user core May 9 00:36:49.139462 systemd[1]: sshd@4-10.0.0.112:22-10.0.0.1:33406.service: Deactivated successfully. May 9 00:36:49.141340 systemd[1]: session-5.scope: Deactivated successfully. May 9 00:36:49.142685 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit. May 9 00:36:49.154849 systemd[1]: Started sshd@5-10.0.0.112:22-10.0.0.1:33422.service - OpenSSH per-connection server daemon (10.0.0.1:33422). May 9 00:36:49.156212 systemd-logind[1447]: Removed session 5. May 9 00:36:49.183387 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 33422 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:36:49.185102 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:36:49.191115 systemd-logind[1447]: New session 6 of user core. May 9 00:36:49.200817 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 9 00:36:49.259854 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 9 00:36:49.260210 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:36:49.265116 sudo[1606]: pam_unix(sudo:session): session closed for user root May 9 00:36:49.273368 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 9 00:36:49.273738 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:36:49.294790 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 9 00:36:49.296786 auditctl[1609]: No rules May 9 00:36:49.298352 systemd[1]: audit-rules.service: Deactivated successfully. May 9 00:36:49.298704 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 9 00:36:49.300848 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 9 00:36:49.333336 augenrules[1627]: No rules May 9 00:36:49.335186 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 9 00:36:49.336517 sudo[1605]: pam_unix(sudo:session): session closed for user root May 9 00:36:49.338469 sshd[1602]: pam_unix(sshd:session): session closed for user core May 9 00:36:49.348354 systemd[1]: sshd@5-10.0.0.112:22-10.0.0.1:33422.service: Deactivated successfully. May 9 00:36:49.350075 systemd[1]: session-6.scope: Deactivated successfully. May 9 00:36:49.351416 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit. May 9 00:36:49.359797 systemd[1]: Started sshd@6-10.0.0.112:22-10.0.0.1:33424.service - OpenSSH per-connection server daemon (10.0.0.1:33424). May 9 00:36:49.360780 systemd-logind[1447]: Removed session 6. May 9 00:36:49.388770 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 33424 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:36:49.390466 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:36:49.394573 systemd-logind[1447]: New session 7 of user core. May 9 00:36:49.409757 systemd[1]: Started session-7.scope - Session 7 of User core. May 9 00:36:49.464307 sudo[1638]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 9 00:36:49.464684 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:36:49.486879 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 9 00:36:49.507600 systemd[1]: coreos-metadata.service: Deactivated successfully. May 9 00:36:49.507868 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 9 00:36:50.044511 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:36:50.044691 systemd[1]: kubelet.service: Consumed 2.340s CPU time. May 9 00:36:50.057719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:36:50.076501 systemd[1]: Reloading requested from client PID 1687 ('systemctl') (unit session-7.scope)... May 9 00:36:50.076518 systemd[1]: Reloading... May 9 00:36:50.145576 zram_generator::config[1722]: No configuration found. May 9 00:36:50.396698 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:36:50.477760 systemd[1]: Reloading finished in 400 ms. 
May 9 00:36:50.533077 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:36:50.536295 systemd[1]: kubelet.service: Deactivated successfully. May 9 00:36:50.536708 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:36:50.539082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:36:50.693873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:36:50.711989 (kubelet)[1775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:36:50.758145 kubelet[1775]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:36:50.758145 kubelet[1775]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 00:36:50.758145 kubelet[1775]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:36:50.760163 kubelet[1775]: I0509 00:36:50.759995 1775 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:36:51.045655 kubelet[1775]: I0509 00:36:51.045459 1775 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 9 00:36:51.045655 kubelet[1775]: I0509 00:36:51.045490 1775 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:36:51.046559 kubelet[1775]: I0509 00:36:51.045949 1775 server.go:927] "Client rotation is on, will bootstrap in background" May 9 00:36:51.060139 kubelet[1775]: I0509 00:36:51.060085 1775 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:36:51.075309 kubelet[1775]: I0509 00:36:51.075267 1775 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 00:36:51.076683 kubelet[1775]: I0509 00:36:51.076624 1775 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:36:51.076858 kubelet[1775]: I0509 00:36:51.076671 1775 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.112","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 9 00:36:51.077018 kubelet[1775]: I0509 00:36:51.076870 1775 topology_manager.go:138] "Creating topology manager with none policy" May 9 00:36:51.077018 kubelet[1775]: I0509 00:36:51.076880 1775 container_manager_linux.go:301] "Creating device plugin manager" May 9 00:36:51.077087 kubelet[1775]: I0509 00:36:51.077034 1775 state_mem.go:36] "Initialized new in-memory state store" May 9 00:36:51.077721 kubelet[1775]: I0509 00:36:51.077692 1775 kubelet.go:400] "Attempting to sync node with API server" May 9 00:36:51.077721 kubelet[1775]: I0509 00:36:51.077711 1775 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:36:51.077792 kubelet[1775]: I0509 00:36:51.077746 1775 kubelet.go:312] "Adding apiserver pod source" May 9 00:36:51.077792 kubelet[1775]: I0509 00:36:51.077768 1775 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 00:36:51.077934 kubelet[1775]: E0509 00:36:51.077898 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:36:51.078020 kubelet[1775]: E0509 00:36:51.077985 1775 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:36:51.081635 kubelet[1775]: I0509 00:36:51.081600 1775 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 9 00:36:51.083027 kubelet[1775]: I0509 00:36:51.082994 1775 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:36:51.083329 kubelet[1775]: W0509 00:36:51.083079 1775 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 9 00:36:51.083329 kubelet[1775]: W0509 00:36:51.083217 1775 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.112" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 9 00:36:51.083329 kubelet[1775]: W0509 00:36:51.083212 1775 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 9 00:36:51.083329 kubelet[1775]: E0509 00:36:51.083270 1775 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.112" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 9 00:36:51.083329 kubelet[1775]: E0509 00:36:51.083295 1775 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 9 00:36:51.083953 kubelet[1775]: I0509 00:36:51.083923 1775 server.go:1264] "Started kubelet" May 9 00:36:51.084029 kubelet[1775]: I0509 00:36:51.083999 1775 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:36:51.085175 kubelet[1775]: I0509 00:36:51.085125 1775 server.go:455] "Adding debug handlers to kubelet server" May 9 00:36:51.085449 kubelet[1775]: I0509 00:36:51.085423 1775 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:36:51.086010 kubelet[1775]: I0509 00:36:51.085944 1775 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:36:51.086224 kubelet[1775]: I0509 00:36:51.086193 1775 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:36:51.089023 kubelet[1775]: E0509 00:36:51.088993 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.112\" not found" May 9 00:36:51.089084 kubelet[1775]: I0509 00:36:51.089059 1775 volume_manager.go:291] "Starting Kubelet Volume Manager" May 9 00:36:51.089369 kubelet[1775]: I0509 00:36:51.089217 1775 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 00:36:51.089369 kubelet[1775]: I0509 00:36:51.089296 1775 reconciler.go:26] "Reconciler: start to sync state" May 9 00:36:51.090418 kubelet[1775]: E0509 00:36:51.090384 1775 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 00:36:51.094585 kubelet[1775]: I0509 00:36:51.093771 1775 factory.go:221] Registration of the containerd container factory successfully May 9 00:36:51.094585 kubelet[1775]: I0509 00:36:51.093792 1775 factory.go:221] Registration of the systemd container factory successfully May 9 00:36:51.094585 kubelet[1775]: I0509 00:36:51.093881 1775 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:36:51.094585 kubelet[1775]: W0509 00:36:51.094014 1775 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 9 00:36:51.094585 kubelet[1775]: E0509 00:36:51.094044 1775 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 9 00:36:51.094585 kubelet[1775]: E0509 00:36:51.094186 1775 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.112\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" May 9 00:36:51.113916 kubelet[1775]: I0509 00:36:51.113884 1775 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 00:36:51.113916 kubelet[1775]: I0509 00:36:51.113904 1775 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 00:36:51.113916 kubelet[1775]: I0509 00:36:51.113923 1775 state_mem.go:36] "Initialized new in-memory state store" May 9 00:36:51.190370 kubelet[1775]: I0509 00:36:51.190340 1775 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.112" May 9 00:36:51.345312 kubelet[1775]: I0509 00:36:51.345151 1775 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.112" May 9 00:36:51.506165 kubelet[1775]: E0509 00:36:51.506115 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.112\" not found" May 9 00:36:51.539774 kubelet[1775]: I0509 00:36:51.539704 1775 policy_none.go:49] "None policy: Start" May 9 00:36:51.540914 kubelet[1775]: I0509 00:36:51.540880 1775 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 00:36:51.541067 kubelet[1775]: I0509 00:36:51.540946 1775 state_mem.go:35] "Initializing new in-memory state store" May 9 00:36:51.551075 sudo[1638]: pam_unix(sudo:session): session closed for user root May 9 00:36:51.551963 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 9 00:36:51.553002 sshd[1635]: pam_unix(sshd:session): session closed for user core May 9 00:36:51.559450 systemd[1]: sshd@6-10.0.0.112:22-10.0.0.1:33424.service: Deactivated successfully. May 9 00:36:51.561474 systemd[1]: session-7.scope: Deactivated successfully. May 9 00:36:51.563469 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit. May 9 00:36:51.565894 systemd-logind[1447]: Removed session 7. May 9 00:36:51.569249 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
May 9 00:36:51.573625 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 9 00:36:51.576218 kubelet[1775]: I0509 00:36:51.576172 1775 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:36:51.578223 kubelet[1775]: I0509 00:36:51.578014 1775 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 00:36:51.578223 kubelet[1775]: I0509 00:36:51.578058 1775 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 00:36:51.578223 kubelet[1775]: I0509 00:36:51.578084 1775 kubelet.go:2337] "Starting kubelet main sync loop" May 9 00:36:51.578223 kubelet[1775]: E0509 00:36:51.578133 1775 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:36:51.583432 kubelet[1775]: I0509 00:36:51.583384 1775 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 00:36:51.584220 kubelet[1775]: I0509 00:36:51.583740 1775 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 00:36:51.584220 kubelet[1775]: I0509 00:36:51.583894 1775 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:36:51.585334 kubelet[1775]: E0509 00:36:51.585302 1775 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.112\" not found" May 9 00:36:51.606713 kubelet[1775]: E0509 00:36:51.606489 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.112\" not found" May 9 00:36:51.707630 kubelet[1775]: E0509 00:36:51.707493 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.112\" not found" May 9 00:36:51.808192 kubelet[1775]: E0509 00:36:51.808119 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.112\" not found" May 9 00:36:51.909120 kubelet[1775]: E0509 00:36:51.908894 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.112\" not found" May 9 00:36:52.009628 kubelet[1775]: E0509 00:36:52.009567 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.112\" not found" May 9 00:36:52.048926 kubelet[1775]: I0509 00:36:52.048838 1775 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 9 00:36:52.049214 kubelet[1775]: W0509 00:36:52.049106 1775 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 9 00:36:52.078301 kubelet[1775]: E0509 00:36:52.078263 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:36:52.110628 kubelet[1775]: E0509 00:36:52.110594 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.112\" not found" May 9 00:36:52.211707 kubelet[1775]: E0509 00:36:52.211561 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.112\" not found" May 9 00:36:52.312285 kubelet[1775]: E0509 00:36:52.312227 1775 kubelet_node_status.go:462] "Error getting the current node 
from lister" err="node \"10.0.0.112\" not found" May 9 00:36:52.414108 kubelet[1775]: I0509 00:36:52.414063 1775 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 9 00:36:52.414464 containerd[1465]: time="2025-05-09T00:36:52.414386615Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 9 00:36:52.414922 kubelet[1775]: I0509 00:36:52.414646 1775 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 9 00:36:53.079042 kubelet[1775]: E0509 00:36:53.078945 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:36:53.080204 kubelet[1775]: I0509 00:36:53.080148 1775 apiserver.go:52] "Watching apiserver" May 9 00:36:53.084363 kubelet[1775]: I0509 00:36:53.084316 1775 topology_manager.go:215] "Topology Admit Handler" podUID="8630e8e5-cb6e-43e2-badd-ec073b2292b7" podNamespace="kube-system" podName="kube-proxy-q9hxf" May 9 00:36:53.084436 kubelet[1775]: I0509 00:36:53.084423 1775 topology_manager.go:215] "Topology Admit Handler" podUID="fa0db9f9-defb-4cfb-83d4-3c83f783e892" podNamespace="calico-system" podName="calico-node-rrtwk" May 9 00:36:53.084562 kubelet[1775]: I0509 00:36:53.084513 1775 topology_manager.go:215] "Topology Admit Handler" podUID="054e0ea9-c254-4f90-a1c5-22ee92a19ac0" podNamespace="calico-system" podName="csi-node-driver-v8dpj" May 9 00:36:53.085054 kubelet[1775]: E0509 00:36:53.084951 1775 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8dpj" podUID="054e0ea9-c254-4f90-a1c5-22ee92a19ac0" May 9 00:36:53.089799 kubelet[1775]: I0509 00:36:53.089765 1775 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 00:36:53.093475 systemd[1]: Created slice kubepods-besteffort-pod8630e8e5_cb6e_43e2_badd_ec073b2292b7.slice - libcontainer container kubepods-besteffort-pod8630e8e5_cb6e_43e2_badd_ec073b2292b7.slice. 
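The runtime-config update above applies pod CIDR 192.168.1.0/24 to this node. For reference, a small Python check of what that allocation provides, using the standard ipaddress module (illustrative only, not part of the boot flow):

    import ipaddress

    # Pod CIDR applied to the node in the kuberuntime_manager entry above.
    pod_cidr = ipaddress.ip_network("192.168.1.0/24")

    print(pod_cidr.num_addresses)            # 256 addresses in the range
    print(sum(1 for _ in pod_cidr.hosts()))  # 254 usable host addresses
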
May 9 00:36:53.099871 kubelet[1775]: I0509 00:36:53.099814 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa0db9f9-defb-4cfb-83d4-3c83f783e892-xtables-lock\") pod \"calico-node-rrtwk\" (UID: \"fa0db9f9-defb-4cfb-83d4-3c83f783e892\") " pod="calico-system/calico-node-rrtwk" May 9 00:36:53.099934 kubelet[1775]: I0509 00:36:53.099884 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fa0db9f9-defb-4cfb-83d4-3c83f783e892-node-certs\") pod \"calico-node-rrtwk\" (UID: \"fa0db9f9-defb-4cfb-83d4-3c83f783e892\") " pod="calico-system/calico-node-rrtwk" May 9 00:36:53.099934 kubelet[1775]: I0509 00:36:53.099920 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fa0db9f9-defb-4cfb-83d4-3c83f783e892-cni-log-dir\") pod \"calico-node-rrtwk\" (UID: \"fa0db9f9-defb-4cfb-83d4-3c83f783e892\") " pod="calico-system/calico-node-rrtwk" May 9 00:36:53.100024 kubelet[1775]: I0509 00:36:53.099949 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fa0db9f9-defb-4cfb-83d4-3c83f783e892-flexvol-driver-host\") pod \"calico-node-rrtwk\" (UID: \"fa0db9f9-defb-4cfb-83d4-3c83f783e892\") " pod="calico-system/calico-node-rrtwk" May 9 00:36:53.100024 kubelet[1775]: I0509 00:36:53.099979 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49qkq\" (UniqueName: \"kubernetes.io/projected/fa0db9f9-defb-4cfb-83d4-3c83f783e892-kube-api-access-49qkq\") pod \"calico-node-rrtwk\" (UID: \"fa0db9f9-defb-4cfb-83d4-3c83f783e892\") " pod="calico-system/calico-node-rrtwk" May 9 00:36:53.100024 kubelet[1775]: I0509 00:36:53.100005 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/054e0ea9-c254-4f90-a1c5-22ee92a19ac0-kubelet-dir\") pod \"csi-node-driver-v8dpj\" (UID: \"054e0ea9-c254-4f90-a1c5-22ee92a19ac0\") " pod="calico-system/csi-node-driver-v8dpj" May 9 00:36:53.100096 kubelet[1775]: I0509 00:36:53.100032 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8630e8e5-cb6e-43e2-badd-ec073b2292b7-xtables-lock\") pod \"kube-proxy-q9hxf\" (UID: \"8630e8e5-cb6e-43e2-badd-ec073b2292b7\") " pod="kube-system/kube-proxy-q9hxf" May 9 00:36:53.100096 kubelet[1775]: I0509 00:36:53.100057 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa0db9f9-defb-4cfb-83d4-3c83f783e892-lib-modules\") pod \"calico-node-rrtwk\" (UID: \"fa0db9f9-defb-4cfb-83d4-3c83f783e892\") " pod="calico-system/calico-node-rrtwk" May 9 00:36:53.100096 kubelet[1775]: I0509 00:36:53.100083 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fa0db9f9-defb-4cfb-83d4-3c83f783e892-var-run-calico\") pod \"calico-node-rrtwk\" (UID: \"fa0db9f9-defb-4cfb-83d4-3c83f783e892\") " pod="calico-system/calico-node-rrtwk" May 9 00:36:53.100172 kubelet[1775]: I0509 00:36:53.100108 1775 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8630e8e5-cb6e-43e2-badd-ec073b2292b7-kube-proxy\") pod \"kube-proxy-q9hxf\" (UID: \"8630e8e5-cb6e-43e2-badd-ec073b2292b7\") " pod="kube-system/kube-proxy-q9hxf" May 9 00:36:53.100172 kubelet[1775]: I0509 00:36:53.100134 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fa0db9f9-defb-4cfb-83d4-3c83f783e892-policysync\") pod \"calico-node-rrtwk\" (UID: \"fa0db9f9-defb-4cfb-83d4-3c83f783e892\") " pod="calico-system/calico-node-rrtwk" May 9 00:36:53.100172 kubelet[1775]: I0509 00:36:53.100160 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fa0db9f9-defb-4cfb-83d4-3c83f783e892-cni-bin-dir\") pod \"calico-node-rrtwk\" (UID: \"fa0db9f9-defb-4cfb-83d4-3c83f783e892\") " pod="calico-system/calico-node-rrtwk" May 9 00:36:53.100250 kubelet[1775]: I0509 00:36:53.100185 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/054e0ea9-c254-4f90-a1c5-22ee92a19ac0-socket-dir\") pod \"csi-node-driver-v8dpj\" (UID: \"054e0ea9-c254-4f90-a1c5-22ee92a19ac0\") " pod="calico-system/csi-node-driver-v8dpj" May 9 00:36:53.100250 kubelet[1775]: I0509 00:36:53.100213 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqt5r\" (UniqueName: \"kubernetes.io/projected/054e0ea9-c254-4f90-a1c5-22ee92a19ac0-kube-api-access-xqt5r\") pod \"csi-node-driver-v8dpj\" (UID: \"054e0ea9-c254-4f90-a1c5-22ee92a19ac0\") " pod="calico-system/csi-node-driver-v8dpj" May 9 00:36:53.100250 kubelet[1775]: I0509 00:36:53.100240 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv949\" (UniqueName: \"kubernetes.io/projected/8630e8e5-cb6e-43e2-badd-ec073b2292b7-kube-api-access-jv949\") pod \"kube-proxy-q9hxf\" (UID: \"8630e8e5-cb6e-43e2-badd-ec073b2292b7\") " pod="kube-system/kube-proxy-q9hxf" May 9 00:36:53.100316 kubelet[1775]: I0509 00:36:53.100265 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fa0db9f9-defb-4cfb-83d4-3c83f783e892-var-lib-calico\") pod \"calico-node-rrtwk\" (UID: \"fa0db9f9-defb-4cfb-83d4-3c83f783e892\") " pod="calico-system/calico-node-rrtwk" May 9 00:36:53.100316 kubelet[1775]: I0509 00:36:53.100291 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fa0db9f9-defb-4cfb-83d4-3c83f783e892-cni-net-dir\") pod \"calico-node-rrtwk\" (UID: \"fa0db9f9-defb-4cfb-83d4-3c83f783e892\") " pod="calico-system/calico-node-rrtwk" May 9 00:36:53.100370 kubelet[1775]: I0509 00:36:53.100316 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/054e0ea9-c254-4f90-a1c5-22ee92a19ac0-varrun\") pod \"csi-node-driver-v8dpj\" (UID: \"054e0ea9-c254-4f90-a1c5-22ee92a19ac0\") " pod="calico-system/csi-node-driver-v8dpj" May 9 00:36:53.100370 kubelet[1775]: I0509 00:36:53.100341 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"registration-dir\" (UniqueName: \"kubernetes.io/host-path/054e0ea9-c254-4f90-a1c5-22ee92a19ac0-registration-dir\") pod \"csi-node-driver-v8dpj\" (UID: \"054e0ea9-c254-4f90-a1c5-22ee92a19ac0\") " pod="calico-system/csi-node-driver-v8dpj" May 9 00:36:53.100370 kubelet[1775]: I0509 00:36:53.100364 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8630e8e5-cb6e-43e2-badd-ec073b2292b7-lib-modules\") pod \"kube-proxy-q9hxf\" (UID: \"8630e8e5-cb6e-43e2-badd-ec073b2292b7\") " pod="kube-system/kube-proxy-q9hxf" May 9 00:36:53.100461 kubelet[1775]: I0509 00:36:53.100418 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa0db9f9-defb-4cfb-83d4-3c83f783e892-tigera-ca-bundle\") pod \"calico-node-rrtwk\" (UID: \"fa0db9f9-defb-4cfb-83d4-3c83f783e892\") " pod="calico-system/calico-node-rrtwk" May 9 00:36:53.105800 systemd[1]: Created slice kubepods-besteffort-podfa0db9f9_defb_4cfb_83d4_3c83f783e892.slice - libcontainer container kubepods-besteffort-podfa0db9f9_defb_4cfb_83d4_3c83f783e892.slice. May 9 00:36:53.205874 kubelet[1775]: E0509 00:36:53.205817 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:36:53.205874 kubelet[1775]: W0509 00:36:53.205862 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:36:53.206044 kubelet[1775]: E0509 00:36:53.205897 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:36:53.255463 kubelet[1775]: E0509 00:36:53.255423 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:36:53.255463 kubelet[1775]: W0509 00:36:53.255448 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:36:53.255677 kubelet[1775]: E0509 00:36:53.255484 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:36:53.255748 kubelet[1775]: E0509 00:36:53.255735 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:36:53.255748 kubelet[1775]: W0509 00:36:53.255746 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:36:53.255798 kubelet[1775]: E0509 00:36:53.255755 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:36:53.257222 kubelet[1775]: E0509 00:36:53.257207 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:36:53.257222 kubelet[1775]: W0509 00:36:53.257220 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:36:53.257299 kubelet[1775]: E0509 00:36:53.257232 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:36:53.403150 kubelet[1775]: E0509 00:36:53.402969 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:36:53.404431 containerd[1465]: time="2025-05-09T00:36:53.404370389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q9hxf,Uid:8630e8e5-cb6e-43e2-badd-ec073b2292b7,Namespace:kube-system,Attempt:0,}" May 9 00:36:53.409002 kubelet[1775]: E0509 00:36:53.408947 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:36:53.409635 containerd[1465]: time="2025-05-09T00:36:53.409580003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rrtwk,Uid:fa0db9f9-defb-4cfb-83d4-3c83f783e892,Namespace:calico-system,Attempt:0,}" May 9 00:36:54.080115 kubelet[1775]: E0509 00:36:54.080049 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:36:54.122778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3342938744.mount: Deactivated successfully. 
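The repeated FlexVolume failures above most likely occur because the kubelet probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds before Calico's flexvol-driver init container has installed that binary: the exec fails with "executable file not found in $PATH", the call produces no stdout, and decoding the empty reply yields "unexpected end of JSON input". As a rough, hypothetical illustration only (not Calico's actual driver), a FlexVolume executable is expected to answer the init call with a small JSON status object on stdout:

    #!/usr/bin/env python3
    # Hypothetical minimal FlexVolume driver that answers only the "init" call.
    # An empty reply (or a missing binary, as in the log above) is what makes
    # the kubelet report "unexpected end of JSON input".
    import json
    import sys

    def main() -> int:
        if len(sys.argv) > 1 and sys.argv[1] == "init":
            # "attach": False tells the kubelet this driver needs no attach/detach phase.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
            return 0
        print(json.dumps({"status": "Not supported", "message": "only init is implemented"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())

Once the flexvol-driver container seen later in the log installs the real uds binary, these probe errors stop.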
May 9 00:36:54.131799 containerd[1465]: time="2025-05-09T00:36:54.131743412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:36:54.132710 containerd[1465]: time="2025-05-09T00:36:54.132679870Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:36:54.133586 containerd[1465]: time="2025-05-09T00:36:54.133512528Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 9 00:36:54.134460 containerd[1465]: time="2025-05-09T00:36:54.134433592Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 00:36:54.135286 containerd[1465]: time="2025-05-09T00:36:54.135255851Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:36:54.137971 containerd[1465]: time="2025-05-09T00:36:54.137939064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:36:54.140060 containerd[1465]: time="2025-05-09T00:36:54.139149999Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 734.384674ms" May 9 00:36:54.142128 containerd[1465]: time="2025-05-09T00:36:54.142073522Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 732.367812ms" May 9 00:36:54.462069 containerd[1465]: time="2025-05-09T00:36:54.461355914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:36:54.462069 containerd[1465]: time="2025-05-09T00:36:54.461415279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:36:54.462069 containerd[1465]: time="2025-05-09T00:36:54.461425538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:36:54.462069 containerd[1465]: time="2025-05-09T00:36:54.461517947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:36:54.464452 containerd[1465]: time="2025-05-09T00:36:54.462457934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:36:54.464452 containerd[1465]: time="2025-05-09T00:36:54.462514814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:36:54.464452 containerd[1465]: time="2025-05-09T00:36:54.462561224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:36:54.464452 containerd[1465]: time="2025-05-09T00:36:54.462665295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:36:54.578779 systemd[1]: Started cri-containerd-03a5a793bb4f61c8e6c8bb2d86a8f537d99e2d3b5682b5b640785624da245c73.scope - libcontainer container 03a5a793bb4f61c8e6c8bb2d86a8f537d99e2d3b5682b5b640785624da245c73. May 9 00:36:54.580061 kubelet[1775]: E0509 00:36:54.580007 1775 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8dpj" podUID="054e0ea9-c254-4f90-a1c5-22ee92a19ac0" May 9 00:36:54.585944 systemd[1]: Started cri-containerd-c81c2c7505e7a01db05a0035177fa441ac60b5887f6a9e6dfd7619c23a46ab34.scope - libcontainer container c81c2c7505e7a01db05a0035177fa441ac60b5887f6a9e6dfd7619c23a46ab34. May 9 00:36:54.657973 containerd[1465]: time="2025-05-09T00:36:54.657900097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rrtwk,Uid:fa0db9f9-defb-4cfb-83d4-3c83f783e892,Namespace:calico-system,Attempt:0,} returns sandbox id \"03a5a793bb4f61c8e6c8bb2d86a8f537d99e2d3b5682b5b640785624da245c73\"" May 9 00:36:54.659597 kubelet[1775]: E0509 00:36:54.659569 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:36:54.662348 containerd[1465]: time="2025-05-09T00:36:54.662293390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 9 00:36:54.678898 containerd[1465]: time="2025-05-09T00:36:54.678845656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q9hxf,Uid:8630e8e5-cb6e-43e2-badd-ec073b2292b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"c81c2c7505e7a01db05a0035177fa441ac60b5887f6a9e6dfd7619c23a46ab34\"" May 9 00:36:54.679736 kubelet[1775]: E0509 00:36:54.679710 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:36:55.081031 kubelet[1775]: E0509 00:36:55.080958 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:36:56.082102 kubelet[1775]: E0509 00:36:56.082023 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:36:56.143173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount304186539.mount: Deactivated successfully. 
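The recurring dns.go "Nameserver limits exceeded" warnings indicate the node's resolv.conf lists more resolvers than the kubelet will pass through; it keeps only the first three (the classic resolv.conf limit), which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A small sketch of that truncation, using an invented fourth resolver purely for illustration (the log only shows the three that were applied):

    # Hypothetical illustration of the kubelet's three-nameserver cap.
    configured = ["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"]  # last entry is made up
    MAX_NAMESERVERS = 3  # resolv.conf/glibc limit the kubelet enforces

    applied = configured[:MAX_NAMESERVERS]
    omitted = configured[MAX_NAMESERVERS:]
    print("applied:", " ".join(applied))   # applied: 1.1.1.1 1.0.0.1 8.8.8.8
    if omitted:
        print("omitted:", " ".join(omitted))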
May 9 00:36:56.258830 containerd[1465]: time="2025-05-09T00:36:56.258743944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:36:56.259484 containerd[1465]: time="2025-05-09T00:36:56.259415437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=6859697" May 9 00:36:56.260648 containerd[1465]: time="2025-05-09T00:36:56.260608008Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:36:56.262834 containerd[1465]: time="2025-05-09T00:36:56.262789805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:36:56.263588 containerd[1465]: time="2025-05-09T00:36:56.263514048Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.601171794s" May 9 00:36:56.263640 containerd[1465]: time="2025-05-09T00:36:56.263588057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 9 00:36:56.265038 containerd[1465]: time="2025-05-09T00:36:56.264625081Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 9 00:36:56.266363 containerd[1465]: time="2025-05-09T00:36:56.266330038Z" level=info msg="CreateContainer within sandbox \"03a5a793bb4f61c8e6c8bb2d86a8f537d99e2d3b5682b5b640785624da245c73\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 9 00:36:56.282098 containerd[1465]: time="2025-05-09T00:36:56.282055803Z" level=info msg="CreateContainer within sandbox \"03a5a793bb4f61c8e6c8bb2d86a8f537d99e2d3b5682b5b640785624da245c73\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"df57a185dc8b927b42af9866eb5e16734a593a6a1f543e95ea50705012206d03\"" May 9 00:36:56.282794 containerd[1465]: time="2025-05-09T00:36:56.282769328Z" level=info msg="StartContainer for \"df57a185dc8b927b42af9866eb5e16734a593a6a1f543e95ea50705012206d03\"" May 9 00:36:56.422672 systemd[1]: Started cri-containerd-df57a185dc8b927b42af9866eb5e16734a593a6a1f543e95ea50705012206d03.scope - libcontainer container df57a185dc8b927b42af9866eb5e16734a593a6a1f543e95ea50705012206d03. May 9 00:36:56.461309 containerd[1465]: time="2025-05-09T00:36:56.461262375Z" level=info msg="StartContainer for \"df57a185dc8b927b42af9866eb5e16734a593a6a1f543e95ea50705012206d03\" returns successfully" May 9 00:36:56.478796 systemd[1]: cri-containerd-df57a185dc8b927b42af9866eb5e16734a593a6a1f543e95ea50705012206d03.scope: Deactivated successfully. 
May 9 00:36:56.561153 containerd[1465]: time="2025-05-09T00:36:56.561055875Z" level=info msg="shim disconnected" id=df57a185dc8b927b42af9866eb5e16734a593a6a1f543e95ea50705012206d03 namespace=k8s.io May 9 00:36:56.561153 containerd[1465]: time="2025-05-09T00:36:56.561141024Z" level=warning msg="cleaning up after shim disconnected" id=df57a185dc8b927b42af9866eb5e16734a593a6a1f543e95ea50705012206d03 namespace=k8s.io May 9 00:36:56.561153 containerd[1465]: time="2025-05-09T00:36:56.561157227Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:36:56.578991 kubelet[1775]: E0509 00:36:56.578942 1775 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8dpj" podUID="054e0ea9-c254-4f90-a1c5-22ee92a19ac0" May 9 00:36:56.591288 kubelet[1775]: E0509 00:36:56.591252 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:36:57.097382 kubelet[1775]: E0509 00:36:57.090120 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:36:57.125737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df57a185dc8b927b42af9866eb5e16734a593a6a1f543e95ea50705012206d03-rootfs.mount: Deactivated successfully. May 9 00:36:58.090737 kubelet[1775]: E0509 00:36:58.090668 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:36:58.580649 kubelet[1775]: E0509 00:36:58.579159 1775 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8dpj" podUID="054e0ea9-c254-4f90-a1c5-22ee92a19ac0" May 9 00:36:59.093651 kubelet[1775]: E0509 00:36:59.093568 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:36:59.315231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1221324233.mount: Deactivated successfully. 
May 9 00:37:00.098587 kubelet[1775]: E0509 00:37:00.098449 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:00.579972 kubelet[1775]: E0509 00:37:00.579406 1775 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8dpj" podUID="054e0ea9-c254-4f90-a1c5-22ee92a19ac0" May 9 00:37:01.099330 kubelet[1775]: E0509 00:37:01.099187 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:01.889029 containerd[1465]: time="2025-05-09T00:37:01.888843129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:01.890138 containerd[1465]: time="2025-05-09T00:37:01.890052595Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 9 00:37:01.892587 containerd[1465]: time="2025-05-09T00:37:01.892414811Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:01.896980 containerd[1465]: time="2025-05-09T00:37:01.896888744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:01.898999 containerd[1465]: time="2025-05-09T00:37:01.898918900Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 5.634255771s" May 9 00:37:01.898999 containerd[1465]: time="2025-05-09T00:37:01.898960102Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 9 00:37:01.902724 containerd[1465]: time="2025-05-09T00:37:01.902650205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 9 00:37:01.905395 containerd[1465]: time="2025-05-09T00:37:01.905222968Z" level=info msg="CreateContainer within sandbox \"c81c2c7505e7a01db05a0035177fa441ac60b5887f6a9e6dfd7619c23a46ab34\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 00:37:01.947823 containerd[1465]: time="2025-05-09T00:37:01.947726943Z" level=info msg="CreateContainer within sandbox \"c81c2c7505e7a01db05a0035177fa441ac60b5887f6a9e6dfd7619c23a46ab34\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c8179102b27b9c0f4ee02d52ef63aba0ecb5ea7338df2a54224bb3f735fd44db\"" May 9 00:37:01.949609 containerd[1465]: time="2025-05-09T00:37:01.948795995Z" level=info msg="StartContainer for \"c8179102b27b9c0f4ee02d52ef63aba0ecb5ea7338df2a54224bb3f735fd44db\"" May 9 00:37:02.101094 kubelet[1775]: E0509 00:37:02.100229 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:02.153907 systemd[1]: Started 
cri-containerd-c8179102b27b9c0f4ee02d52ef63aba0ecb5ea7338df2a54224bb3f735fd44db.scope - libcontainer container c8179102b27b9c0f4ee02d52ef63aba0ecb5ea7338df2a54224bb3f735fd44db. May 9 00:37:02.247049 containerd[1465]: time="2025-05-09T00:37:02.246943286Z" level=info msg="StartContainer for \"c8179102b27b9c0f4ee02d52ef63aba0ecb5ea7338df2a54224bb3f735fd44db\" returns successfully" May 9 00:37:02.579568 kubelet[1775]: E0509 00:37:02.579295 1775 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8dpj" podUID="054e0ea9-c254-4f90-a1c5-22ee92a19ac0" May 9 00:37:02.741132 kubelet[1775]: E0509 00:37:02.741050 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:03.100758 kubelet[1775]: E0509 00:37:03.100643 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:03.745938 kubelet[1775]: E0509 00:37:03.745008 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:04.118496 kubelet[1775]: E0509 00:37:04.111852 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:04.579599 kubelet[1775]: E0509 00:37:04.579238 1775 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8dpj" podUID="054e0ea9-c254-4f90-a1c5-22ee92a19ac0" May 9 00:37:05.112249 kubelet[1775]: E0509 00:37:05.112153 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:06.114058 kubelet[1775]: E0509 00:37:06.112456 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:06.579165 kubelet[1775]: E0509 00:37:06.578917 1775 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8dpj" podUID="054e0ea9-c254-4f90-a1c5-22ee92a19ac0" May 9 00:37:07.281658 kubelet[1775]: E0509 00:37:07.281575 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:08.282207 kubelet[1775]: E0509 00:37:08.282114 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:08.579116 kubelet[1775]: E0509 00:37:08.578593 1775 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8dpj" podUID="054e0ea9-c254-4f90-a1c5-22ee92a19ac0" May 9 00:37:09.285284 kubelet[1775]: E0509 00:37:09.285161 1775 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:10.289554 kubelet[1775]: E0509 00:37:10.289370 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:10.579815 kubelet[1775]: E0509 00:37:10.579557 1775 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8dpj" podUID="054e0ea9-c254-4f90-a1c5-22ee92a19ac0" May 9 00:37:11.081712 kubelet[1775]: E0509 00:37:11.078558 1775 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:11.290214 kubelet[1775]: E0509 00:37:11.290086 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:12.290861 kubelet[1775]: E0509 00:37:12.290777 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:12.580280 kubelet[1775]: E0509 00:37:12.579498 1775 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8dpj" podUID="054e0ea9-c254-4f90-a1c5-22ee92a19ac0" May 9 00:37:13.292237 kubelet[1775]: E0509 00:37:13.291986 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:14.292488 kubelet[1775]: E0509 00:37:14.292401 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:14.542092 containerd[1465]: time="2025-05-09T00:37:14.541990372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:14.543079 containerd[1465]: time="2025-05-09T00:37:14.542822503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 9 00:37:14.544696 containerd[1465]: time="2025-05-09T00:37:14.544645585Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:14.547045 containerd[1465]: time="2025-05-09T00:37:14.547005309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:14.548146 containerd[1465]: time="2025-05-09T00:37:14.548092594Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 12.645391397s" May 9 00:37:14.548146 containerd[1465]: time="2025-05-09T00:37:14.548137609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 9 
00:37:14.550658 containerd[1465]: time="2025-05-09T00:37:14.550598900Z" level=info msg="CreateContainer within sandbox \"03a5a793bb4f61c8e6c8bb2d86a8f537d99e2d3b5682b5b640785624da245c73\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 9 00:37:14.573062 containerd[1465]: time="2025-05-09T00:37:14.572970518Z" level=info msg="CreateContainer within sandbox \"03a5a793bb4f61c8e6c8bb2d86a8f537d99e2d3b5682b5b640785624da245c73\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c054b2aef9899bd408dc3bdd5a95c8a96a78ba00679b226be40ab14516bb240e\"" May 9 00:37:14.573779 containerd[1465]: time="2025-05-09T00:37:14.573731438Z" level=info msg="StartContainer for \"c054b2aef9899bd408dc3bdd5a95c8a96a78ba00679b226be40ab14516bb240e\"" May 9 00:37:14.578449 kubelet[1775]: E0509 00:37:14.578351 1775 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8dpj" podUID="054e0ea9-c254-4f90-a1c5-22ee92a19ac0" May 9 00:37:14.735769 systemd[1]: Started cri-containerd-c054b2aef9899bd408dc3bdd5a95c8a96a78ba00679b226be40ab14516bb240e.scope - libcontainer container c054b2aef9899bd408dc3bdd5a95c8a96a78ba00679b226be40ab14516bb240e. May 9 00:37:14.853724 containerd[1465]: time="2025-05-09T00:37:14.853506085Z" level=info msg="StartContainer for \"c054b2aef9899bd408dc3bdd5a95c8a96a78ba00679b226be40ab14516bb240e\" returns successfully" May 9 00:37:15.293644 kubelet[1775]: E0509 00:37:15.293358 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:15.815817 kubelet[1775]: E0509 00:37:15.815739 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:15.867347 kubelet[1775]: I0509 00:37:15.867229 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q9hxf" podStartSLOduration=17.646729536 podStartE2EDuration="24.86719607s" podCreationTimestamp="2025-05-09 00:36:51 +0000 UTC" firstStartedPulling="2025-05-09 00:36:54.680695729 +0000 UTC m=+3.963627168" lastFinishedPulling="2025-05-09 00:37:01.901162263 +0000 UTC m=+11.184093702" observedRunningTime="2025-05-09 00:37:02.777362849 +0000 UTC m=+12.060294289" watchObservedRunningTime="2025-05-09 00:37:15.86719607 +0000 UTC m=+25.150127519" May 9 00:37:16.294778 kubelet[1775]: E0509 00:37:16.294593 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:16.542085 systemd[1]: cri-containerd-c054b2aef9899bd408dc3bdd5a95c8a96a78ba00679b226be40ab14516bb240e.scope: Deactivated successfully. May 9 00:37:16.542427 systemd[1]: cri-containerd-c054b2aef9899bd408dc3bdd5a95c8a96a78ba00679b226be40ab14516bb240e.scope: Consumed 1.225s CPU time. May 9 00:37:16.567571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c054b2aef9899bd408dc3bdd5a95c8a96a78ba00679b226be40ab14516bb240e-rootfs.mount: Deactivated successfully. 
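The kube-proxy-q9hxf startup figures above are internally consistent if podStartSLOduration is read as the end-to-end startup time minus the time spent pulling images, which is how the kubelet's startup-latency tracker is generally understood to work; a quick check using only the timestamps from that log entry:

    # Recomputing the kube-proxy-q9hxf numbers reported by pod_startup_latency_tracker.
    # Seconds are measured from 00:36:51 UTC, the pod's creation timestamp.
    # podStartE2EDuration also equals watchObservedRunningTime (00:37:15.86719607) minus creation.
    e2e_duration   = 24.86719607                  # podStartE2EDuration
    pull_started   = 3.680695729                  # firstStartedPulling 00:36:54.680695729
    pull_finished  = 10.901162263                 # lastFinishedPulling 00:37:01.901162263
    pull_duration  = pull_finished - pull_started # ~7.220466534 s spent pulling images

    slo_duration = e2e_duration - pull_duration
    print(round(slo_duration, 9))                 # ~17.646729536, matching podStartSLOduration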
May 9 00:37:16.578775 kubelet[1775]: E0509 00:37:16.578699 1775 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8dpj" podUID="054e0ea9-c254-4f90-a1c5-22ee92a19ac0" May 9 00:37:16.627292 kubelet[1775]: I0509 00:37:16.627243 1775 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 9 00:37:16.817093 kubelet[1775]: E0509 00:37:16.817059 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:16.859509 containerd[1465]: time="2025-05-09T00:37:16.859308561Z" level=info msg="shim disconnected" id=c054b2aef9899bd408dc3bdd5a95c8a96a78ba00679b226be40ab14516bb240e namespace=k8s.io May 9 00:37:16.859509 containerd[1465]: time="2025-05-09T00:37:16.859411337Z" level=warning msg="cleaning up after shim disconnected" id=c054b2aef9899bd408dc3bdd5a95c8a96a78ba00679b226be40ab14516bb240e namespace=k8s.io May 9 00:37:16.859509 containerd[1465]: time="2025-05-09T00:37:16.859425326Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:37:17.296223 kubelet[1775]: E0509 00:37:17.296030 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:17.820147 kubelet[1775]: E0509 00:37:17.820103 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:17.821033 containerd[1465]: time="2025-05-09T00:37:17.820983991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 9 00:37:18.296831 kubelet[1775]: E0509 00:37:18.296640 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:18.585865 systemd[1]: Created slice kubepods-besteffort-pod054e0ea9_c254_4f90_a1c5_22ee92a19ac0.slice - libcontainer container kubepods-besteffort-pod054e0ea9_c254_4f90_a1c5_22ee92a19ac0.slice. 
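The kubepods-besteffort-pod… slice names systemd reports (here for pod UID 054e0ea9-c254-4f90-a1c5-22ee92a19ac0, earlier for fa0db9f9-…) follow a simple convention under the systemd cgroup driver: the pod's QoS class plus its UID with dashes replaced by underscores. A minimal sketch of that mapping for the BestEffort pods seen in this log:

    # Sketch of the systemd slice name the kubelet uses for a BestEffort pod,
    # matching the "Created slice kubepods-besteffort-pod..." entries above.
    def besteffort_pod_slice(pod_uid: str) -> str:
        return "kubepods-besteffort-pod" + pod_uid.replace("-", "_") + ".slice"

    print(besteffort_pod_slice("054e0ea9-c254-4f90-a1c5-22ee92a19ac0"))
    # kubepods-besteffort-pod054e0ea9_c254_4f90_a1c5_22ee92a19ac0.slice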
May 9 00:37:18.589348 containerd[1465]: time="2025-05-09T00:37:18.589296368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v8dpj,Uid:054e0ea9-c254-4f90-a1c5-22ee92a19ac0,Namespace:calico-system,Attempt:0,}" May 9 00:37:18.923394 containerd[1465]: time="2025-05-09T00:37:18.923245112Z" level=error msg="Failed to destroy network for sandbox \"a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:37:18.924041 containerd[1465]: time="2025-05-09T00:37:18.924004164Z" level=error msg="encountered an error cleaning up failed sandbox \"a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:37:18.924209 containerd[1465]: time="2025-05-09T00:37:18.924069620Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v8dpj,Uid:054e0ea9-c254-4f90-a1c5-22ee92a19ac0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:37:18.924451 kubelet[1775]: E0509 00:37:18.924369 1775 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:37:18.924548 kubelet[1775]: E0509 00:37:18.924477 1775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v8dpj" May 9 00:37:18.924548 kubelet[1775]: E0509 00:37:18.924509 1775 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v8dpj" May 9 00:37:18.924628 kubelet[1775]: E0509 00:37:18.924589 1775 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-v8dpj_calico-system(054e0ea9-c254-4f90-a1c5-22ee92a19ac0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v8dpj_calico-system(054e0ea9-c254-4f90-a1c5-22ee92a19ac0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v8dpj" podUID="054e0ea9-c254-4f90-a1c5-22ee92a19ac0" May 9 00:37:18.925556 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54-shm.mount: Deactivated successfully. May 9 00:37:19.297690 kubelet[1775]: E0509 00:37:19.297462 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:19.732145 kubelet[1775]: I0509 00:37:19.731956 1775 topology_manager.go:215] "Topology Admit Handler" podUID="3a952d59-b485-4031-b443-6ceead4593f1" podNamespace="default" podName="nginx-deployment-85f456d6dd-z75gw" May 9 00:37:19.739660 systemd[1]: Created slice kubepods-besteffort-pod3a952d59_b485_4031_b443_6ceead4593f1.slice - libcontainer container kubepods-besteffort-pod3a952d59_b485_4031_b443_6ceead4593f1.slice. May 9 00:37:19.824491 kubelet[1775]: I0509 00:37:19.824429 1775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" May 9 00:37:19.825275 containerd[1465]: time="2025-05-09T00:37:19.825220915Z" level=info msg="StopPodSandbox for \"a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54\"" May 9 00:37:19.825672 containerd[1465]: time="2025-05-09T00:37:19.825470299Z" level=info msg="Ensure that sandbox a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54 in task-service has been cleanup successfully" May 9 00:37:19.829605 kubelet[1775]: I0509 00:37:19.829556 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b7zr\" (UniqueName: \"kubernetes.io/projected/3a952d59-b485-4031-b443-6ceead4593f1-kube-api-access-8b7zr\") pod \"nginx-deployment-85f456d6dd-z75gw\" (UID: \"3a952d59-b485-4031-b443-6ceead4593f1\") " pod="default/nginx-deployment-85f456d6dd-z75gw" May 9 00:37:19.862382 containerd[1465]: time="2025-05-09T00:37:19.862227106Z" level=error msg="StopPodSandbox for \"a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54\" failed" error="failed to destroy network for sandbox \"a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:37:19.862601 kubelet[1775]: E0509 00:37:19.862548 1775 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" May 9 00:37:19.862668 kubelet[1775]: E0509 00:37:19.862626 1775 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54"} May 9 00:37:19.862704 kubelet[1775]: E0509 00:37:19.862684 1775 kuberuntime_manager.go:1075] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"054e0ea9-c254-4f90-a1c5-22ee92a19ac0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 00:37:19.862784 kubelet[1775]: E0509 00:37:19.862710 1775 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"054e0ea9-c254-4f90-a1c5-22ee92a19ac0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v8dpj" podUID="054e0ea9-c254-4f90-a1c5-22ee92a19ac0" May 9 00:37:20.044558 containerd[1465]: time="2025-05-09T00:37:20.044327733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-z75gw,Uid:3a952d59-b485-4031-b443-6ceead4593f1,Namespace:default,Attempt:0,}" May 9 00:37:20.117590 containerd[1465]: time="2025-05-09T00:37:20.117505897Z" level=error msg="Failed to destroy network for sandbox \"bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:37:20.120447 containerd[1465]: time="2025-05-09T00:37:20.118060463Z" level=error msg="encountered an error cleaning up failed sandbox \"bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:37:20.120447 containerd[1465]: time="2025-05-09T00:37:20.118148834Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-z75gw,Uid:3a952d59-b485-4031-b443-6ceead4593f1,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:37:20.119665 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6-shm.mount: Deactivated successfully. 
May 9 00:37:20.120722 kubelet[1775]: E0509 00:37:20.119454 1775 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:37:20.120722 kubelet[1775]: E0509 00:37:20.119538 1775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-z75gw" May 9 00:37:20.120722 kubelet[1775]: E0509 00:37:20.119559 1775 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-z75gw" May 9 00:37:20.120834 kubelet[1775]: E0509 00:37:20.119599 1775 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-z75gw_default(3a952d59-b485-4031-b443-6ceead4593f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-z75gw_default(3a952d59-b485-4031-b443-6ceead4593f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-z75gw" podUID="3a952d59-b485-4031-b443-6ceead4593f1" May 9 00:37:20.298068 kubelet[1775]: E0509 00:37:20.297856 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:20.828421 kubelet[1775]: I0509 00:37:20.828352 1775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" May 9 00:37:20.829997 containerd[1465]: time="2025-05-09T00:37:20.829907274Z" level=info msg="StopPodSandbox for \"bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6\"" May 9 00:37:20.853228 containerd[1465]: time="2025-05-09T00:37:20.853091989Z" level=info msg="Ensure that sandbox bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6 in task-service has been cleanup successfully" May 9 00:37:20.913187 containerd[1465]: time="2025-05-09T00:37:20.913105445Z" level=error msg="StopPodSandbox for \"bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6\" failed" error="failed to destroy network for sandbox \"bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 
00:37:20.913469 kubelet[1775]: E0509 00:37:20.913416 1775 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" May 9 00:37:20.913518 kubelet[1775]: E0509 00:37:20.913483 1775 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6"} May 9 00:37:20.913588 kubelet[1775]: E0509 00:37:20.913552 1775 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3a952d59-b485-4031-b443-6ceead4593f1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 00:37:20.913676 kubelet[1775]: E0509 00:37:20.913603 1775 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3a952d59-b485-4031-b443-6ceead4593f1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-z75gw" podUID="3a952d59-b485-4031-b443-6ceead4593f1" May 9 00:37:21.298588 kubelet[1775]: E0509 00:37:21.298372 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:22.299639 kubelet[1775]: E0509 00:37:22.299561 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:23.300763 kubelet[1775]: E0509 00:37:23.300699 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:24.301592 kubelet[1775]: E0509 00:37:24.301513 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:24.940651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1542529533.mount: Deactivated successfully. 
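Every sandbox create and delete for csi-node-driver-v8dpj and the nginx deployment pod fails the same way here: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico-node only writes once it is running, and at this point the calico/node image is still being pulled. A tiny hypothetical sketch that mirrors the precondition named in the error text above:

    # Hypothetical re-creation of the check behind the errors above:
    # /var/lib/calico/nodename must exist (calico-node writes it at startup)
    # before pods can be networked on this node.
    from pathlib import Path

    nodename = Path("/var/lib/calico/nodename")
    if not nodename.exists():
        raise SystemExit(
            "stat /var/lib/calico/nodename: no such file or directory: "
            "check that the calico/node container is running and has mounted /var/lib/calico/"
        )
    print("node name:", nodename.read_text().strip())

Once calico-node starts later in the log, the stale sandboxes are torn down and the pods are retried successfully.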
May 9 00:37:25.303371 kubelet[1775]: E0509 00:37:25.303135 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:25.434785 containerd[1465]: time="2025-05-09T00:37:25.434692045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:25.435611 containerd[1465]: time="2025-05-09T00:37:25.435568840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 9 00:37:25.436872 containerd[1465]: time="2025-05-09T00:37:25.436795904Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:25.439218 containerd[1465]: time="2025-05-09T00:37:25.439180453Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:25.439998 containerd[1465]: time="2025-05-09T00:37:25.439937478Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 7.618899266s" May 9 00:37:25.440081 containerd[1465]: time="2025-05-09T00:37:25.439986596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 9 00:37:25.666007 containerd[1465]: time="2025-05-09T00:37:25.665810540Z" level=info msg="CreateContainer within sandbox \"03a5a793bb4f61c8e6c8bb2d86a8f537d99e2d3b5682b5b640785624da245c73\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 9 00:37:25.682861 containerd[1465]: time="2025-05-09T00:37:25.682811766Z" level=info msg="CreateContainer within sandbox \"03a5a793bb4f61c8e6c8bb2d86a8f537d99e2d3b5682b5b640785624da245c73\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b88bba1aa7af3d38ab392492e96d5520b9d028cebeb68af14af7cbe887ff6b08\"" May 9 00:37:25.683520 containerd[1465]: time="2025-05-09T00:37:25.683469141Z" level=info msg="StartContainer for \"b88bba1aa7af3d38ab392492e96d5520b9d028cebeb68af14af7cbe887ff6b08\"" May 9 00:37:25.729669 systemd[1]: Started cri-containerd-b88bba1aa7af3d38ab392492e96d5520b9d028cebeb68af14af7cbe887ff6b08.scope - libcontainer container b88bba1aa7af3d38ab392492e96d5520b9d028cebeb68af14af7cbe887ff6b08. May 9 00:37:25.777727 containerd[1465]: time="2025-05-09T00:37:25.777668052Z" level=info msg="StartContainer for \"b88bba1aa7af3d38ab392492e96d5520b9d028cebeb68af14af7cbe887ff6b08\" returns successfully" May 9 00:37:25.839001 kubelet[1775]: E0509 00:37:25.838961 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:25.852435 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 9 00:37:25.853074 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
May 9 00:37:26.303573 kubelet[1775]: E0509 00:37:26.303439 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:26.842203 kubelet[1775]: E0509 00:37:26.840181 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:27.313706 kubelet[1775]: E0509 00:37:27.313510 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:27.485695 kernel: bpftool[2598]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 9 00:37:27.733860 systemd-networkd[1399]: vxlan.calico: Link UP May 9 00:37:27.733873 systemd-networkd[1399]: vxlan.calico: Gained carrier May 9 00:37:28.313900 kubelet[1775]: E0509 00:37:28.313839 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:29.187880 systemd-networkd[1399]: vxlan.calico: Gained IPv6LL May 9 00:37:29.273017 update_engine[1448]: I20250509 00:37:29.272862 1448 update_attempter.cc:509] Updating boot flags... May 9 00:37:29.300623 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2618) May 9 00:37:29.314233 kubelet[1775]: E0509 00:37:29.314192 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:29.334576 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2618) May 9 00:37:29.362558 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2618) May 9 00:37:30.315965 kubelet[1775]: E0509 00:37:30.315876 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:30.579813 containerd[1465]: time="2025-05-09T00:37:30.579591181Z" level=info msg="StopPodSandbox for \"a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54\"" May 9 00:37:30.638255 kubelet[1775]: I0509 00:37:30.638169 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rrtwk" podStartSLOduration=8.858273085 podStartE2EDuration="39.638145362s" podCreationTimestamp="2025-05-09 00:36:51 +0000 UTC" firstStartedPulling="2025-05-09 00:36:54.660957906 +0000 UTC m=+3.943889345" lastFinishedPulling="2025-05-09 00:37:25.440830183 +0000 UTC m=+34.723761622" observedRunningTime="2025-05-09 00:37:25.852842068 +0000 UTC m=+35.135773508" watchObservedRunningTime="2025-05-09 00:37:30.638145362 +0000 UTC m=+39.921076801" May 9 00:37:30.686172 containerd[1465]: 2025-05-09 00:37:30.638 [INFO][2696] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" May 9 00:37:30.686172 containerd[1465]: 2025-05-09 00:37:30.638 [INFO][2696] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" iface="eth0" netns="/var/run/netns/cni-ce17d686-04cb-b14e-eab4-1cc27a1cf1b2" May 9 00:37:30.686172 containerd[1465]: 2025-05-09 00:37:30.638 [INFO][2696] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" iface="eth0" netns="/var/run/netns/cni-ce17d686-04cb-b14e-eab4-1cc27a1cf1b2" May 9 00:37:30.686172 containerd[1465]: 2025-05-09 00:37:30.639 [INFO][2696] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" iface="eth0" netns="/var/run/netns/cni-ce17d686-04cb-b14e-eab4-1cc27a1cf1b2" May 9 00:37:30.686172 containerd[1465]: 2025-05-09 00:37:30.639 [INFO][2696] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" May 9 00:37:30.686172 containerd[1465]: 2025-05-09 00:37:30.639 [INFO][2696] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" May 9 00:37:30.686172 containerd[1465]: 2025-05-09 00:37:30.668 [INFO][2704] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" HandleID="k8s-pod-network.a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" Workload="10.0.0.112-k8s-csi--node--driver--v8dpj-eth0" May 9 00:37:30.686172 containerd[1465]: 2025-05-09 00:37:30.669 [INFO][2704] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:37:30.686172 containerd[1465]: 2025-05-09 00:37:30.669 [INFO][2704] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:37:30.686172 containerd[1465]: 2025-05-09 00:37:30.675 [WARNING][2704] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" HandleID="k8s-pod-network.a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" Workload="10.0.0.112-k8s-csi--node--driver--v8dpj-eth0" May 9 00:37:30.686172 containerd[1465]: 2025-05-09 00:37:30.675 [INFO][2704] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" HandleID="k8s-pod-network.a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" Workload="10.0.0.112-k8s-csi--node--driver--v8dpj-eth0" May 9 00:37:30.686172 containerd[1465]: 2025-05-09 00:37:30.678 [INFO][2704] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:37:30.686172 containerd[1465]: 2025-05-09 00:37:30.683 [INFO][2696] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" May 9 00:37:30.686898 containerd[1465]: time="2025-05-09T00:37:30.686424093Z" level=info msg="TearDown network for sandbox \"a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54\" successfully" May 9 00:37:30.686898 containerd[1465]: time="2025-05-09T00:37:30.686461547Z" level=info msg="StopPodSandbox for \"a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54\" returns successfully" May 9 00:37:30.687814 containerd[1465]: time="2025-05-09T00:37:30.687781632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v8dpj,Uid:054e0ea9-c254-4f90-a1c5-22ee92a19ac0,Namespace:calico-system,Attempt:1,}" May 9 00:37:30.688739 systemd[1]: run-netns-cni\x2dce17d686\x2d04cb\x2db14e\x2deab4\x2d1cc27a1cf1b2.mount: Deactivated successfully. 
May 9 00:37:31.078967 kubelet[1775]: E0509 00:37:31.078902 1775 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:31.316249 kubelet[1775]: E0509 00:37:31.316179 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:31.433918 systemd-networkd[1399]: calibbfc9dd4f65: Link UP May 9 00:37:31.437098 systemd-networkd[1399]: calibbfc9dd4f65: Gained carrier May 9 00:37:31.448683 containerd[1465]: 2025-05-09 00:37:31.353 [INFO][2712] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.112-k8s-csi--node--driver--v8dpj-eth0 csi-node-driver- calico-system 054e0ea9-c254-4f90-a1c5-22ee92a19ac0 1022 0 2025-05-09 00:36:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.112 csi-node-driver-v8dpj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibbfc9dd4f65 [] []}} ContainerID="07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d" Namespace="calico-system" Pod="csi-node-driver-v8dpj" WorkloadEndpoint="10.0.0.112-k8s-csi--node--driver--v8dpj-" May 9 00:37:31.448683 containerd[1465]: 2025-05-09 00:37:31.353 [INFO][2712] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d" Namespace="calico-system" Pod="csi-node-driver-v8dpj" WorkloadEndpoint="10.0.0.112-k8s-csi--node--driver--v8dpj-eth0" May 9 00:37:31.448683 containerd[1465]: 2025-05-09 00:37:31.385 [INFO][2728] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d" HandleID="k8s-pod-network.07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d" Workload="10.0.0.112-k8s-csi--node--driver--v8dpj-eth0" May 9 00:37:31.448683 containerd[1465]: 2025-05-09 00:37:31.394 [INFO][2728] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d" HandleID="k8s-pod-network.07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d" Workload="10.0.0.112-k8s-csi--node--driver--v8dpj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000362350), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.112", "pod":"csi-node-driver-v8dpj", "timestamp":"2025-05-09 00:37:31.385568749 +0000 UTC"}, Hostname:"10.0.0.112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 00:37:31.448683 containerd[1465]: 2025-05-09 00:37:31.394 [INFO][2728] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:37:31.448683 containerd[1465]: 2025-05-09 00:37:31.394 [INFO][2728] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 9 00:37:31.448683 containerd[1465]: 2025-05-09 00:37:31.394 [INFO][2728] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.112' May 9 00:37:31.448683 containerd[1465]: 2025-05-09 00:37:31.396 [INFO][2728] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d" host="10.0.0.112" May 9 00:37:31.448683 containerd[1465]: 2025-05-09 00:37:31.399 [INFO][2728] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.112" May 9 00:37:31.448683 containerd[1465]: 2025-05-09 00:37:31.403 [INFO][2728] ipam/ipam.go 489: Trying affinity for 192.168.54.64/26 host="10.0.0.112" May 9 00:37:31.448683 containerd[1465]: 2025-05-09 00:37:31.406 [INFO][2728] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.64/26 host="10.0.0.112" May 9 00:37:31.448683 containerd[1465]: 2025-05-09 00:37:31.408 [INFO][2728] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.64/26 host="10.0.0.112" May 9 00:37:31.448683 containerd[1465]: 2025-05-09 00:37:31.409 [INFO][2728] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.64/26 handle="k8s-pod-network.07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d" host="10.0.0.112" May 9 00:37:31.448683 containerd[1465]: 2025-05-09 00:37:31.413 [INFO][2728] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d May 9 00:37:31.448683 containerd[1465]: 2025-05-09 00:37:31.417 [INFO][2728] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.64/26 handle="k8s-pod-network.07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d" host="10.0.0.112" May 9 00:37:31.448683 containerd[1465]: 2025-05-09 00:37:31.426 [INFO][2728] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.65/26] block=192.168.54.64/26 handle="k8s-pod-network.07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d" host="10.0.0.112" May 9 00:37:31.448683 containerd[1465]: 2025-05-09 00:37:31.426 [INFO][2728] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.65/26] handle="k8s-pod-network.07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d" host="10.0.0.112" May 9 00:37:31.448683 containerd[1465]: 2025-05-09 00:37:31.426 [INFO][2728] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 9 00:37:31.448683 containerd[1465]: 2025-05-09 00:37:31.426 [INFO][2728] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.65/26] IPv6=[] ContainerID="07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d" HandleID="k8s-pod-network.07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d" Workload="10.0.0.112-k8s-csi--node--driver--v8dpj-eth0" May 9 00:37:31.449402 containerd[1465]: 2025-05-09 00:37:31.429 [INFO][2712] cni-plugin/k8s.go 386: Populated endpoint ContainerID="07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d" Namespace="calico-system" Pod="csi-node-driver-v8dpj" WorkloadEndpoint="10.0.0.112-k8s-csi--node--driver--v8dpj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.112-k8s-csi--node--driver--v8dpj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"054e0ea9-c254-4f90-a1c5-22ee92a19ac0", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 36, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.112", ContainerID:"", Pod:"csi-node-driver-v8dpj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibbfc9dd4f65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:37:31.449402 containerd[1465]: 2025-05-09 00:37:31.429 [INFO][2712] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.65/32] ContainerID="07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d" Namespace="calico-system" Pod="csi-node-driver-v8dpj" WorkloadEndpoint="10.0.0.112-k8s-csi--node--driver--v8dpj-eth0" May 9 00:37:31.449402 containerd[1465]: 2025-05-09 00:37:31.429 [INFO][2712] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibbfc9dd4f65 ContainerID="07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d" Namespace="calico-system" Pod="csi-node-driver-v8dpj" WorkloadEndpoint="10.0.0.112-k8s-csi--node--driver--v8dpj-eth0" May 9 00:37:31.449402 containerd[1465]: 2025-05-09 00:37:31.437 [INFO][2712] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d" Namespace="calico-system" Pod="csi-node-driver-v8dpj" WorkloadEndpoint="10.0.0.112-k8s-csi--node--driver--v8dpj-eth0" May 9 00:37:31.449402 containerd[1465]: 2025-05-09 00:37:31.437 [INFO][2712] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d" Namespace="calico-system" Pod="csi-node-driver-v8dpj" WorkloadEndpoint="10.0.0.112-k8s-csi--node--driver--v8dpj-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.112-k8s-csi--node--driver--v8dpj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"054e0ea9-c254-4f90-a1c5-22ee92a19ac0", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 36, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.112", ContainerID:"07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d", Pod:"csi-node-driver-v8dpj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibbfc9dd4f65", MAC:"f6:ab:8c:3f:7b:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:37:31.449402 containerd[1465]: 2025-05-09 00:37:31.444 [INFO][2712] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d" Namespace="calico-system" Pod="csi-node-driver-v8dpj" WorkloadEndpoint="10.0.0.112-k8s-csi--node--driver--v8dpj-eth0" May 9 00:37:31.470029 containerd[1465]: time="2025-05-09T00:37:31.469860232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:37:31.470029 containerd[1465]: time="2025-05-09T00:37:31.469972894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:37:31.470029 containerd[1465]: time="2025-05-09T00:37:31.469988334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:37:31.470316 containerd[1465]: time="2025-05-09T00:37:31.470094923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:37:31.501812 systemd[1]: Started cri-containerd-07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d.scope - libcontainer container 07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d. 
May 9 00:37:31.518900 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:37:31.536616 containerd[1465]: time="2025-05-09T00:37:31.536503020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v8dpj,Uid:054e0ea9-c254-4f90-a1c5-22ee92a19ac0,Namespace:calico-system,Attempt:1,} returns sandbox id \"07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d\"" May 9 00:37:31.538787 containerd[1465]: time="2025-05-09T00:37:31.538755235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 9 00:37:32.316799 kubelet[1775]: E0509 00:37:32.316756 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:32.963808 systemd-networkd[1399]: calibbfc9dd4f65: Gained IPv6LL May 9 00:37:33.051624 containerd[1465]: time="2025-05-09T00:37:33.051556948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:33.053898 containerd[1465]: time="2025-05-09T00:37:33.053837716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 9 00:37:33.055108 containerd[1465]: time="2025-05-09T00:37:33.055081120Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:33.057648 containerd[1465]: time="2025-05-09T00:37:33.057321148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:33.060552 containerd[1465]: time="2025-05-09T00:37:33.059563210Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.520770071s" May 9 00:37:33.060552 containerd[1465]: time="2025-05-09T00:37:33.059608148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 9 00:37:33.063922 containerd[1465]: time="2025-05-09T00:37:33.063873946Z" level=info msg="CreateContainer within sandbox \"07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 9 00:37:33.084017 containerd[1465]: time="2025-05-09T00:37:33.083946870Z" level=info msg="CreateContainer within sandbox \"07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5cd8e1840f3e995e730f757d52e663fb9a906942135b6934988b34f24c3ae057\"" May 9 00:37:33.084556 containerd[1465]: time="2025-05-09T00:37:33.084489838Z" level=info msg="StartContainer for \"5cd8e1840f3e995e730f757d52e663fb9a906942135b6934988b34f24c3ae057\"" May 9 00:37:33.124712 systemd[1]: Started cri-containerd-5cd8e1840f3e995e730f757d52e663fb9a906942135b6934988b34f24c3ae057.scope - libcontainer container 5cd8e1840f3e995e730f757d52e663fb9a906942135b6934988b34f24c3ae057. 
May 9 00:37:33.163762 containerd[1465]: time="2025-05-09T00:37:33.163694289Z" level=info msg="StartContainer for \"5cd8e1840f3e995e730f757d52e663fb9a906942135b6934988b34f24c3ae057\" returns successfully" May 9 00:37:33.165952 containerd[1465]: time="2025-05-09T00:37:33.165690893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 9 00:37:33.316910 kubelet[1775]: E0509 00:37:33.316857 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:33.580510 containerd[1465]: time="2025-05-09T00:37:33.579902126Z" level=info msg="StopPodSandbox for \"bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6\"" May 9 00:37:33.667773 containerd[1465]: 2025-05-09 00:37:33.626 [INFO][2855] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" May 9 00:37:33.667773 containerd[1465]: 2025-05-09 00:37:33.626 [INFO][2855] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" iface="eth0" netns="/var/run/netns/cni-31a16fdc-2491-b0f8-d13a-a7869482df99" May 9 00:37:33.667773 containerd[1465]: 2025-05-09 00:37:33.627 [INFO][2855] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" iface="eth0" netns="/var/run/netns/cni-31a16fdc-2491-b0f8-d13a-a7869482df99" May 9 00:37:33.667773 containerd[1465]: 2025-05-09 00:37:33.628 [INFO][2855] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" iface="eth0" netns="/var/run/netns/cni-31a16fdc-2491-b0f8-d13a-a7869482df99" May 9 00:37:33.667773 containerd[1465]: 2025-05-09 00:37:33.628 [INFO][2855] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" May 9 00:37:33.667773 containerd[1465]: 2025-05-09 00:37:33.628 [INFO][2855] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" May 9 00:37:33.667773 containerd[1465]: 2025-05-09 00:37:33.654 [INFO][2864] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" HandleID="k8s-pod-network.bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" Workload="10.0.0.112-k8s-nginx--deployment--85f456d6dd--z75gw-eth0" May 9 00:37:33.667773 containerd[1465]: 2025-05-09 00:37:33.654 [INFO][2864] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:37:33.667773 containerd[1465]: 2025-05-09 00:37:33.654 [INFO][2864] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:37:33.667773 containerd[1465]: 2025-05-09 00:37:33.660 [WARNING][2864] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" HandleID="k8s-pod-network.bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" Workload="10.0.0.112-k8s-nginx--deployment--85f456d6dd--z75gw-eth0" May 9 00:37:33.667773 containerd[1465]: 2025-05-09 00:37:33.660 [INFO][2864] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" HandleID="k8s-pod-network.bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" Workload="10.0.0.112-k8s-nginx--deployment--85f456d6dd--z75gw-eth0" May 9 00:37:33.667773 containerd[1465]: 2025-05-09 00:37:33.662 [INFO][2864] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:37:33.667773 containerd[1465]: 2025-05-09 00:37:33.664 [INFO][2855] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" May 9 00:37:33.668289 containerd[1465]: time="2025-05-09T00:37:33.668041931Z" level=info msg="TearDown network for sandbox \"bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6\" successfully" May 9 00:37:33.668289 containerd[1465]: time="2025-05-09T00:37:33.668076318Z" level=info msg="StopPodSandbox for \"bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6\" returns successfully" May 9 00:37:33.668939 containerd[1465]: time="2025-05-09T00:37:33.668910515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-z75gw,Uid:3a952d59-b485-4031-b443-6ceead4593f1,Namespace:default,Attempt:1,}" May 9 00:37:33.670363 systemd[1]: run-netns-cni\x2d31a16fdc\x2d2491\x2db0f8\x2dd13a\x2da7869482df99.mount: Deactivated successfully. May 9 00:37:34.317294 kubelet[1775]: E0509 00:37:34.317222 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:35.213734 systemd-networkd[1399]: cali1f4683dcc75: Link UP May 9 00:37:35.214769 systemd-networkd[1399]: cali1f4683dcc75: Gained carrier May 9 00:37:35.318347 kubelet[1775]: E0509 00:37:35.318284 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:35.337234 containerd[1465]: 2025-05-09 00:37:34.872 [INFO][2872] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.112-k8s-nginx--deployment--85f456d6dd--z75gw-eth0 nginx-deployment-85f456d6dd- default 3a952d59-b485-4031-b443-6ceead4593f1 1053 0 2025-05-09 00:37:19 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.112 nginx-deployment-85f456d6dd-z75gw eth0 default [] [] [kns.default ksa.default.default] cali1f4683dcc75 [] []}} ContainerID="c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77" Namespace="default" Pod="nginx-deployment-85f456d6dd-z75gw" WorkloadEndpoint="10.0.0.112-k8s-nginx--deployment--85f456d6dd--z75gw-" May 9 00:37:35.337234 containerd[1465]: 2025-05-09 00:37:34.872 [INFO][2872] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77" Namespace="default" Pod="nginx-deployment-85f456d6dd-z75gw" WorkloadEndpoint="10.0.0.112-k8s-nginx--deployment--85f456d6dd--z75gw-eth0" May 9 00:37:35.337234 containerd[1465]: 2025-05-09 00:37:34.906 [INFO][2887] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77" HandleID="k8s-pod-network.c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77" Workload="10.0.0.112-k8s-nginx--deployment--85f456d6dd--z75gw-eth0" May 9 00:37:35.337234 containerd[1465]: 2025-05-09 00:37:34.917 [INFO][2887] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77" HandleID="k8s-pod-network.c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77" Workload="10.0.0.112-k8s-nginx--deployment--85f456d6dd--z75gw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e9c40), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.112", "pod":"nginx-deployment-85f456d6dd-z75gw", "timestamp":"2025-05-09 00:37:34.906285062 +0000 UTC"}, Hostname:"10.0.0.112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 00:37:35.337234 containerd[1465]: 2025-05-09 00:37:34.918 [INFO][2887] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:37:35.337234 containerd[1465]: 2025-05-09 00:37:34.918 [INFO][2887] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:37:35.337234 containerd[1465]: 2025-05-09 00:37:34.918 [INFO][2887] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.112' May 9 00:37:35.337234 containerd[1465]: 2025-05-09 00:37:34.920 [INFO][2887] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77" host="10.0.0.112" May 9 00:37:35.337234 containerd[1465]: 2025-05-09 00:37:34.926 [INFO][2887] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.112" May 9 00:37:35.337234 containerd[1465]: 2025-05-09 00:37:34.932 [INFO][2887] ipam/ipam.go 489: Trying affinity for 192.168.54.64/26 host="10.0.0.112" May 9 00:37:35.337234 containerd[1465]: 2025-05-09 00:37:34.934 [INFO][2887] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.64/26 host="10.0.0.112" May 9 00:37:35.337234 containerd[1465]: 2025-05-09 00:37:34.936 [INFO][2887] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.64/26 host="10.0.0.112" May 9 00:37:35.337234 containerd[1465]: 2025-05-09 00:37:34.937 [INFO][2887] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.64/26 handle="k8s-pod-network.c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77" host="10.0.0.112" May 9 00:37:35.337234 containerd[1465]: 2025-05-09 00:37:34.938 [INFO][2887] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77 May 9 00:37:35.337234 containerd[1465]: 2025-05-09 00:37:35.126 [INFO][2887] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.64/26 handle="k8s-pod-network.c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77" host="10.0.0.112" May 9 00:37:35.337234 containerd[1465]: 2025-05-09 00:37:35.207 [INFO][2887] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.66/26] block=192.168.54.64/26 handle="k8s-pod-network.c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77" host="10.0.0.112" May 9 00:37:35.337234 containerd[1465]: 2025-05-09 00:37:35.207 [INFO][2887] ipam/ipam.go 847: 
Auto-assigned 1 out of 1 IPv4s: [192.168.54.66/26] handle="k8s-pod-network.c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77" host="10.0.0.112" May 9 00:37:35.337234 containerd[1465]: 2025-05-09 00:37:35.207 [INFO][2887] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:37:35.337234 containerd[1465]: 2025-05-09 00:37:35.207 [INFO][2887] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.66/26] IPv6=[] ContainerID="c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77" HandleID="k8s-pod-network.c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77" Workload="10.0.0.112-k8s-nginx--deployment--85f456d6dd--z75gw-eth0" May 9 00:37:35.338458 containerd[1465]: 2025-05-09 00:37:35.210 [INFO][2872] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77" Namespace="default" Pod="nginx-deployment-85f456d6dd-z75gw" WorkloadEndpoint="10.0.0.112-k8s-nginx--deployment--85f456d6dd--z75gw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.112-k8s-nginx--deployment--85f456d6dd--z75gw-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"3a952d59-b485-4031-b443-6ceead4593f1", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 37, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.112", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-z75gw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali1f4683dcc75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:37:35.338458 containerd[1465]: 2025-05-09 00:37:35.211 [INFO][2872] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.66/32] ContainerID="c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77" Namespace="default" Pod="nginx-deployment-85f456d6dd-z75gw" WorkloadEndpoint="10.0.0.112-k8s-nginx--deployment--85f456d6dd--z75gw-eth0" May 9 00:37:35.338458 containerd[1465]: 2025-05-09 00:37:35.211 [INFO][2872] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f4683dcc75 ContainerID="c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77" Namespace="default" Pod="nginx-deployment-85f456d6dd-z75gw" WorkloadEndpoint="10.0.0.112-k8s-nginx--deployment--85f456d6dd--z75gw-eth0" May 9 00:37:35.338458 containerd[1465]: 2025-05-09 00:37:35.214 [INFO][2872] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77" Namespace="default" Pod="nginx-deployment-85f456d6dd-z75gw" WorkloadEndpoint="10.0.0.112-k8s-nginx--deployment--85f456d6dd--z75gw-eth0" May 9 00:37:35.338458 containerd[1465]: 2025-05-09 00:37:35.214 [INFO][2872] cni-plugin/k8s.go 414: Added Mac, interface 
name, and active container ID to endpoint ContainerID="c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77" Namespace="default" Pod="nginx-deployment-85f456d6dd-z75gw" WorkloadEndpoint="10.0.0.112-k8s-nginx--deployment--85f456d6dd--z75gw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.112-k8s-nginx--deployment--85f456d6dd--z75gw-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"3a952d59-b485-4031-b443-6ceead4593f1", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 37, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.112", ContainerID:"c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77", Pod:"nginx-deployment-85f456d6dd-z75gw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali1f4683dcc75", MAC:"aa:05:b9:8c:e9:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:37:35.338458 containerd[1465]: 2025-05-09 00:37:35.333 [INFO][2872] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77" Namespace="default" Pod="nginx-deployment-85f456d6dd-z75gw" WorkloadEndpoint="10.0.0.112-k8s-nginx--deployment--85f456d6dd--z75gw-eth0" May 9 00:37:35.673378 containerd[1465]: time="2025-05-09T00:37:35.672491142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:37:35.673378 containerd[1465]: time="2025-05-09T00:37:35.672622638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:37:35.673778 containerd[1465]: time="2025-05-09T00:37:35.673520360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:37:35.673778 containerd[1465]: time="2025-05-09T00:37:35.673691332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:37:35.704823 systemd[1]: Started cri-containerd-c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77.scope - libcontainer container c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77. 
May 9 00:37:35.722866 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:37:35.752644 containerd[1465]: time="2025-05-09T00:37:35.752587181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-z75gw,Uid:3a952d59-b485-4031-b443-6ceead4593f1,Namespace:default,Attempt:1,} returns sandbox id \"c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77\"" May 9 00:37:36.318880 kubelet[1775]: E0509 00:37:36.318780 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:36.882790 containerd[1465]: time="2025-05-09T00:37:36.882720894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:36.883574 containerd[1465]: time="2025-05-09T00:37:36.883461819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 9 00:37:36.884665 containerd[1465]: time="2025-05-09T00:37:36.884626324Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:36.886908 containerd[1465]: time="2025-05-09T00:37:36.886869981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:36.887880 containerd[1465]: time="2025-05-09T00:37:36.887824008Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 3.722081908s" May 9 00:37:36.887919 containerd[1465]: time="2025-05-09T00:37:36.887881590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 9 00:37:36.889206 containerd[1465]: time="2025-05-09T00:37:36.889169935Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 9 00:37:36.890651 containerd[1465]: time="2025-05-09T00:37:36.890600376Z" level=info msg="CreateContainer within sandbox \"07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 9 00:37:36.907817 containerd[1465]: time="2025-05-09T00:37:36.907765104Z" level=info msg="CreateContainer within sandbox \"07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2b57ebc0da555f4edbf711a4a1404bf25170b98a87654d7c9054ca85be232388\"" May 9 00:37:36.908478 containerd[1465]: time="2025-05-09T00:37:36.908431284Z" level=info msg="StartContainer for \"2b57ebc0da555f4edbf711a4a1404bf25170b98a87654d7c9054ca85be232388\"" May 9 00:37:36.948793 systemd[1]: Started cri-containerd-2b57ebc0da555f4edbf711a4a1404bf25170b98a87654d7c9054ca85be232388.scope - libcontainer container 
2b57ebc0da555f4edbf711a4a1404bf25170b98a87654d7c9054ca85be232388. May 9 00:37:36.981848 containerd[1465]: time="2025-05-09T00:37:36.981703408Z" level=info msg="StartContainer for \"2b57ebc0da555f4edbf711a4a1404bf25170b98a87654d7c9054ca85be232388\" returns successfully" May 9 00:37:37.039762 systemd-networkd[1399]: cali1f4683dcc75: Gained IPv6LL May 9 00:37:37.320010 kubelet[1775]: E0509 00:37:37.319954 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:37.640397 kubelet[1775]: I0509 00:37:37.640257 1775 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 9 00:37:37.640397 kubelet[1775]: I0509 00:37:37.640310 1775 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 9 00:37:37.887180 kubelet[1775]: I0509 00:37:37.887121 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-v8dpj" podStartSLOduration=41.536490705 podStartE2EDuration="46.887106514s" podCreationTimestamp="2025-05-09 00:36:51 +0000 UTC" firstStartedPulling="2025-05-09 00:37:31.538364679 +0000 UTC m=+40.821296118" lastFinishedPulling="2025-05-09 00:37:36.888980488 +0000 UTC m=+46.171911927" observedRunningTime="2025-05-09 00:37:37.886639892 +0000 UTC m=+47.169571331" watchObservedRunningTime="2025-05-09 00:37:37.887106514 +0000 UTC m=+47.170037953" May 9 00:37:38.320970 kubelet[1775]: E0509 00:37:38.320753 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:39.321672 kubelet[1775]: E0509 00:37:39.321598 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:40.322331 kubelet[1775]: E0509 00:37:40.322263 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:41.323444 kubelet[1775]: E0509 00:37:41.323391 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:41.414370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2405467651.mount: Deactivated successfully. 
May 9 00:37:42.324671 kubelet[1775]: E0509 00:37:42.324546 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:43.325199 kubelet[1775]: E0509 00:37:43.325134 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:43.965624 containerd[1465]: time="2025-05-09T00:37:43.965555031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:43.966422 containerd[1465]: time="2025-05-09T00:37:43.966377455Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73306220" May 9 00:37:43.967549 containerd[1465]: time="2025-05-09T00:37:43.967490226Z" level=info msg="ImageCreate event name:\"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:43.970311 containerd[1465]: time="2025-05-09T00:37:43.970279176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:43.971345 containerd[1465]: time="2025-05-09T00:37:43.971308307Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"73306098\" in 7.082096s" May 9 00:37:43.971397 containerd[1465]: time="2025-05-09T00:37:43.971348734Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 9 00:37:43.973327 containerd[1465]: time="2025-05-09T00:37:43.973295522Z" level=info msg="CreateContainer within sandbox \"c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 9 00:37:44.124157 containerd[1465]: time="2025-05-09T00:37:44.124080636Z" level=info msg="CreateContainer within sandbox \"c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"4d06666f1c39ec3f63c6082187cb04be6944309ced432b316758e7c94e66e281\"" May 9 00:37:44.124858 containerd[1465]: time="2025-05-09T00:37:44.124820361Z" level=info msg="StartContainer for \"4d06666f1c39ec3f63c6082187cb04be6944309ced432b316758e7c94e66e281\"" May 9 00:37:44.200848 systemd[1]: run-containerd-runc-k8s.io-4d06666f1c39ec3f63c6082187cb04be6944309ced432b316758e7c94e66e281-runc.aM4kVO.mount: Deactivated successfully. May 9 00:37:44.214733 systemd[1]: Started cri-containerd-4d06666f1c39ec3f63c6082187cb04be6944309ced432b316758e7c94e66e281.scope - libcontainer container 4d06666f1c39ec3f63c6082187cb04be6944309ced432b316758e7c94e66e281. 
May 9 00:37:44.325695 kubelet[1775]: E0509 00:37:44.325613 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:44.708660 containerd[1465]: time="2025-05-09T00:37:44.708424589Z" level=info msg="StartContainer for \"4d06666f1c39ec3f63c6082187cb04be6944309ced432b316758e7c94e66e281\" returns successfully" May 9 00:37:44.933000 kubelet[1775]: I0509 00:37:44.932912 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-z75gw" podStartSLOduration=17.71475999 podStartE2EDuration="25.932888141s" podCreationTimestamp="2025-05-09 00:37:19 +0000 UTC" firstStartedPulling="2025-05-09 00:37:35.754063667 +0000 UTC m=+45.036995116" lastFinishedPulling="2025-05-09 00:37:43.972191828 +0000 UTC m=+53.255123267" observedRunningTime="2025-05-09 00:37:44.932738745 +0000 UTC m=+54.215670194" watchObservedRunningTime="2025-05-09 00:37:44.932888141 +0000 UTC m=+54.215819580" May 9 00:37:44.954696 kubelet[1775]: E0509 00:37:44.954654 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:45.326928 kubelet[1775]: E0509 00:37:45.326857 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:46.327786 kubelet[1775]: E0509 00:37:46.327691 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:47.328708 kubelet[1775]: E0509 00:37:47.328630 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:48.329833 kubelet[1775]: E0509 00:37:48.329731 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:49.330155 kubelet[1775]: E0509 00:37:49.330095 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:49.332482 kubelet[1775]: I0509 00:37:49.332425 1775 topology_manager.go:215] "Topology Admit Handler" podUID="64b015b3-e233-43f2-91f4-20c2085cd4af" podNamespace="default" podName="nfs-server-provisioner-0" May 9 00:37:49.339156 systemd[1]: Created slice kubepods-besteffort-pod64b015b3_e233_43f2_91f4_20c2085cd4af.slice - libcontainer container kubepods-besteffort-pod64b015b3_e233_43f2_91f4_20c2085cd4af.slice. 
May 9 00:37:49.407133 kubelet[1775]: I0509 00:37:49.407059 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/64b015b3-e233-43f2-91f4-20c2085cd4af-data\") pod \"nfs-server-provisioner-0\" (UID: \"64b015b3-e233-43f2-91f4-20c2085cd4af\") " pod="default/nfs-server-provisioner-0" May 9 00:37:49.407133 kubelet[1775]: I0509 00:37:49.407115 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89b66\" (UniqueName: \"kubernetes.io/projected/64b015b3-e233-43f2-91f4-20c2085cd4af-kube-api-access-89b66\") pod \"nfs-server-provisioner-0\" (UID: \"64b015b3-e233-43f2-91f4-20c2085cd4af\") " pod="default/nfs-server-provisioner-0" May 9 00:37:49.643115 containerd[1465]: time="2025-05-09T00:37:49.642958977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:64b015b3-e233-43f2-91f4-20c2085cd4af,Namespace:default,Attempt:0,}" May 9 00:37:49.933480 systemd-networkd[1399]: cali60e51b789ff: Link UP May 9 00:37:49.935796 systemd-networkd[1399]: cali60e51b789ff: Gained carrier May 9 00:37:49.968405 containerd[1465]: 2025-05-09 00:37:49.820 [INFO][3121] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.112-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 64b015b3-e233-43f2-91f4-20c2085cd4af 1149 0 2025-05-09 00:37:49 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.112 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.112-k8s-nfs--server--provisioner--0-" May 9 00:37:49.968405 containerd[1465]: 2025-05-09 00:37:49.820 [INFO][3121] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.112-k8s-nfs--server--provisioner--0-eth0" May 9 00:37:49.968405 containerd[1465]: 2025-05-09 00:37:49.848 [INFO][3136] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412" HandleID="k8s-pod-network.8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412" Workload="10.0.0.112-k8s-nfs--server--provisioner--0-eth0" May 9 00:37:49.968405 containerd[1465]: 2025-05-09 00:37:49.858 [INFO][3136] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412" HandleID="k8s-pod-network.8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412" 
Workload="10.0.0.112-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027c9b0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.112", "pod":"nfs-server-provisioner-0", "timestamp":"2025-05-09 00:37:49.848266404 +0000 UTC"}, Hostname:"10.0.0.112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 00:37:49.968405 containerd[1465]: 2025-05-09 00:37:49.858 [INFO][3136] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:37:49.968405 containerd[1465]: 2025-05-09 00:37:49.858 [INFO][3136] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:37:49.968405 containerd[1465]: 2025-05-09 00:37:49.858 [INFO][3136] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.112' May 9 00:37:49.968405 containerd[1465]: 2025-05-09 00:37:49.860 [INFO][3136] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412" host="10.0.0.112" May 9 00:37:49.968405 containerd[1465]: 2025-05-09 00:37:49.865 [INFO][3136] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.112" May 9 00:37:49.968405 containerd[1465]: 2025-05-09 00:37:49.869 [INFO][3136] ipam/ipam.go 489: Trying affinity for 192.168.54.64/26 host="10.0.0.112" May 9 00:37:49.968405 containerd[1465]: 2025-05-09 00:37:49.871 [INFO][3136] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.64/26 host="10.0.0.112" May 9 00:37:49.968405 containerd[1465]: 2025-05-09 00:37:49.873 [INFO][3136] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.64/26 host="10.0.0.112" May 9 00:37:49.968405 containerd[1465]: 2025-05-09 00:37:49.873 [INFO][3136] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.64/26 handle="k8s-pod-network.8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412" host="10.0.0.112" May 9 00:37:49.968405 containerd[1465]: 2025-05-09 00:37:49.875 [INFO][3136] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412 May 9 00:37:49.968405 containerd[1465]: 2025-05-09 00:37:49.921 [INFO][3136] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.64/26 handle="k8s-pod-network.8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412" host="10.0.0.112" May 9 00:37:49.968405 containerd[1465]: 2025-05-09 00:37:49.927 [INFO][3136] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.67/26] block=192.168.54.64/26 handle="k8s-pod-network.8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412" host="10.0.0.112" May 9 00:37:49.968405 containerd[1465]: 2025-05-09 00:37:49.927 [INFO][3136] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.67/26] handle="k8s-pod-network.8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412" host="10.0.0.112" May 9 00:37:49.968405 containerd[1465]: 2025-05-09 00:37:49.927 [INFO][3136] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 9 00:37:49.968405 containerd[1465]: 2025-05-09 00:37:49.927 [INFO][3136] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.67/26] IPv6=[] ContainerID="8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412" HandleID="k8s-pod-network.8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412" Workload="10.0.0.112-k8s-nfs--server--provisioner--0-eth0" May 9 00:37:49.969063 containerd[1465]: 2025-05-09 00:37:49.930 [INFO][3121] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.112-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.112-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"64b015b3-e233-43f2-91f4-20c2085cd4af", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.112", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.54.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:37:49.969063 containerd[1465]: 2025-05-09 00:37:49.930 [INFO][3121] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.67/32] ContainerID="8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.112-k8s-nfs--server--provisioner--0-eth0" May 9 00:37:49.969063 containerd[1465]: 2025-05-09 00:37:49.931 [INFO][3121] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.112-k8s-nfs--server--provisioner--0-eth0" May 9 00:37:49.969063 containerd[1465]: 2025-05-09 00:37:49.933 [INFO][3121] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.112-k8s-nfs--server--provisioner--0-eth0" May 9 00:37:49.969220 containerd[1465]: 2025-05-09 00:37:49.934 [INFO][3121] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.112-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.112-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"64b015b3-e233-43f2-91f4-20c2085cd4af", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.112", ContainerID:"8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.54.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"92:41:f0:60:21:b2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:37:49.969220 containerd[1465]: 2025-05-09 00:37:49.965 [INFO][3121] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.112-k8s-nfs--server--provisioner--0-eth0" May 9 00:37:50.022982 containerd[1465]: time="2025-05-09T00:37:50.022839293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:37:50.022982 containerd[1465]: time="2025-05-09T00:37:50.022913106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:37:50.022982 containerd[1465]: time="2025-05-09T00:37:50.022928806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:37:50.023164 containerd[1465]: time="2025-05-09T00:37:50.023025894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:37:50.049746 systemd[1]: Started cri-containerd-8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412.scope - libcontainer container 8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412. 
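The WorkloadEndpoint dumps above list the nfs-server-provisioner container ports in hexadecimal (Port:0x801, 0x8023, 0x4e50, 0x36b, 0x6f, 0x296), each value appearing once for TCP and once for UDP. A minimal Go sketch, not part of the log, that decodes them to the familiar decimal NFS service ports:

```go
package main

import "fmt"

func main() {
	// Hex port values copied from the WorkloadEndpoint dump above,
	// printed in decimal for readability.
	ports := []struct {
		name string
		hex  uint16
	}{
		{"nfs", 0x801},       // 2049
		{"nlockmgr", 0x8023}, // 32803
		{"mountd", 0x4e50},   // 20048
		{"rquotad", 0x36b},   // 875
		{"rpcbind", 0x6f},    // 111
		{"statd", 0x296},     // 662
	}
	for _, p := range ports {
		fmt.Printf("%-8s 0x%x = %d\n", p.name, p.hex, p.hex)
	}
}
```

Decoded, the list reads nfs 2049, nlockmgr 32803, mountd 20048, rquotad 875, rpcbind 111 and statd 662, with identical numbers for the TCP and UDP entries.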
May 9 00:37:50.061862 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:37:50.086810 containerd[1465]: time="2025-05-09T00:37:50.086764834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:64b015b3-e233-43f2-91f4-20c2085cd4af,Namespace:default,Attempt:0,} returns sandbox id \"8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412\"" May 9 00:37:50.089039 containerd[1465]: time="2025-05-09T00:37:50.088992574Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 9 00:37:50.331352 kubelet[1775]: E0509 00:37:50.331255 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:51.011799 systemd-networkd[1399]: cali60e51b789ff: Gained IPv6LL May 9 00:37:51.078392 kubelet[1775]: E0509 00:37:51.078326 1775 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:51.093225 containerd[1465]: time="2025-05-09T00:37:51.093190517Z" level=info msg="StopPodSandbox for \"a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54\"" May 9 00:37:51.227441 containerd[1465]: 2025-05-09 00:37:51.195 [WARNING][3217] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.112-k8s-csi--node--driver--v8dpj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"054e0ea9-c254-4f90-a1c5-22ee92a19ac0", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 36, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.112", ContainerID:"07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d", Pod:"csi-node-driver-v8dpj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibbfc9dd4f65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:37:51.227441 containerd[1465]: 2025-05-09 00:37:51.195 [INFO][3217] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" May 9 00:37:51.227441 containerd[1465]: 2025-05-09 00:37:51.195 [INFO][3217] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" iface="eth0" netns="" May 9 00:37:51.227441 containerd[1465]: 2025-05-09 00:37:51.195 [INFO][3217] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" May 9 00:37:51.227441 containerd[1465]: 2025-05-09 00:37:51.195 [INFO][3217] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" May 9 00:37:51.227441 containerd[1465]: 2025-05-09 00:37:51.215 [INFO][3226] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" HandleID="k8s-pod-network.a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" Workload="10.0.0.112-k8s-csi--node--driver--v8dpj-eth0" May 9 00:37:51.227441 containerd[1465]: 2025-05-09 00:37:51.215 [INFO][3226] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:37:51.227441 containerd[1465]: 2025-05-09 00:37:51.216 [INFO][3226] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:37:51.227441 containerd[1465]: 2025-05-09 00:37:51.221 [WARNING][3226] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" HandleID="k8s-pod-network.a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" Workload="10.0.0.112-k8s-csi--node--driver--v8dpj-eth0" May 9 00:37:51.227441 containerd[1465]: 2025-05-09 00:37:51.221 [INFO][3226] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" HandleID="k8s-pod-network.a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" Workload="10.0.0.112-k8s-csi--node--driver--v8dpj-eth0" May 9 00:37:51.227441 containerd[1465]: 2025-05-09 00:37:51.222 [INFO][3226] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:37:51.227441 containerd[1465]: 2025-05-09 00:37:51.225 [INFO][3217] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" May 9 00:37:51.228056 containerd[1465]: time="2025-05-09T00:37:51.227469180Z" level=info msg="TearDown network for sandbox \"a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54\" successfully" May 9 00:37:51.228056 containerd[1465]: time="2025-05-09T00:37:51.227504488Z" level=info msg="StopPodSandbox for \"a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54\" returns successfully" May 9 00:37:51.228283 containerd[1465]: time="2025-05-09T00:37:51.228239491Z" level=info msg="RemovePodSandbox for \"a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54\"" May 9 00:37:51.228343 containerd[1465]: time="2025-05-09T00:37:51.228283146Z" level=info msg="Forcibly stopping sandbox \"a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54\"" May 9 00:37:51.308200 containerd[1465]: 2025-05-09 00:37:51.276 [WARNING][3248] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.112-k8s-csi--node--driver--v8dpj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"054e0ea9-c254-4f90-a1c5-22ee92a19ac0", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 36, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.112", ContainerID:"07abb8d2d7eabf1138b8a2ff40774f6faaa068033a708f9da49f36935567506d", Pod:"csi-node-driver-v8dpj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibbfc9dd4f65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:37:51.308200 containerd[1465]: 2025-05-09 00:37:51.276 [INFO][3248] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" May 9 00:37:51.308200 containerd[1465]: 2025-05-09 00:37:51.276 [INFO][3248] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" iface="eth0" netns="" May 9 00:37:51.308200 containerd[1465]: 2025-05-09 00:37:51.276 [INFO][3248] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" May 9 00:37:51.308200 containerd[1465]: 2025-05-09 00:37:51.276 [INFO][3248] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" May 9 00:37:51.308200 containerd[1465]: 2025-05-09 00:37:51.297 [INFO][3258] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" HandleID="k8s-pod-network.a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" Workload="10.0.0.112-k8s-csi--node--driver--v8dpj-eth0" May 9 00:37:51.308200 containerd[1465]: 2025-05-09 00:37:51.297 [INFO][3258] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:37:51.308200 containerd[1465]: 2025-05-09 00:37:51.297 [INFO][3258] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:37:51.308200 containerd[1465]: 2025-05-09 00:37:51.302 [WARNING][3258] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" HandleID="k8s-pod-network.a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" Workload="10.0.0.112-k8s-csi--node--driver--v8dpj-eth0" May 9 00:37:51.308200 containerd[1465]: 2025-05-09 00:37:51.302 [INFO][3258] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" HandleID="k8s-pod-network.a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" Workload="10.0.0.112-k8s-csi--node--driver--v8dpj-eth0" May 9 00:37:51.308200 containerd[1465]: 2025-05-09 00:37:51.303 [INFO][3258] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:37:51.308200 containerd[1465]: 2025-05-09 00:37:51.306 [INFO][3248] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54" May 9 00:37:51.308200 containerd[1465]: time="2025-05-09T00:37:51.308170172Z" level=info msg="TearDown network for sandbox \"a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54\" successfully" May 9 00:37:51.332477 kubelet[1775]: E0509 00:37:51.332415 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:51.517636 containerd[1465]: time="2025-05-09T00:37:51.517584134Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 9 00:37:51.517794 containerd[1465]: time="2025-05-09T00:37:51.517655472Z" level=info msg="RemovePodSandbox \"a2a535a1f442dff08e7823ee9872c4d191cffa772467f598fd78792a8f0a1a54\" returns successfully" May 9 00:37:51.518428 containerd[1465]: time="2025-05-09T00:37:51.518367542Z" level=info msg="StopPodSandbox for \"bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6\"" May 9 00:37:51.592944 containerd[1465]: 2025-05-09 00:37:51.556 [WARNING][3285] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.112-k8s-nginx--deployment--85f456d6dd--z75gw-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"3a952d59-b485-4031-b443-6ceead4593f1", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 37, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.112", ContainerID:"c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77", Pod:"nginx-deployment-85f456d6dd-z75gw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali1f4683dcc75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:37:51.592944 containerd[1465]: 2025-05-09 00:37:51.557 [INFO][3285] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" May 9 00:37:51.592944 containerd[1465]: 2025-05-09 00:37:51.557 [INFO][3285] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" iface="eth0" netns="" May 9 00:37:51.592944 containerd[1465]: 2025-05-09 00:37:51.557 [INFO][3285] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" May 9 00:37:51.592944 containerd[1465]: 2025-05-09 00:37:51.557 [INFO][3285] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" May 9 00:37:51.592944 containerd[1465]: 2025-05-09 00:37:51.580 [INFO][3294] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" HandleID="k8s-pod-network.bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" Workload="10.0.0.112-k8s-nginx--deployment--85f456d6dd--z75gw-eth0" May 9 00:37:51.592944 containerd[1465]: 2025-05-09 00:37:51.580 [INFO][3294] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:37:51.592944 containerd[1465]: 2025-05-09 00:37:51.580 [INFO][3294] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:37:51.592944 containerd[1465]: 2025-05-09 00:37:51.585 [WARNING][3294] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" HandleID="k8s-pod-network.bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" Workload="10.0.0.112-k8s-nginx--deployment--85f456d6dd--z75gw-eth0" May 9 00:37:51.592944 containerd[1465]: 2025-05-09 00:37:51.585 [INFO][3294] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" HandleID="k8s-pod-network.bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" Workload="10.0.0.112-k8s-nginx--deployment--85f456d6dd--z75gw-eth0" May 9 00:37:51.592944 containerd[1465]: 2025-05-09 00:37:51.588 [INFO][3294] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:37:51.592944 containerd[1465]: 2025-05-09 00:37:51.590 [INFO][3285] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" May 9 00:37:51.592944 containerd[1465]: time="2025-05-09T00:37:51.592708913Z" level=info msg="TearDown network for sandbox \"bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6\" successfully" May 9 00:37:51.592944 containerd[1465]: time="2025-05-09T00:37:51.592748348Z" level=info msg="StopPodSandbox for \"bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6\" returns successfully" May 9 00:37:51.593454 containerd[1465]: time="2025-05-09T00:37:51.593175847Z" level=info msg="RemovePodSandbox for \"bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6\"" May 9 00:37:51.593454 containerd[1465]: time="2025-05-09T00:37:51.593210945Z" level=info msg="Forcibly stopping sandbox \"bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6\"" May 9 00:37:51.663941 containerd[1465]: 2025-05-09 00:37:51.629 [WARNING][3320] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.112-k8s-nginx--deployment--85f456d6dd--z75gw-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"3a952d59-b485-4031-b443-6ceead4593f1", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 37, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.112", ContainerID:"c0d1c4babcc7bc85445cba7cfc3968286a04b5bfcd7b80f46ea5bb928c1a8f77", Pod:"nginx-deployment-85f456d6dd-z75gw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali1f4683dcc75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:37:51.663941 containerd[1465]: 2025-05-09 00:37:51.629 [INFO][3320] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" May 9 00:37:51.663941 containerd[1465]: 2025-05-09 00:37:51.629 [INFO][3320] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" iface="eth0" netns="" May 9 00:37:51.663941 containerd[1465]: 2025-05-09 00:37:51.629 [INFO][3320] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" May 9 00:37:51.663941 containerd[1465]: 2025-05-09 00:37:51.629 [INFO][3320] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" May 9 00:37:51.663941 containerd[1465]: 2025-05-09 00:37:51.651 [INFO][3328] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" HandleID="k8s-pod-network.bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" Workload="10.0.0.112-k8s-nginx--deployment--85f456d6dd--z75gw-eth0" May 9 00:37:51.663941 containerd[1465]: 2025-05-09 00:37:51.652 [INFO][3328] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:37:51.663941 containerd[1465]: 2025-05-09 00:37:51.652 [INFO][3328] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:37:51.663941 containerd[1465]: 2025-05-09 00:37:51.657 [WARNING][3328] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" HandleID="k8s-pod-network.bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" Workload="10.0.0.112-k8s-nginx--deployment--85f456d6dd--z75gw-eth0" May 9 00:37:51.663941 containerd[1465]: 2025-05-09 00:37:51.657 [INFO][3328] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" HandleID="k8s-pod-network.bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" Workload="10.0.0.112-k8s-nginx--deployment--85f456d6dd--z75gw-eth0" May 9 00:37:51.663941 containerd[1465]: 2025-05-09 00:37:51.659 [INFO][3328] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:37:51.663941 containerd[1465]: 2025-05-09 00:37:51.661 [INFO][3320] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6" May 9 00:37:51.663941 containerd[1465]: time="2025-05-09T00:37:51.663911752Z" level=info msg="TearDown network for sandbox \"bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6\" successfully" May 9 00:37:51.773317 containerd[1465]: time="2025-05-09T00:37:51.773242287Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 9 00:37:51.773317 containerd[1465]: time="2025-05-09T00:37:51.773307553Z" level=info msg="RemovePodSandbox \"bf3c5605d6534c10589993acb7e3abdb56bf0ecf894bc896d63e17ab5cfbc8c6\" returns successfully" May 9 00:37:52.314559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1751265882.mount: Deactivated successfully. 
May 9 00:37:52.333510 kubelet[1775]: E0509 00:37:52.333413 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:53.333893 kubelet[1775]: E0509 00:37:53.333807 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:54.335004 kubelet[1775]: E0509 00:37:54.334940 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:54.594304 containerd[1465]: time="2025-05-09T00:37:54.594138629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:54.594938 containerd[1465]: time="2025-05-09T00:37:54.594851126Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" May 9 00:37:54.596068 containerd[1465]: time="2025-05-09T00:37:54.596029013Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:54.598702 containerd[1465]: time="2025-05-09T00:37:54.598665598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:54.599731 containerd[1465]: time="2025-05-09T00:37:54.599696140Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.510657928s" May 9 00:37:54.599781 containerd[1465]: time="2025-05-09T00:37:54.599727040Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" May 9 00:37:54.602204 containerd[1465]: time="2025-05-09T00:37:54.602175632Z" level=info msg="CreateContainer within sandbox \"8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 9 00:37:54.613782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount390460062.mount: Deactivated successfully. May 9 00:37:54.617082 containerd[1465]: time="2025-05-09T00:37:54.617041011Z" level=info msg="CreateContainer within sandbox \"8546e4ee6c7faa90b031fc25428b6a6d6e51328d81d8446cdaf15e0e8750f412\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"04e572cb0cfad04c45e9c297c221514cc92e78e2a0e3e890327b997b3fca884d\"" May 9 00:37:54.617494 containerd[1465]: time="2025-05-09T00:37:54.617471263Z" level=info msg="StartContainer for \"04e572cb0cfad04c45e9c297c221514cc92e78e2a0e3e890327b997b3fca884d\"" May 9 00:37:54.650683 systemd[1]: Started cri-containerd-04e572cb0cfad04c45e9c297c221514cc92e78e2a0e3e890327b997b3fca884d.scope - libcontainer container 04e572cb0cfad04c45e9c297c221514cc92e78e2a0e3e890327b997b3fca884d. 
May 9 00:37:54.680596 containerd[1465]: time="2025-05-09T00:37:54.678977851Z" level=info msg="StartContainer for \"04e572cb0cfad04c45e9c297c221514cc92e78e2a0e3e890327b997b3fca884d\" returns successfully" May 9 00:37:55.336114 kubelet[1775]: E0509 00:37:55.336000 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:56.336329 kubelet[1775]: E0509 00:37:56.336273 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:57.337415 kubelet[1775]: E0509 00:37:57.337344 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:58.338054 kubelet[1775]: E0509 00:37:58.337964 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:37:59.338561 kubelet[1775]: E0509 00:37:59.338449 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:38:00.339476 kubelet[1775]: E0509 00:38:00.339402 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:38:01.340647 kubelet[1775]: E0509 00:38:01.340557 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:38:02.341342 kubelet[1775]: E0509 00:38:02.341266 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:38:03.342365 kubelet[1775]: E0509 00:38:03.342288 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:38:04.343519 kubelet[1775]: E0509 00:38:04.343430 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:38:04.450113 kubelet[1775]: I0509 00:38:04.450017 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=10.937989567 podStartE2EDuration="15.449991364s" podCreationTimestamp="2025-05-09 00:37:49 +0000 UTC" firstStartedPulling="2025-05-09 00:37:50.088502404 +0000 UTC m=+59.371433843" lastFinishedPulling="2025-05-09 00:37:54.600504201 +0000 UTC m=+63.883435640" observedRunningTime="2025-05-09 00:37:54.932314704 +0000 UTC m=+64.215246143" watchObservedRunningTime="2025-05-09 00:38:04.449991364 +0000 UTC m=+73.732922803" May 9 00:38:04.450347 kubelet[1775]: I0509 00:38:04.450275 1775 topology_manager.go:215] "Topology Admit Handler" podUID="83e58c0d-df98-4872-829b-04ac8b9214d4" podNamespace="default" podName="test-pod-1" May 9 00:38:04.458957 systemd[1]: Created slice kubepods-besteffort-pod83e58c0d_df98_4872_829b_04ac8b9214d4.slice - libcontainer container kubepods-besteffort-pod83e58c0d_df98_4872_829b_04ac8b9214d4.slice. 
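In the pod_startup_latency_tracker entry above for nfs-server-provisioner-0, the reported podStartSLOduration is the end-to-end startup time with the image-pull window subtracted: 15.449991364s minus (00:37:54.600504201 - 00:37:50.088502404) = 10.937989567s, which matches the logged value exactly. A small Go check of that arithmetic using the timestamps copied from the entry (an illustrative check, not kubelet code):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry above
	// (default/nfs-server-provisioner-0); the "+0000 UTC" suffix is dropped
	// since every stamp is in the same zone.
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-05-09 00:37:49")              // podCreationTimestamp
	firstPull := parse("2025-05-09 00:37:50.088502404")  // firstStartedPulling
	lastPull := parse("2025-05-09 00:37:54.600504201")   // lastFinishedPulling
	observed := parse("2025-05-09 00:38:04.449991364")   // watchObservedRunningTime

	e2e := observed.Sub(created)    // podStartE2EDuration: 15.449991364s
	pull := lastPull.Sub(firstPull) // image-pull window:    4.512001797s
	slo := e2e - pull               // podStartSLOduration: 10.937989567s
	fmt.Println(e2e, pull, slo)
}
```

The same relation holds for test-pod-1 later in the log: 17.939916549s minus the 0.416719469s pull window gives the logged 17.52319708s.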
May 9 00:38:04.597208 kubelet[1775]: I0509 00:38:04.597029 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd2v7\" (UniqueName: \"kubernetes.io/projected/83e58c0d-df98-4872-829b-04ac8b9214d4-kube-api-access-sd2v7\") pod \"test-pod-1\" (UID: \"83e58c0d-df98-4872-829b-04ac8b9214d4\") " pod="default/test-pod-1" May 9 00:38:04.597208 kubelet[1775]: I0509 00:38:04.597081 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ff9ecdf7-6e85-40f3-a8b1-91640dc40f21\" (UniqueName: \"kubernetes.io/nfs/83e58c0d-df98-4872-829b-04ac8b9214d4-pvc-ff9ecdf7-6e85-40f3-a8b1-91640dc40f21\") pod \"test-pod-1\" (UID: \"83e58c0d-df98-4872-829b-04ac8b9214d4\") " pod="default/test-pod-1" May 9 00:38:04.724561 kernel: FS-Cache: Loaded May 9 00:38:04.796164 kernel: RPC: Registered named UNIX socket transport module. May 9 00:38:04.796300 kernel: RPC: Registered udp transport module. May 9 00:38:04.796329 kernel: RPC: Registered tcp transport module. May 9 00:38:04.796355 kernel: RPC: Registered tcp-with-tls transport module. May 9 00:38:04.796862 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. May 9 00:38:05.086737 kernel: NFS: Registering the id_resolver key type May 9 00:38:05.086901 kernel: Key type id_resolver registered May 9 00:38:05.086937 kernel: Key type id_legacy registered May 9 00:38:05.115248 nfsidmap[3462]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 9 00:38:05.120911 nfsidmap[3465]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 9 00:38:05.344811 kubelet[1775]: E0509 00:38:05.344638 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:38:05.362597 containerd[1465]: time="2025-05-09T00:38:05.362509979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:83e58c0d-df98-4872-829b-04ac8b9214d4,Namespace:default,Attempt:0,}" May 9 00:38:05.477683 systemd-networkd[1399]: cali5ec59c6bf6e: Link UP May 9 00:38:05.477950 systemd-networkd[1399]: cali5ec59c6bf6e: Gained carrier May 9 00:38:05.485938 containerd[1465]: 2025-05-09 00:38:05.410 [INFO][3469] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.112-k8s-test--pod--1-eth0 default 83e58c0d-df98-4872-829b-04ac8b9214d4 1227 0 2025-05-09 00:37:49 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.112 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.112-k8s-test--pod--1-" May 9 00:38:05.485938 containerd[1465]: 2025-05-09 00:38:05.410 [INFO][3469] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.112-k8s-test--pod--1-eth0" May 9 00:38:05.485938 containerd[1465]: 2025-05-09 00:38:05.438 [INFO][3482] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604" 
HandleID="k8s-pod-network.bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604" Workload="10.0.0.112-k8s-test--pod--1-eth0" May 9 00:38:05.485938 containerd[1465]: 2025-05-09 00:38:05.446 [INFO][3482] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604" HandleID="k8s-pod-network.bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604" Workload="10.0.0.112-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004379e0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.112", "pod":"test-pod-1", "timestamp":"2025-05-09 00:38:05.438459905 +0000 UTC"}, Hostname:"10.0.0.112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 00:38:05.485938 containerd[1465]: 2025-05-09 00:38:05.446 [INFO][3482] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:38:05.485938 containerd[1465]: 2025-05-09 00:38:05.446 [INFO][3482] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:38:05.485938 containerd[1465]: 2025-05-09 00:38:05.446 [INFO][3482] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.112' May 9 00:38:05.485938 containerd[1465]: 2025-05-09 00:38:05.447 [INFO][3482] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604" host="10.0.0.112" May 9 00:38:05.485938 containerd[1465]: 2025-05-09 00:38:05.451 [INFO][3482] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.112" May 9 00:38:05.485938 containerd[1465]: 2025-05-09 00:38:05.455 [INFO][3482] ipam/ipam.go 489: Trying affinity for 192.168.54.64/26 host="10.0.0.112" May 9 00:38:05.485938 containerd[1465]: 2025-05-09 00:38:05.456 [INFO][3482] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.64/26 host="10.0.0.112" May 9 00:38:05.485938 containerd[1465]: 2025-05-09 00:38:05.458 [INFO][3482] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.64/26 host="10.0.0.112" May 9 00:38:05.485938 containerd[1465]: 2025-05-09 00:38:05.458 [INFO][3482] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.64/26 handle="k8s-pod-network.bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604" host="10.0.0.112" May 9 00:38:05.485938 containerd[1465]: 2025-05-09 00:38:05.460 [INFO][3482] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604 May 9 00:38:05.485938 containerd[1465]: 2025-05-09 00:38:05.464 [INFO][3482] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.64/26 handle="k8s-pod-network.bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604" host="10.0.0.112" May 9 00:38:05.485938 containerd[1465]: 2025-05-09 00:38:05.469 [INFO][3482] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.68/26] block=192.168.54.64/26 handle="k8s-pod-network.bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604" host="10.0.0.112" May 9 00:38:05.485938 containerd[1465]: 2025-05-09 00:38:05.469 [INFO][3482] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.68/26] handle="k8s-pod-network.bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604" host="10.0.0.112" May 9 00:38:05.485938 containerd[1465]: 2025-05-09 00:38:05.469 
[INFO][3482] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:38:05.485938 containerd[1465]: 2025-05-09 00:38:05.469 [INFO][3482] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.68/26] IPv6=[] ContainerID="bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604" HandleID="k8s-pod-network.bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604" Workload="10.0.0.112-k8s-test--pod--1-eth0" May 9 00:38:05.485938 containerd[1465]: 2025-05-09 00:38:05.472 [INFO][3469] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.112-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.112-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"83e58c0d-df98-4872-829b-04ac8b9214d4", ResourceVersion:"1227", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.112", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:38:05.487055 containerd[1465]: 2025-05-09 00:38:05.472 [INFO][3469] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.68/32] ContainerID="bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.112-k8s-test--pod--1-eth0" May 9 00:38:05.487055 containerd[1465]: 2025-05-09 00:38:05.472 [INFO][3469] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.112-k8s-test--pod--1-eth0" May 9 00:38:05.487055 containerd[1465]: 2025-05-09 00:38:05.476 [INFO][3469] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.112-k8s-test--pod--1-eth0" May 9 00:38:05.487055 containerd[1465]: 2025-05-09 00:38:05.476 [INFO][3469] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.112-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.112-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"83e58c0d-df98-4872-829b-04ac8b9214d4", ResourceVersion:"1227", Generation:0, CreationTimestamp:time.Date(2025, 
time.May, 9, 0, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.112", ContainerID:"bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"2a:f2:6a:2c:46:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:38:05.487055 containerd[1465]: 2025-05-09 00:38:05.482 [INFO][3469] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.112-k8s-test--pod--1-eth0" May 9 00:38:05.509443 containerd[1465]: time="2025-05-09T00:38:05.509226870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:38:05.509443 containerd[1465]: time="2025-05-09T00:38:05.509295231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:38:05.509443 containerd[1465]: time="2025-05-09T00:38:05.509308977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:38:05.509443 containerd[1465]: time="2025-05-09T00:38:05.509394682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:38:05.535694 systemd[1]: Started cri-containerd-bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604.scope - libcontainer container bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604. 
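The IPAM trace above hands test-pod-1 the address 192.168.54.68/26 out of the node's affine block 192.168.54.64/26, the same block that already holds .65 (csi-node-driver-v8dpj), .66 (nginx-deployment-85f456d6dd-z75gw) and .67 (nfs-server-provisioner-0) earlier in the log. A tiny Go sketch with net/netip, illustrative only, confirming all four assignments sit inside that /26:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and addresses taken from the IPAM entries in the log above.
	block := netip.MustParsePrefix("192.168.54.64/26")
	addrs := map[string]string{
		"csi-node-driver-v8dpj":             "192.168.54.65",
		"nginx-deployment-85f456d6dd-z75gw": "192.168.54.66",
		"nfs-server-provisioner-0":          "192.168.54.67",
		"test-pod-1":                        "192.168.54.68",
	}
	for pod, a := range addrs {
		fmt.Printf("%s %s in %s: %v\n", pod, a, block, block.Contains(netip.MustParseAddr(a)))
	}
}
```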
May 9 00:38:05.549804 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:38:05.576204 containerd[1465]: time="2025-05-09T00:38:05.576134371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:83e58c0d-df98-4872-829b-04ac8b9214d4,Namespace:default,Attempt:0,} returns sandbox id \"bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604\"" May 9 00:38:05.578724 containerd[1465]: time="2025-05-09T00:38:05.578684924Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 9 00:38:05.990970 containerd[1465]: time="2025-05-09T00:38:05.990918187Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:38:05.991670 containerd[1465]: time="2025-05-09T00:38:05.991617338Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" May 9 00:38:05.994094 containerd[1465]: time="2025-05-09T00:38:05.994056526Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"73306098\" in 415.20855ms" May 9 00:38:05.994094 containerd[1465]: time="2025-05-09T00:38:05.994085702Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 9 00:38:05.996079 containerd[1465]: time="2025-05-09T00:38:05.996047175Z" level=info msg="CreateContainer within sandbox \"bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 9 00:38:06.017714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3869907203.mount: Deactivated successfully. May 9 00:38:06.021290 containerd[1465]: time="2025-05-09T00:38:06.021230145Z" level=info msg="CreateContainer within sandbox \"bc197994bb27908e1f86c0a95b3a3f277efa48e52d84c905f2bdf142d6577604\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"c9410d38c5c2404e520f165b6efdbde291c1d83706c921498047bee3906f5b80\"" May 9 00:38:06.021864 containerd[1465]: time="2025-05-09T00:38:06.021833692Z" level=info msg="StartContainer for \"c9410d38c5c2404e520f165b6efdbde291c1d83706c921498047bee3906f5b80\"" May 9 00:38:06.052851 systemd[1]: Started cri-containerd-c9410d38c5c2404e520f165b6efdbde291c1d83706c921498047bee3906f5b80.scope - libcontainer container c9410d38c5c2404e520f165b6efdbde291c1d83706c921498047bee3906f5b80. 
May 9 00:38:06.081721 containerd[1465]: time="2025-05-09T00:38:06.081663845Z" level=info msg="StartContainer for \"c9410d38c5c2404e520f165b6efdbde291c1d83706c921498047bee3906f5b80\" returns successfully" May 9 00:38:06.345974 kubelet[1775]: E0509 00:38:06.345792 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:38:06.940017 kubelet[1775]: I0509 00:38:06.939935 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.52319708 podStartE2EDuration="17.939916549s" podCreationTimestamp="2025-05-09 00:37:49 +0000 UTC" firstStartedPulling="2025-05-09 00:38:05.578061447 +0000 UTC m=+74.860992886" lastFinishedPulling="2025-05-09 00:38:05.994780916 +0000 UTC m=+75.277712355" observedRunningTime="2025-05-09 00:38:06.939746312 +0000 UTC m=+76.222677772" watchObservedRunningTime="2025-05-09 00:38:06.939916549 +0000 UTC m=+76.222847988" May 9 00:38:07.075790 systemd-networkd[1399]: cali5ec59c6bf6e: Gained IPv6LL May 9 00:38:07.346325 kubelet[1775]: E0509 00:38:07.346250 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:38:08.347169 kubelet[1775]: E0509 00:38:08.347059 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:38:09.347929 kubelet[1775]: E0509 00:38:09.347849 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:38:10.348670 kubelet[1775]: E0509 00:38:10.348616 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:38:11.078898 kubelet[1775]: E0509 00:38:11.078821 1775 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:38:11.349653 kubelet[1775]: E0509 00:38:11.349410 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 9 00:38:12.349599 kubelet[1775]: E0509 00:38:12.349544 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"