May 13 23:55:54.924301 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 13 22:08:35 -00 2025
May 13 23:55:54.924324 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 13 23:55:54.924333 kernel: BIOS-provided physical RAM map:
May 13 23:55:54.924340 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
May 13 23:55:54.924346 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
May 13 23:55:54.924355 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
May 13 23:55:54.924363 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
May 13 23:55:54.924370 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
May 13 23:55:54.924376 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
May 13 23:55:54.924383 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
May 13 23:55:54.924390 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
May 13 23:55:54.924397 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
May 13 23:55:54.924403 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
May 13 23:55:54.924410 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
May 13 23:55:54.924421 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
May 13 23:55:54.924428 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
May 13 23:55:54.924435 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 13 23:55:54.924442 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 13 23:55:54.924449 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 13 23:55:54.924459 kernel: NX (Execute Disable) protection: active
May 13 23:55:54.924466 kernel: APIC: Static calls initialized
May 13 23:55:54.924473 kernel: e820: update [mem 0x9a186018-0x9a18fc57] usable ==> usable
May 13 23:55:54.924480 kernel: e820: update [mem 0x9a186018-0x9a18fc57] usable ==> usable
May 13 23:55:54.924488 kernel: e820: update [mem 0x9a149018-0x9a185e57] usable ==> usable
May 13 23:55:54.924495 kernel: e820: update [mem 0x9a149018-0x9a185e57] usable ==> usable
May 13 23:55:54.924501 kernel: extended physical RAM map:
May 13 23:55:54.924509 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
May 13 23:55:54.924516 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
May 13 23:55:54.924523 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
May 13 23:55:54.924530 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
May 13 23:55:54.924540 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a149017] usable
May 13 23:55:54.924547 kernel: reserve setup_data: [mem 0x000000009a149018-0x000000009a185e57] usable
May 13 23:55:54.924554 kernel: reserve setup_data: [mem 0x000000009a185e58-0x000000009a186017] usable
May 13 23:55:54.924562 kernel: reserve setup_data: [mem 0x000000009a186018-0x000000009a18fc57] usable
May 13 23:55:54.924569 kernel: reserve setup_data: [mem 0x000000009a18fc58-0x000000009b8ecfff] usable
May 13 23:55:54.924576 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
May 13 23:55:54.924583 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
May 13 23:55:54.924590 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
May 13 23:55:54.924597 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
May 13 23:55:54.924605 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
May 13 23:55:54.924618 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
May 13 23:55:54.924625 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
May 13 23:55:54.924633 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
May 13 23:55:54.924640 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 13 23:55:54.924647 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 13 23:55:54.924655 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 13 23:55:54.924664 kernel: efi: EFI v2.7 by EDK II
May 13 23:55:54.924672 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1f7018 RNG=0x9bb73018
May 13 23:55:54.924680 kernel: random: crng init done
May 13 23:55:54.924687 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
May 13 23:55:54.924694 kernel: secureboot: Secure boot enabled
May 13 23:55:54.924702 kernel: SMBIOS 2.8 present.
May 13 23:55:54.924709 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 13 23:55:54.924717 kernel: Hypervisor detected: KVM
May 13 23:55:54.924724 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 13 23:55:54.924732 kernel: kvm-clock: using sched offset of 4520881780 cycles
May 13 23:55:54.924739 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 13 23:55:54.924749 kernel: tsc: Detected 2794.746 MHz processor
May 13 23:55:54.924757 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 23:55:54.924765 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 23:55:54.924773 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
May 13 23:55:54.924781 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 13 23:55:54.924788 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 23:55:54.924796 kernel: Using GB pages for direct mapping
May 13 23:55:54.924804 kernel: ACPI: Early table checksum verification disabled
May 13 23:55:54.924811 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
May 13 23:55:54.924821 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 13 23:55:54.924829 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:55:54.924836 kernel: ACPI: DSDT 0x000000009BB7A000 002225 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:55:54.924844 kernel: ACPI: FACS 0x000000009BBDD000 000040
May 13 23:55:54.924852 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:55:54.924859 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:55:54.924867 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:55:54.924875 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:55:54.924883 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 13 23:55:54.924892 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
May 13 23:55:54.924900 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c224]
May 13 23:55:54.924908 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
May 13 23:55:54.924915 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
May 13 23:55:54.924923 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
May 13 23:55:54.924930 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
May 13 23:55:54.924938 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
May 13 23:55:54.924945 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
May 13 23:55:54.924953 kernel: No NUMA configuration found
May 13 23:55:54.924963 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
May 13 23:55:54.924971 kernel: NODE_DATA(0) allocated [mem 0x9bf59000-0x9bf5efff]
May 13 23:55:54.924988 kernel: Zone ranges:
May 13 23:55:54.924996 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 23:55:54.925004 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
May 13 23:55:54.925012 kernel: Normal empty
May 13 23:55:54.925019 kernel: Movable zone start for each node
May 13 23:55:54.925027 kernel: Early memory node ranges
May 13 23:55:54.925034 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
May 13 23:55:54.925042 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
May 13 23:55:54.925052 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
May 13 23:55:54.925059 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
May 13 23:55:54.925067 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
May 13 23:55:54.925086 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
May 13 23:55:54.925094 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 23:55:54.925101 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
May 13 23:55:54.925109 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 13 23:55:54.925116 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 13 23:55:54.925124 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 13 23:55:54.925148 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
May 13 23:55:54.925155 kernel: ACPI: PM-Timer IO Port: 0x608
May 13 23:55:54.925163 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 13 23:55:54.925171 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 13 23:55:54.925179 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 13 23:55:54.925186 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 13 23:55:54.925194 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 23:55:54.925202 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 13 23:55:54.925209 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 13 23:55:54.925219 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 23:55:54.925227 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 13 23:55:54.925235 kernel: TSC deadline timer available
May 13 23:55:54.925242 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 13 23:55:54.925250 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 13 23:55:54.925258 kernel: kvm-guest: KVM setup pv remote TLB flush
May 13 23:55:54.925273 kernel: kvm-guest: setup PV sched yield
May 13 23:55:54.925283 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 13 23:55:54.925290 kernel: Booting paravirtualized kernel on KVM
May 13 23:55:54.925299 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 23:55:54.925307 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 13 23:55:54.925315 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 13 23:55:54.925436 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 13 23:55:54.925444 kernel: pcpu-alloc: [0] 0 1 2 3
May 13 23:55:54.925451 kernel: kvm-guest: PV spinlocks enabled
May 13 23:55:54.925459 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 13 23:55:54.925469 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 13 23:55:54.925477 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 23:55:54.925485 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 23:55:54.925493 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 23:55:54.925503 kernel: Fallback order for Node 0: 0
May 13 23:55:54.925511 kernel: Built 1 zonelists, mobility grouping on. Total pages: 625927
May 13 23:55:54.925519 kernel: Policy zone: DMA32
May 13 23:55:54.925527 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 23:55:54.925535 kernel: Memory: 2368308K/2552216K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 183652K reserved, 0K cma-reserved)
May 13 23:55:54.925546 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 23:55:54.925554 kernel: ftrace: allocating 37993 entries in 149 pages
May 13 23:55:54.925561 kernel: ftrace: allocated 149 pages with 4 groups
May 13 23:55:54.925569 kernel: Dynamic Preempt: voluntary
May 13 23:55:54.925577 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 23:55:54.925586 kernel: rcu: RCU event tracing is enabled.
May 13 23:55:54.925594 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 23:55:54.925602 kernel: Trampoline variant of Tasks RCU enabled.
May 13 23:55:54.925610 kernel: Rude variant of Tasks RCU enabled.
May 13 23:55:54.925620 kernel: Tracing variant of Tasks RCU enabled.
May 13 23:55:54.925628 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 23:55:54.925636 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 23:55:54.925644 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 13 23:55:54.925652 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 23:55:54.925660 kernel: Console: colour dummy device 80x25
May 13 23:55:54.925668 kernel: printk: console [ttyS0] enabled
May 13 23:55:54.925676 kernel: ACPI: Core revision 20230628
May 13 23:55:54.925684 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 13 23:55:54.925695 kernel: APIC: Switch to symmetric I/O mode setup
May 13 23:55:54.925702 kernel: x2apic enabled
May 13 23:55:54.925710 kernel: APIC: Switched APIC routing to: physical x2apic
May 13 23:55:54.925718 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 13 23:55:54.925727 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 13 23:55:54.925734 kernel: kvm-guest: setup PV IPIs
May 13 23:55:54.925743 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 13 23:55:54.925751 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 13 23:55:54.925759 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
May 13 23:55:54.925770 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 13 23:55:54.925778 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 13 23:55:54.925785 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 13 23:55:54.925794 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 23:55:54.925801 kernel: Spectre V2 : Mitigation: Retpolines
May 13 23:55:54.925809 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 13 23:55:54.925817 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 13 23:55:54.925825 kernel: RETBleed: Mitigation: untrained return thunk
May 13 23:55:54.925834 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 13 23:55:54.925844 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 13 23:55:54.925852 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 13 23:55:54.925860 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 13 23:55:54.925868 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 13 23:55:54.925876 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 13 23:55:54.925884 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 13 23:55:54.925892 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 13 23:55:54.925900 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 13 23:55:54.925910 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 13 23:55:54.925918 kernel: Freeing SMP alternatives memory: 32K
May 13 23:55:54.925926 kernel: pid_max: default: 32768 minimum: 301
May 13 23:55:54.925934 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 23:55:54.925942 kernel: landlock: Up and running.
May 13 23:55:54.925950 kernel: SELinux: Initializing.
May 13 23:55:54.925958 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 23:55:54.925966 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 23:55:54.925983 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 13 23:55:54.925994 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 23:55:54.926003 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 23:55:54.926011 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 23:55:54.926019 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 13 23:55:54.926027 kernel: ... version: 0
May 13 23:55:54.926035 kernel: ... bit width: 48
May 13 23:55:54.926043 kernel: ... generic registers: 6
May 13 23:55:54.926050 kernel: ... value mask: 0000ffffffffffff
May 13 23:55:54.926058 kernel: ... max period: 00007fffffffffff
May 13 23:55:54.926069 kernel: ... fixed-purpose events: 0
May 13 23:55:54.926094 kernel: ... event mask: 000000000000003f
May 13 23:55:54.926102 kernel: signal: max sigframe size: 1776
May 13 23:55:54.926110 kernel: rcu: Hierarchical SRCU implementation.
May 13 23:55:54.926118 kernel: rcu: Max phase no-delay instances is 400.
May 13 23:55:54.926126 kernel: smp: Bringing up secondary CPUs ...
May 13 23:55:54.926134 kernel: smpboot: x86: Booting SMP configuration:
May 13 23:55:54.926141 kernel: .... node #0, CPUs: #1 #2 #3
May 13 23:55:54.926149 kernel: smp: Brought up 1 node, 4 CPUs
May 13 23:55:54.926160 kernel: smpboot: Max logical packages: 1
May 13 23:55:54.926168 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
May 13 23:55:54.926176 kernel: devtmpfs: initialized
May 13 23:55:54.926184 kernel: x86/mm: Memory block size: 128MB
May 13 23:55:54.926192 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
May 13 23:55:54.926200 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
May 13 23:55:54.926208 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 23:55:54.926216 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 23:55:54.926224 kernel: pinctrl core: initialized pinctrl subsystem
May 13 23:55:54.926235 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 23:55:54.926243 kernel: audit: initializing netlink subsys (disabled)
May 13 23:55:54.926251 kernel: audit: type=2000 audit(1747180555.101:1): state=initialized audit_enabled=0 res=1
May 13 23:55:54.926259 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 23:55:54.926267 kernel: thermal_sys: Registered thermal governor 'user_space'
May 13 23:55:54.926275 kernel: cpuidle: using governor menu
May 13 23:55:54.926283 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 23:55:54.926291 kernel: dca service started, version 1.12.1
May 13 23:55:54.926299 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
May 13 23:55:54.926309 kernel: PCI: Using configuration type 1 for base access
May 13 23:55:54.926317 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 23:55:54.926325 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 23:55:54.926333 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 13 23:55:54.926341 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 23:55:54.926349 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 13 23:55:54.926357 kernel: ACPI: Added _OSI(Module Device)
May 13 23:55:54.926365 kernel: ACPI: Added _OSI(Processor Device)
May 13 23:55:54.926372 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 23:55:54.926383 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 23:55:54.926391 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 23:55:54.926398 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 13 23:55:54.926406 kernel: ACPI: Interpreter enabled
May 13 23:55:54.926414 kernel: ACPI: PM: (supports S0 S5)
May 13 23:55:54.926422 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 23:55:54.926430 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 23:55:54.926438 kernel: PCI: Using E820 reservations for host bridge windows
May 13 23:55:54.926446 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 13 23:55:54.926456 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 23:55:54.926637 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 23:55:54.926770 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 13 23:55:54.926895 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 13 23:55:54.926906 kernel: PCI host bridge to bus 0000:00
May 13 23:55:54.927044 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 13 23:55:54.927208 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 13 23:55:54.927330 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 13 23:55:54.927457 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 13 23:55:54.927573 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 13 23:55:54.927687 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 13 23:55:54.927801 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 23:55:54.927942 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 13 23:55:54.928113 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 13 23:55:54.928244 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 13 23:55:54.928368 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 13 23:55:54.928558 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 13 23:55:54.928686 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 13 23:55:54.928822 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 13 23:55:54.928961 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 13 23:55:54.929120 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 13 23:55:54.929277 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 13 23:55:54.929434 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
May 13 23:55:54.929591 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 13 23:55:54.929729 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 13 23:55:54.929856 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 13 23:55:54.930050 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
May 13 23:55:54.930212 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 13 23:55:54.930341 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 13 23:55:54.930467 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 13 23:55:54.930594 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
May 13 23:55:54.930721 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 13 23:55:54.930855 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 13 23:55:54.930991 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 13 23:55:54.931242 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 13 23:55:54.931429 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 13 23:55:54.931589 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 13 23:55:54.931726 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 13 23:55:54.931854 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 13 23:55:54.931865 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 13 23:55:54.931877 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 13 23:55:54.931886 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 13 23:55:54.931894 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 13 23:55:54.931902 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 13 23:55:54.931910 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 13 23:55:54.931918 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 13 23:55:54.931926 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 13 23:55:54.931934 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 13 23:55:54.931942 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 13 23:55:54.931952 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 13 23:55:54.931960 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 13 23:55:54.931968 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 13 23:55:54.931985 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 13 23:55:54.931993 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 13 23:55:54.932001 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 13 23:55:54.932009 kernel: iommu: Default domain type: Translated
May 13 23:55:54.932017 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 13 23:55:54.932025 kernel: efivars: Registered efivars operations
May 13 23:55:54.932036 kernel: PCI: Using ACPI for IRQ routing
May 13 23:55:54.932044 kernel: PCI: pci_cache_line_size set to 64 bytes
May 13 23:55:54.932052 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
May 13 23:55:54.932060 kernel: e820: reserve RAM buffer [mem 0x9a149018-0x9bffffff]
May 13 23:55:54.932068 kernel: e820: reserve RAM buffer [mem 0x9a186018-0x9bffffff]
May 13 23:55:54.932090 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
May 13 23:55:54.932098 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
May 13 23:55:54.932226 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 13 23:55:54.932351 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 13 23:55:54.932481 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 13 23:55:54.932491 kernel: vgaarb: loaded
May 13 23:55:54.932499 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 13 23:55:54.932508 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 13 23:55:54.932516 kernel: clocksource: Switched to clocksource kvm-clock
May 13 23:55:54.932524 kernel: VFS: Disk quotas dquot_6.6.0
May 13 23:55:54.932532 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 23:55:54.932540 kernel: pnp: PnP ACPI init
May 13 23:55:54.932678 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 13 23:55:54.932693 kernel: pnp: PnP ACPI: found 6 devices
May 13 23:55:54.932701 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 13 23:55:54.932709 kernel: NET: Registered PF_INET protocol family
May 13 23:55:54.932717 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 23:55:54.932725 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 23:55:54.932733 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 23:55:54.932741 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 23:55:54.932750 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 23:55:54.932760 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 23:55:54.932778 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 23:55:54.932793 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 23:55:54.932810 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 23:55:54.932825 kernel: NET: Registered PF_XDP protocol family
May 13 23:55:54.932992 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 13 23:55:54.933149 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 13 23:55:54.933266 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 13 23:55:54.933385 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 13 23:55:54.933498 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 13 23:55:54.933611 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 13 23:55:54.933751 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 13 23:55:54.933879 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 13 23:55:54.933890 kernel: PCI: CLS 0 bytes, default 64
May 13 23:55:54.933898 kernel: Initialise system trusted keyrings
May 13 23:55:54.933906 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 23:55:54.933914 kernel: Key type asymmetric registered
May 13 23:55:54.933926 kernel: Asymmetric key parser 'x509' registered
May 13 23:55:54.933934 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 13 23:55:54.933942 kernel: io scheduler mq-deadline registered
May 13 23:55:54.933950 kernel: io scheduler kyber registered
May 13 23:55:54.933958 kernel: io scheduler bfq registered
May 13 23:55:54.933966 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 13 23:55:54.933987 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 13 23:55:54.934016 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 13 23:55:54.934028 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 13 23:55:54.934038 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 23:55:54.934047 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 13 23:55:54.934055 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 13 23:55:54.934063 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 13 23:55:54.934071 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 13 23:55:54.934219 kernel: rtc_cmos 00:04: RTC can wake from S4
May 13 23:55:54.934231 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 13 23:55:54.934349 kernel: rtc_cmos 00:04: registered as rtc0
May 13 23:55:54.934477 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T23:55:54 UTC (1747180554)
May 13 23:55:54.934597 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 13 23:55:54.934608 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 13 23:55:54.934616 kernel: efifb: probing for efifb
May 13 23:55:54.934625 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 13 23:55:54.934633 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 13 23:55:54.934641 kernel: efifb: scrolling: redraw
May 13 23:55:54.934650 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 13 23:55:54.934661 kernel: Console: switching to colour frame buffer device 160x50
May 13 23:55:54.934670 kernel: fb0: EFI VGA frame buffer device
May 13 23:55:54.934678 kernel: pstore: Using crash dump compression: deflate
May 13 23:55:54.934686 kernel: pstore: Registered efi_pstore as persistent store backend
May 13 23:55:54.934694 kernel: NET: Registered PF_INET6 protocol family
May 13 23:55:54.934703 kernel: Segment Routing with IPv6
May 13 23:55:54.934711 kernel: In-situ OAM (IOAM) with IPv6
May 13 23:55:54.934719 kernel: NET: Registered PF_PACKET protocol family
May 13 23:55:54.934727 kernel: Key type dns_resolver registered
May 13 23:55:54.934735 kernel: IPI shorthand broadcast: enabled
May 13 23:55:54.934746 kernel: sched_clock: Marking stable (675004275, 155141226)->(910311474, -80165973)
May 13 23:55:54.934756 kernel: registered taskstats version 1
May 13 23:55:54.934765 kernel: Loading compiled-in X.509 certificates
May 13 23:55:54.934773 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 166efda032ca4d6e9037c569aca9b53585ee6f94'
May 13 23:55:54.934781 kernel: Key type .fscrypt registered
May 13 23:55:54.934792 kernel: Key type fscrypt-provisioning registered
May 13 23:55:54.934800 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 23:55:54.934808 kernel: ima: Allocated hash algorithm: sha1
May 13 23:55:54.934817 kernel: ima: No architecture policies found
May 13 23:55:54.934825 kernel: clk: Disabling unused clocks
May 13 23:55:54.934833 kernel: Freeing unused kernel image (initmem) memory: 43604K
May 13 23:55:54.934841 kernel: Write protecting the kernel read-only data: 40960k
May 13 23:55:54.934850 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K
May 13 23:55:54.934858 kernel: Run /init as init process
May 13 23:55:54.934868 kernel: with arguments:
May 13 23:55:54.934876 kernel: /init
May 13 23:55:54.934884 kernel: with environment:
May 13 23:55:54.934892 kernel: HOME=/
May 13 23:55:54.934900 kernel: TERM=linux
May 13 23:55:54.934908 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 23:55:54.934918 systemd[1]: Successfully made /usr/ read-only.
May 13 23:55:54.934929 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:55:54.934941 systemd[1]: Detected virtualization kvm.
May 13 23:55:54.934950 systemd[1]: Detected architecture x86-64.
May 13 23:55:54.934958 systemd[1]: Running in initrd.
May 13 23:55:54.934967 systemd[1]: No hostname configured, using default hostname.
May 13 23:55:54.934984 systemd[1]: Hostname set to .
May 13 23:55:54.934993 systemd[1]: Initializing machine ID from VM UUID.
May 13 23:55:54.935002 systemd[1]: Queued start job for default target initrd.target.
May 13 23:55:54.935011 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:55:54.935022 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:55:54.935032 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 23:55:54.935041 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:55:54.935050 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 23:55:54.935060 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 23:55:54.935070 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 23:55:54.935140 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 23:55:54.935153 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:55:54.935162 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 23:55:54.935171 systemd[1]: Reached target paths.target - Path Units.
May 13 23:55:54.935180 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:55:54.935189 systemd[1]: Reached target swap.target - Swaps.
May 13 23:55:54.935197 systemd[1]: Reached target timers.target - Timer Units.
May 13 23:55:54.935206 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:55:54.935215 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:55:54.935226 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 23:55:54.935235 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 13 23:55:54.935244 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:55:54.935253 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:55:54.935261 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:55:54.935270 systemd[1]: Reached target sockets.target - Socket Units.
May 13 23:55:54.935279 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 23:55:54.935288 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:55:54.935297 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 23:55:54.935308 systemd[1]: Starting systemd-fsck-usr.service...
May 13 23:55:54.935317 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:55:54.935326 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:55:54.935335 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:55:54.935347 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 23:55:54.935359 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:55:54.935371 systemd[1]: Finished systemd-fsck-usr.service.
May 13 23:55:54.935405 systemd-journald[191]: Collecting audit messages is disabled.
May 13 23:55:54.935428 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 23:55:54.935437 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:55:54.935446 systemd-journald[191]: Journal started
May 13 23:55:54.935470 systemd-journald[191]: Runtime Journal (/run/log/journal/701e0c9ee7eb48d08311356ac1cc81e1) is 6M, max 47.9M, 41.9M free.
May 13 23:55:54.931621 systemd-modules-load[192]: Inserted module 'overlay'
May 13 23:55:54.938122 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:55:54.943276 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:55:54.944711 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 23:55:54.948614 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:55:54.952594 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:55:54.963109 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 23:55:54.965580 systemd-modules-load[192]: Inserted module 'br_netfilter'
May 13 23:55:54.966886 kernel: Bridge firewalling registered
May 13 23:55:54.970708 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:55:54.975047 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:55:54.979868 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:55:54.984605 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:55:54.991769 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:55:54.994167 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:55:55.012310 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:55:55.013632 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 23:55:55.041129 dracut-cmdline[230]: dracut-dracut-053
May 13 23:55:55.045141 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 13 23:55:55.050277 systemd-resolved[224]: Positive Trust Anchors:
May 13 23:55:55.050286 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:55:55.050318 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:55:55.054111 systemd-resolved[224]: Defaulting to hostname 'linux'.
May 13 23:55:55.055710 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:55:55.062397 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 23:55:55.147126 kernel: SCSI subsystem initialized
May 13 23:55:55.159138 kernel: Loading iSCSI transport class v2.0-870.
May 13 23:55:55.171117 kernel: iscsi: registered transport (tcp)
May 13 23:55:55.193518 kernel: iscsi: registered transport (qla4xxx)
May 13 23:55:55.193642 kernel: QLogic iSCSI HBA Driver
May 13 23:55:55.251324 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 23:55:55.253482 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 23:55:55.298284 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 23:55:55.298365 kernel: device-mapper: uevent: version 1.0.3
May 13 23:55:55.298383 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 23:55:55.346131 kernel: raid6: avx2x4 gen() 20619 MB/s
May 13 23:55:55.363112 kernel: raid6: avx2x2 gen() 19586 MB/s
May 13 23:55:55.380271 kernel: raid6: avx2x1 gen() 19248 MB/s
May 13 23:55:55.380345 kernel: raid6: using algorithm avx2x4 gen() 20619 MB/s
May 13 23:55:55.398295 kernel: raid6: .... xor() 8107 MB/s, rmw enabled
May 13 23:55:55.398352 kernel: raid6: using avx2x2 recovery algorithm
May 13 23:55:55.424175 kernel: xor: automatically using best checksumming function avx
May 13 23:55:55.601135 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 23:55:55.616827 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:55:55.619273 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:55:55.652582 systemd-udevd[413]: Using default interface naming scheme 'v255'.
May 13 23:55:55.659667 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:55:55.681163 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 23:55:55.708042 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
May 13 23:55:55.745668 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:55:55.747701 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:55:55.827020 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:55:55.830926 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 23:55:55.871315 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 23:55:55.875319 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 13 23:55:55.879137 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 23:55:55.876831 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:55:55.886401 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 23:55:55.886450 kernel: GPT:9289727 != 19775487
May 13 23:55:55.886467 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 23:55:55.886482 kernel: GPT:9289727 != 19775487
May 13 23:55:55.886496 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 23:55:55.881875 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:55:55.890377 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:55:55.888235 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:55:55.894424 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 23:55:55.898092 kernel: cryptd: max_cpu_qlen set to 1000
May 13 23:55:55.911117 kernel: libata version 3.00 loaded.
May 13 23:55:55.958821 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:55:55.961805 kernel: AVX2 version of gcm_enc/dec engaged.
May 13 23:55:55.963113 kernel: AES CTR mode by8 optimization enabled
May 13 23:55:55.967109 kernel: ahci 0000:00:1f.2: version 3.0
May 13 23:55:55.969366 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 13 23:55:55.973013 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:55:55.973275 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:55:55.982467 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 13 23:55:55.982661 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (473)
May 13 23:55:55.982680 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 13 23:55:55.980461 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:55:55.986456 kernel: BTRFS: device fsid d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (474)
May 13 23:55:55.983591 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:55:56.024844 kernel: scsi host0: ahci
May 13 23:55:56.025124 kernel: scsi host1: ahci
May 13 23:55:56.025290 kernel: scsi host2: ahci
May 13 23:55:56.025436 kernel: scsi host3: ahci
May 13 23:55:55.983832 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:55:56.030553 kernel: scsi host4: ahci
May 13 23:55:56.030792 kernel: scsi host5: ahci
May 13 23:55:56.031008 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
May 13 23:55:56.031025 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
May 13 23:55:56.031040 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
May 13 23:55:55.991675 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:55:56.035363 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
May 13 23:55:56.035385 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
May 13 23:55:56.035396 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
May 13 23:55:56.000609 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:55:56.027602 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:55:56.042024 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 23:55:56.068379 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 23:55:56.082136 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 23:55:56.092010 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 23:55:56.110610 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 23:55:56.113126 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 23:55:56.114453 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:55:56.114510 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:55:56.115785 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:55:56.118984 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:55:56.120912 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:55:56.141387 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:55:56.150933 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:55:56.192628 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:55:56.345957 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 13 23:55:56.346043 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 13 23:55:56.346056 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 13 23:55:56.346067 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 13 23:55:56.347139 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 13 23:55:56.348118 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 13 23:55:56.349114 kernel: ata3.00: applying bridge limits
May 13 23:55:56.349143 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 13 23:55:56.350118 kernel: ata3.00: configured for UDMA/100
May 13 23:55:56.351115 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 13 23:55:56.388368 disk-uuid[555]: Primary Header is updated.
May 13 23:55:56.388368 disk-uuid[555]: Secondary Entries is updated.
May 13 23:55:56.388368 disk-uuid[555]: Secondary Header is updated.
May 13 23:55:56.393128 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:55:56.393193 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 13 23:55:56.394551 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 13 23:55:56.427126 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:55:56.433129 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 13 23:55:57.453184 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:55:57.453861 disk-uuid[579]: The operation has completed successfully.
May 13 23:55:57.491670 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 23:55:57.491791 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 23:55:57.541604 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 23:55:57.559833 sh[594]: Success
May 13 23:55:57.602113 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 13 23:55:57.637893 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 23:55:57.645494 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 23:55:57.661365 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 23:55:57.672204 kernel: BTRFS info (device dm-0): first mount of filesystem d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8
May 13 23:55:57.672234 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 13 23:55:57.672252 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 23:55:57.673239 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 23:55:57.674615 kernel: BTRFS info (device dm-0): using free space tree
May 13 23:55:57.678902 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 23:55:57.681408 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 23:55:57.682362 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 23:55:57.685480 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 23:55:57.717275 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:55:57.717340 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 23:55:57.717360 kernel: BTRFS info (device vda6): using free space tree
May 13 23:55:57.721106 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 23:55:57.725103 kernel: BTRFS info (device vda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:55:57.782334 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:55:57.795425 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:55:57.838207 systemd-networkd[770]: lo: Link UP
May 13 23:55:57.838217 systemd-networkd[770]: lo: Gained carrier
May 13 23:55:57.839890 systemd-networkd[770]: Enumeration completed
May 13 23:55:57.840234 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:55:57.840268 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:55:57.840273 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:55:57.845173 systemd-networkd[770]: eth0: Link UP
May 13 23:55:57.845177 systemd-networkd[770]: eth0: Gained carrier
May 13 23:55:57.845183 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:55:57.846874 systemd[1]: Reached target network.target - Network.
May 13 23:55:57.859156 systemd-networkd[770]: eth0: DHCPv4 address 10.0.0.86/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 23:55:57.898763 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 23:55:57.900793 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 23:55:57.970684 ignition[775]: Ignition 2.20.0
May 13 23:55:57.970697 ignition[775]: Stage: fetch-offline
May 13 23:55:57.970740 ignition[775]: no configs at "/usr/lib/ignition/base.d"
May 13 23:55:57.970750 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:55:57.970849 ignition[775]: parsed url from cmdline: ""
May 13 23:55:57.970853 ignition[775]: no config URL provided
May 13 23:55:57.970858 ignition[775]: reading system config file "/usr/lib/ignition/user.ign"
May 13 23:55:57.970868 ignition[775]: no config at "/usr/lib/ignition/user.ign"
May 13 23:55:57.970897 ignition[775]: op(1): [started] loading QEMU firmware config module
May 13 23:55:57.970902 ignition[775]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 23:55:57.984069 ignition[775]: op(1): [finished] loading QEMU firmware config module
May 13 23:55:58.028735 ignition[775]: parsing config with SHA512: 01439ca3435ed86bf3327f1a78ab0318da21a6b7843394f15b1618ae84bf91bd0955de20441c0dcd0cfa49331f50ef3cfbb489798aa21204bc0b72c81cb7e442
May 13 23:55:58.035485 unknown[775]: fetched base config from "system"
May 13 23:55:58.035932 unknown[775]: fetched user config from "qemu"
May 13 23:55:58.036418 ignition[775]: fetch-offline: fetch-offline passed
May 13 23:55:58.036505 ignition[775]: Ignition finished successfully
May 13 23:55:58.038745 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:55:58.040824 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 23:55:58.041795 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 23:55:58.070344 ignition[785]: Ignition 2.20.0
May 13 23:55:58.070359 ignition[785]: Stage: kargs
May 13 23:55:58.070568 ignition[785]: no configs at "/usr/lib/ignition/base.d"
May 13 23:55:58.070584 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:55:58.071677 ignition[785]: kargs: kargs passed
May 13 23:55:58.071734 ignition[785]: Ignition finished successfully
May 13 23:55:58.075429 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 23:55:58.078464 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 23:55:58.104675 ignition[794]: Ignition 2.20.0
May 13 23:55:58.104690 ignition[794]: Stage: disks
May 13 23:55:58.104863 ignition[794]: no configs at "/usr/lib/ignition/base.d"
May 13 23:55:58.104875 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:55:58.107771 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 23:55:58.105658 ignition[794]: disks: disks passed
May 13 23:55:58.110124 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 23:55:58.105707 ignition[794]: Ignition finished successfully
May 13 23:55:58.112219 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 23:55:58.113850 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:55:58.115695 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 23:55:58.116992 systemd[1]: Reached target basic.target - Basic System.
May 13 23:55:58.120395 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 23:55:58.151033 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 13 23:55:58.166065 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 23:55:58.169654 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 23:55:58.281110 kernel: EXT4-fs (vda9): mounted filesystem c413e98b-da35-46b1-9852-45706e1b1f52 r/w with ordered data mode. Quota mode: none.
May 13 23:55:58.281729 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 23:55:58.283224 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 23:55:58.285944 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:55:58.287618 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 23:55:58.288916 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 23:55:58.288956 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 23:55:58.288977 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:55:58.307944 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 23:55:58.311826 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 23:55:58.317932 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (812)
May 13 23:55:58.317959 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:55:58.317974 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 23:55:58.317995 kernel: BTRFS info (device vda6): using free space tree
May 13 23:55:58.318009 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 23:55:58.320891 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:55:58.349193 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
May 13 23:55:58.353578 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
May 13 23:55:58.357918 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
May 13 23:55:58.362354 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 23:55:58.441516 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 23:55:58.443890 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 23:55:58.445732 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 23:55:58.475138 kernel: BTRFS info (device vda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:55:58.485536 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 23:55:58.578177 ignition[929]: INFO : Ignition 2.20.0
May 13 23:55:58.578177 ignition[929]: INFO : Stage: mount
May 13 23:55:58.580121 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:55:58.580121 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:55:58.580121 ignition[929]: INFO : mount: mount passed
May 13 23:55:58.580121 ignition[929]: INFO : Ignition finished successfully
May 13 23:55:58.581483 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 23:55:58.584586 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 23:55:58.671610 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 23:55:58.673311 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:55:58.710120 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (939)
May 13 23:55:58.710186 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:55:58.712689 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 23:55:58.712715 kernel: BTRFS info (device vda6): using free space tree
May 13 23:55:58.715105 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 23:55:58.716905 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:55:58.757688 ignition[956]: INFO : Ignition 2.20.0
May 13 23:55:58.758931 ignition[956]: INFO : Stage: files
May 13 23:55:58.758931 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:55:58.758931 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:55:58.762822 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
May 13 23:55:58.762822 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 23:55:58.762822 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 23:55:58.762822 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 23:55:58.762822 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 23:55:58.762822 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 23:55:58.762803 unknown[956]: wrote ssh authorized keys file for user: core
May 13 23:55:58.772206 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 23:55:58.772206 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 13 23:55:58.848106 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 23:55:59.129903 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 23:55:59.129903 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 13 23:55:59.133958 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 13 23:55:59.133958 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:55:59.133958 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:55:59.133958 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:55:59.141158 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:55:59.141158 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:55:59.144799 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:55:59.146756 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:55:59.148795 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:55:59.150723 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 23:55:59.153327 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 23:55:59.155800 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 23:55:59.157971 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
May 13 23:55:59.297264 systemd-networkd[770]: eth0: Gained IPv6LL
May 13 23:55:59.519628 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 13 23:56:00.029395 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 23:56:00.029395 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 13 23:56:00.033667 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:56:00.033667 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:56:00.033667 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 13 23:56:00.033667 ignition[956]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 13 23:56:00.033667 ignition[956]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 23:56:00.033667 ignition[956]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 23:56:00.033667 ignition[956]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 13 23:56:00.033667 ignition[956]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 13 23:56:00.053901 ignition[956]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 23:56:00.060043 ignition[956]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 23:56:00.062266 ignition[956]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 23:56:00.062266 ignition[956]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 13 23:56:00.062266 ignition[956]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 13 23:56:00.062266 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:56:00.062266 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:56:00.062266 ignition[956]: INFO : files: files passed
May 13 23:56:00.062266 ignition[956]: INFO : Ignition finished successfully
May 13 23:56:00.074038 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 23:56:00.075575 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 23:56:00.079125 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 23:56:00.104722 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 23:56:00.104882 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 23:56:00.108554 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 23:56:00.110193 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:56:00.110193 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:56:00.113846 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:56:00.112169 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:56:00.115530 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 23:56:00.119058 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 23:56:00.184286 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 23:56:00.184433 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 23:56:00.186968 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 23:56:00.189072 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 23:56:00.191430 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 23:56:00.192357 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 23:56:00.218316 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:56:00.221209 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 23:56:00.244405 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 23:56:00.245760 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:56:00.248197 systemd[1]: Stopped target timers.target - Timer Units.
May 13 23:56:00.250352 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 23:56:00.250472 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:56:00.252788 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 23:56:00.254508 systemd[1]: Stopped target basic.target - Basic System.
May 13 23:56:00.256511 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 23:56:00.258609 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:56:00.260634 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 23:56:00.262765 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 23:56:00.264843 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:56:00.267156 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 23:56:00.269367 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 23:56:00.271652 systemd[1]: Stopped target swap.target - Swaps.
May 13 23:56:00.273639 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 23:56:00.273794 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:56:00.276338 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 23:56:00.278244 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:56:00.280758 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 23:56:00.280895 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:56:00.283746 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 23:56:00.283923 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 23:56:00.286536 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 23:56:00.286681 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:56:00.289039 systemd[1]: Stopped target paths.target - Path Units.
May 13 23:56:00.291066 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 23:56:00.295161 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:56:00.297460 systemd[1]: Stopped target slices.target - Slice Units.
May 13 23:56:00.299534 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 23:56:00.301325 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 23:56:00.301444 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:56:00.303362 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 23:56:00.303445 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:56:00.305881 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 23:56:00.306017 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:56:00.307976 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 23:56:00.308099 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 23:56:00.311068 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 23:56:00.312973 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 23:56:00.313135 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:56:00.316266 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 23:56:00.317355 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 23:56:00.317530 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:56:00.319968 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 23:56:00.320188 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:56:00.328933 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 23:56:00.329109 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 23:56:00.337683 ignition[1012]: INFO : Ignition 2.20.0
May 13 23:56:00.337683 ignition[1012]: INFO : Stage: umount
May 13 23:56:00.337683 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:56:00.337683 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:56:00.337683 ignition[1012]: INFO : umount: umount passed
May 13 23:56:00.337683 ignition[1012]: INFO : Ignition finished successfully
May 13 23:56:00.339415 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 23:56:00.339590 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 23:56:00.342023 systemd[1]: Stopped target network.target - Network.
May 13 23:56:00.344232 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 23:56:00.344313 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 23:56:00.346116 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 23:56:00.346177 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 23:56:00.348612 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 23:56:00.348664 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 23:56:00.349366 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 23:56:00.349428 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 23:56:00.349872 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 23:56:00.350527 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 23:56:00.352137 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 23:56:00.355381 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 23:56:00.355542 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 23:56:00.359871 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 13 23:56:00.360317 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 23:56:00.360384 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:56:00.363197 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 13 23:56:00.367849 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 23:56:00.368011 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 23:56:00.372985 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 13 23:56:00.373269 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 23:56:00.373324 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:56:00.376560 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 23:56:00.377819 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 23:56:00.377906 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:56:00.378460 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 23:56:00.378522 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 23:56:00.380752 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 23:56:00.380807 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 23:56:00.383215 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:56:00.390517 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 23:56:00.400959 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 23:56:00.401114 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 23:56:00.405964 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 23:56:00.406183 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:56:00.408674 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 23:56:00.408726 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 23:56:00.410865 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 23:56:00.410906 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:56:00.412896 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 23:56:00.412968 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:56:00.415202 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 23:56:00.415254 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 23:56:00.417125 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:56:00.417187 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:56:00.420656 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 23:56:00.422648 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 23:56:00.422716 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:56:00.425057 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:56:00.425154 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:56:00.442400 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 23:56:00.442530 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 23:56:00.535776 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 23:56:00.535928 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 23:56:00.538550 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 23:56:00.540637 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 23:56:00.540693 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 23:56:00.543907 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 23:56:00.562552 systemd[1]: Switching root.
May 13 23:56:00.600304 systemd-journald[191]: Journal stopped
May 13 23:56:01.962660 systemd-journald[191]: Received SIGTERM from PID 1 (systemd).
May 13 23:56:01.962750 kernel: SELinux: policy capability network_peer_controls=1
May 13 23:56:01.962770 kernel: SELinux: policy capability open_perms=1
May 13 23:56:01.962793 kernel: SELinux: policy capability extended_socket_class=1
May 13 23:56:01.962826 kernel: SELinux: policy capability always_check_network=0
May 13 23:56:01.962842 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 23:56:01.962856 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 23:56:01.962869 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 23:56:01.962884 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 23:56:01.962901 kernel: audit: type=1403 audit(1747180561.049:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 23:56:01.962923 systemd[1]: Successfully loaded SELinux policy in 43.175ms.
May 13 23:56:01.962962 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.483ms.
May 13 23:56:01.962983 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:56:01.963005 systemd[1]: Detected virtualization kvm.
May 13 23:56:01.963024 systemd[1]: Detected architecture x86-64.
May 13 23:56:01.963041 systemd[1]: Detected first boot.
May 13 23:56:01.963058 systemd[1]: Initializing machine ID from VM UUID.
May 13 23:56:01.963167 zram_generator::config[1059]: No configuration found.
May 13 23:56:01.963196 kernel: Guest personality initialized and is inactive
May 13 23:56:01.963214 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 13 23:56:01.963231 kernel: Initialized host personality
May 13 23:56:01.963249 kernel: NET: Registered PF_VSOCK protocol family
May 13 23:56:01.963263 systemd[1]: Populated /etc with preset unit settings.
May 13 23:56:01.963279 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 13 23:56:01.963293 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 23:56:01.963307 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 23:56:01.963320 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 23:56:01.963334 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 23:56:01.963348 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 23:56:01.963363 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 23:56:01.963385 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 23:56:01.963401 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 23:56:01.963417 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 23:56:01.963433 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 23:56:01.963449 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 23:56:01.963465 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:56:01.963482 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:56:01.963498 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 23:56:01.963515 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 23:56:01.963535 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 23:56:01.963552 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:56:01.963568 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 13 23:56:01.963584 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:56:01.963597 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 23:56:01.963609 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 23:56:01.963621 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 23:56:01.963636 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 23:56:01.963648 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:56:01.963667 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:56:01.963679 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:56:01.963691 systemd[1]: Reached target swap.target - Swaps.
May 13 23:56:01.963705 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 23:56:01.963717 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 23:56:01.963729 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 13 23:56:01.963741 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:56:01.963758 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:56:01.963774 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:56:01.963790 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 23:56:01.963808 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 23:56:01.963838 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 23:56:01.963856 systemd[1]: Mounting media.mount - External Media Directory...
May 13 23:56:01.963874 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:56:01.963892 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 23:56:01.963911 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 23:56:01.963934 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 23:56:01.963953 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 23:56:01.963972 systemd[1]: Reached target machines.target - Containers.
May 13 23:56:01.963990 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 23:56:01.964007 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:56:01.964034 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:56:01.964046 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 23:56:01.964059 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:56:01.964099 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:56:01.964112 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:56:01.964125 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 23:56:01.964148 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:56:01.964167 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 23:56:01.964182 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 23:56:01.964197 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 23:56:01.964212 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 23:56:01.964227 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 23:56:01.964248 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:56:01.964264 kernel: fuse: init (API version 7.39)
May 13 23:56:01.964277 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:56:01.964294 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:56:01.964306 kernel: loop: module loaded
May 13 23:56:01.964319 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 23:56:01.964331 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 23:56:01.964343 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 13 23:56:01.964355 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:56:01.964371 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 23:56:01.964383 systemd[1]: Stopped verity-setup.service.
May 13 23:56:01.964395 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:56:01.964434 systemd-journald[1134]: Collecting audit messages is disabled.
May 13 23:56:01.964463 kernel: ACPI: bus type drm_connector registered
May 13 23:56:01.964478 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 23:56:01.964493 systemd-journald[1134]: Journal started
May 13 23:56:01.964522 systemd-journald[1134]: Runtime Journal (/run/log/journal/701e0c9ee7eb48d08311356ac1cc81e1) is 6M, max 47.9M, 41.9M free.
May 13 23:56:01.964572 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 23:56:01.688239 systemd[1]: Queued start job for default target multi-user.target.
May 13 23:56:01.703848 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 23:56:01.704405 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 23:56:01.968727 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:56:01.969573 systemd[1]: Mounted media.mount - External Media Directory.
May 13 23:56:01.971209 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 23:56:01.972485 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 23:56:01.973794 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 23:56:01.975182 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 23:56:01.976770 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:56:01.978371 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 23:56:01.978601 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 23:56:01.980146 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:56:01.980361 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:56:01.981871 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:56:01.982103 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:56:01.983539 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:56:01.983762 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:56:01.985337 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 23:56:01.985548 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 23:56:01.986983 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:56:01.987214 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:56:01.988668 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:56:01.990171 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 23:56:01.991985 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 23:56:01.994054 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 13 23:56:02.012005 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 23:56:02.015650 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 23:56:02.018551 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 23:56:02.020143 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 23:56:02.020220 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:56:02.023000 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 13 23:56:02.028979 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 23:56:02.031810 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 23:56:02.033293 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:56:02.035245 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 23:56:02.048305 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 23:56:02.051202 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:56:02.053703 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 23:56:02.055182 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:56:02.060421 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:56:02.062733 systemd-journald[1134]: Time spent on flushing to /var/log/journal/701e0c9ee7eb48d08311356ac1cc81e1 is 13.929ms for 1025 entries. May 13 23:56:02.062733 systemd-journald[1134]: System Journal (/var/log/journal/701e0c9ee7eb48d08311356ac1cc81e1) is 8M, max 195.6M, 187.6M free. May 13 23:56:02.551671 systemd-journald[1134]: Received client request to flush runtime journal. 
May 13 23:56:02.551752 kernel: loop0: detected capacity change from 0 to 151640
May 13 23:56:02.551783 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 23:56:02.551811 kernel: loop1: detected capacity change from 0 to 205544
May 13 23:56:02.551832 kernel: loop2: detected capacity change from 0 to 109808
May 13 23:56:02.551857 kernel: loop3: detected capacity change from 0 to 151640
May 13 23:56:02.551874 kernel: loop4: detected capacity change from 0 to 205544
May 13 23:56:02.551890 kernel: loop5: detected capacity change from 0 to 109808
May 13 23:56:02.551909 zram_generator::config[1229]: No configuration found.
May 13 23:56:02.066292 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 23:56:02.069215 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 23:56:02.072228 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 23:56:02.106286 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:56:02.108650 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 23:56:02.110285 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 23:56:02.116539 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 13 23:56:02.145488 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 13 23:56:02.158499 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:56:02.175319 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 23:56:02.181369 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:56:02.277328 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
May 13 23:56:02.277346 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
May 13 23:56:02.284017 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:56:02.345153 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 13 23:56:02.345859 (sd-merge)[1195]: Merged extensions into '/usr'.
May 13 23:56:02.350623 systemd[1]: Reload requested from client PID 1179 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 23:56:02.350635 systemd[1]: Reloading...
May 13 23:56:02.620548 ldconfig[1174]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 23:56:02.642436 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:56:02.717309 systemd[1]: Reloading finished in 365 ms.
May 13 23:56:02.738052 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 23:56:02.740235 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 23:56:02.742348 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 23:56:02.744775 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 23:56:02.753500 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 23:56:02.765847 systemd[1]: Starting ensure-sysext.service...
May 13 23:56:02.768072 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 13 23:56:02.773302 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:56:02.800119 systemd[1]: Reload requested from client PID 1266 ('systemctl') (unit ensure-sysext.service)...
May 13 23:56:02.800147 systemd[1]: Reloading...
May 13 23:56:02.820674 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 23:56:02.821002 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 23:56:02.822052 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 23:56:02.822387 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
May 13 23:56:02.822473 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
May 13 23:56:02.842742 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:56:02.842756 systemd-tmpfiles[1268]: Skipping /boot
May 13 23:56:02.860950 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:56:02.861163 systemd-tmpfiles[1268]: Skipping /boot
May 13 23:56:02.865096 zram_generator::config[1298]: No configuration found.
May 13 23:56:03.004107 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:56:03.095901 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 23:56:03.096597 systemd[1]: Reloading finished in 296 ms.
May 13 23:56:03.109529 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 23:56:03.129294 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 13 23:56:03.131130 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:56:03.143958 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:56:03.146821 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 23:56:03.168020 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 23:56:03.173175 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:56:03.176057 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:56:03.179312 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 23:56:03.185382 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:56:03.185588 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:56:03.191354 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:56:03.195711 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:56:03.199232 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:56:03.200619 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:56:03.200835 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:56:03.204151 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 23:56:03.205907 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:56:03.208111 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:56:03.208776 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:56:03.211363 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:56:03.211621 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:56:03.213918 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:56:03.214166 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:56:03.226665 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 23:56:03.230320 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:56:03.230638 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:56:03.231656 systemd-udevd[1343]: Using default interface naming scheme 'v255'.
May 13 23:56:03.233883 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 23:56:03.238438 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 23:56:03.239019 augenrules[1373]: No rules
May 13 23:56:03.243715 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 23:56:03.243997 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 23:56:03.253426 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:56:03.256347 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:56:03.257756 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:56:03.263484 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:56:03.272158 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:56:03.275709 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:56:03.284584 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:56:03.286720 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:56:03.286938 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:56:03.287175 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:56:03.290199 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:56:03.292552 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 23:56:03.294435 augenrules[1380]: /sbin/augenrules: No change
May 13 23:56:03.296628 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 23:56:03.299770 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:56:03.300420 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:56:03.303206 augenrules[1420]: No rules
May 13 23:56:03.302959 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:56:03.303433 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:56:03.306041 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 23:56:03.306381 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 23:56:03.308314 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:56:03.308550 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:56:03.310594 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 23:56:03.312618 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:56:03.312947 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:56:03.323665 systemd[1]: Finished ensure-sysext.service.
May 13 23:56:03.339472 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 13 23:56:03.345228 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:56:03.346540 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:56:03.346596 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:56:03.348883 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 23:56:03.350235 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 23:56:03.378144 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1402)
May 13 23:56:03.428221 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 13 23:56:03.434156 kernel: ACPI: button: Power Button [PWRF]
May 13 23:56:03.443566 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 13 23:56:03.443850 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 13 23:56:03.444038 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 13 23:56:03.444272 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 13 23:56:03.453497 systemd-resolved[1342]: Positive Trust Anchors:
May 13 23:56:03.453903 systemd-resolved[1342]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:56:03.454004 systemd-resolved[1342]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:56:03.459290 systemd-networkd[1441]: lo: Link UP
May 13 23:56:03.459308 systemd-networkd[1441]: lo: Gained carrier
May 13 23:56:03.461190 systemd-networkd[1441]: Enumeration completed
May 13 23:56:03.461605 systemd-networkd[1441]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:56:03.461609 systemd-networkd[1441]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:56:03.462056 systemd-resolved[1342]: Defaulting to hostname 'linux'.
May 13 23:56:03.462326 systemd-networkd[1441]: eth0: Link UP
May 13 23:56:03.462330 systemd-networkd[1441]: eth0: Gained carrier
May 13 23:56:03.462343 systemd-networkd[1441]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:56:03.463502 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 23:56:03.467455 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 13 23:56:03.466810 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:56:03.469418 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 23:56:03.475142 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 13 23:56:03.478206 systemd-networkd[1441]: eth0: DHCPv4 address 10.0.0.86/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 23:56:03.484201 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 23:56:03.485650 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:56:03.487024 systemd[1]: Reached target network.target - Network.
May 13 23:56:03.487508 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 23:56:03.492801 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 13 23:56:03.493738 systemd[1]: Reached target time-set.target - System Time Set.
May 13 23:56:04.856377 systemd-timesyncd[1445]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 13 23:56:04.856447 systemd-timesyncd[1445]: Initial clock synchronization to Tue 2025-05-13 23:56:04.856245 UTC.
May 13 23:56:04.856725 systemd-resolved[1342]: Clock change detected. Flushing caches.
May 13 23:56:04.858210 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 23:56:04.864552 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 13 23:56:04.930142 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:56:04.957104 kernel: mousedev: PS/2 mouse device common for all mice
May 13 23:56:04.969318 kernel: kvm_amd: TSC scaling supported
May 13 23:56:04.969353 kernel: kvm_amd: Nested Virtualization enabled
May 13 23:56:04.969373 kernel: kvm_amd: Nested Paging enabled
May 13 23:56:04.970343 kernel: kvm_amd: LBR virtualization supported
May 13 23:56:04.970366 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 13 23:56:04.971537 kernel: kvm_amd: Virtual GIF supported
May 13 23:56:04.994104 kernel: EDAC MC: Ver: 3.0.0
May 13 23:56:05.027822 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:56:05.039647 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 13 23:56:05.043355 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 13 23:56:05.065953 lvm[1470]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 23:56:05.099978 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 13 23:56:05.101618 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 23:56:05.103072 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 23:56:05.104277 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 13 23:56:05.105575 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 13 23:56:05.107265 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 13 23:56:05.108574 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 13 23:56:05.110116 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 13 23:56:05.111452 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 23:56:05.111481 systemd[1]: Reached target paths.target - Path Units.
May 13 23:56:05.112446 systemd[1]: Reached target timers.target - Timer Units.
May 13 23:56:05.114313 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 13 23:56:05.117275 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 13 23:56:05.121053 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 13 23:56:05.122604 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 13 23:56:05.123979 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 13 23:56:05.129469 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 13 23:56:05.131225 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 13 23:56:05.133848 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 13 23:56:05.135943 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 13 23:56:05.137416 systemd[1]: Reached target sockets.target - Socket Units.
May 13 23:56:05.138622 systemd[1]: Reached target basic.target - Basic System.
May 13 23:56:05.139900 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 13 23:56:05.139930 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 13 23:56:05.141390 systemd[1]: Starting containerd.service - containerd container runtime...
May 13 23:56:05.144061 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 13 23:56:05.146178 lvm[1474]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 23:56:05.148413 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 13 23:56:05.152505 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 13 23:56:05.153844 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 13 23:56:05.155288 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 13 23:56:05.162819 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 13 23:56:05.163455 jq[1477]: false
May 13 23:56:05.166219 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 13 23:56:05.172748 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 13 23:56:05.179669 systemd[1]: Starting systemd-logind.service - User Login Management...
May 13 23:56:05.182089 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 23:56:05.183200 extend-filesystems[1478]: Found loop3
May 13 23:56:05.183200 extend-filesystems[1478]: Found loop4
May 13 23:56:05.183200 extend-filesystems[1478]: Found loop5
May 13 23:56:05.183200 extend-filesystems[1478]: Found sr0
May 13 23:56:05.183200 extend-filesystems[1478]: Found vda
May 13 23:56:05.183200 extend-filesystems[1478]: Found vda1
May 13 23:56:05.183200 extend-filesystems[1478]: Found vda2
May 13 23:56:05.183200 extend-filesystems[1478]: Found vda3
May 13 23:56:05.183200 extend-filesystems[1478]: Found usr
May 13 23:56:05.183200 extend-filesystems[1478]: Found vda4
May 13 23:56:05.183200 extend-filesystems[1478]: Found vda6
May 13 23:56:05.183200 extend-filesystems[1478]: Found vda7
May 13 23:56:05.183200 extend-filesystems[1478]: Found vda9
May 13 23:56:05.183200 extend-filesystems[1478]: Checking size of /dev/vda9
May 13 23:56:05.182712 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 13 23:56:05.187704 dbus-daemon[1476]: [system] SELinux support is enabled
May 13 23:56:05.184139 systemd[1]: Starting update-engine.service - Update Engine...
May 13 23:56:05.193174 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 13 23:56:05.196271 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 13 23:56:05.202800 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 13 23:56:05.204821 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 23:56:05.205236 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 13 23:56:05.205665 systemd[1]: motdgen.service: Deactivated successfully.
May 13 23:56:05.206076 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 13 23:56:05.206533 update_engine[1491]: I20250513 23:56:05.206429  1491 main.cc:92] Flatcar Update Engine starting
May 13 23:56:05.208515 update_engine[1491]: I20250513 23:56:05.208408  1491 update_check_scheduler.cc:74] Next update check in 10m52s
May 13 23:56:05.210065 extend-filesystems[1478]: Resized partition /dev/vda9
May 13 23:56:05.213843 extend-filesystems[1499]: resize2fs 1.47.2 (1-Jan-2025)
May 13 23:56:05.226429 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 13 23:56:05.226467 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1392)
May 13 23:56:05.218016 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 23:56:05.226630 jq[1493]: true
May 13 23:56:05.218399 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 13 23:56:05.249152 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 13 23:56:05.259125 jq[1506]: true
May 13 23:56:05.265615 (ntainerd)[1508]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 13 23:56:05.282062 tar[1498]: linux-amd64/helm
May 13 23:56:05.286151 extend-filesystems[1499]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 13 23:56:05.286151 extend-filesystems[1499]: old_desc_blocks = 1, new_desc_blocks = 1
May 13 23:56:05.286151 extend-filesystems[1499]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 13 23:56:05.285753 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 13 23:56:05.292216 extend-filesystems[1478]: Resized filesystem in /dev/vda9
May 13 23:56:05.286089 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 13 23:56:05.296306 systemd[1]: Started update-engine.service - Update Engine.
May 13 23:56:05.302103 systemd-logind[1488]: Watching system buttons on /dev/input/event1 (Power Button)
May 13 23:56:05.302150 systemd-logind[1488]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 13 23:56:05.302559 systemd-logind[1488]: New seat seat0.
May 13 23:56:05.308942 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 23:56:05.308980 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 13 23:56:05.311259 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 23:56:05.311281 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 13 23:56:05.319472 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 13 23:56:05.321796 systemd[1]: Started systemd-logind.service - User Login Management.
May 13 23:56:05.369392 bash[1533]: Updated "/home/core/.ssh/authorized_keys"
May 13 23:56:05.370799 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 13 23:56:05.374328 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 13 23:56:05.384549 locksmithd[1523]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 23:56:05.405615 sshd_keygen[1501]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 13 23:56:05.435830 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 13 23:56:05.439198 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 13 23:56:05.463922 systemd[1]: issuegen.service: Deactivated successfully.
May 13 23:56:05.464221 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 13 23:56:05.467453 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 13 23:56:05.492214 containerd[1508]: time="2025-05-13T23:56:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 13 23:56:05.493363 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 13 23:56:05.493917 containerd[1508]: time="2025-05-13T23:56:05.493870109Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
May 13 23:56:05.498512 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 13 23:56:05.501091 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 13 23:56:05.503061 systemd[1]: Reached target getty.target - Login Prompts.
May 13 23:56:05.504252 containerd[1508]: time="2025-05-13T23:56:05.504212455Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.145µs"
May 13 23:56:05.504318 containerd[1508]: time="2025-05-13T23:56:05.504304628Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 13 23:56:05.504380 containerd[1508]: time="2025-05-13T23:56:05.504367917Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 13 23:56:05.504623 containerd[1508]: time="2025-05-13T23:56:05.504605563Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 13 23:56:05.504680 containerd[1508]: time="2025-05-13T23:56:05.504667960Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 13 23:56:05.504752 containerd[1508]: time="2025-05-13T23:56:05.504739815Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 23:56:05.504877 containerd[1508]: time="2025-05-13T23:56:05.504860652Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 23:56:05.504925 containerd[1508]: time="2025-05-13T23:56:05.504914342Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 23:56:05.505283 containerd[1508]: time="2025-05-13T23:56:05.505263988Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 23:56:05.505348 containerd[1508]: time="2025-05-13T23:56:05.505335603Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 23:56:05.505395 containerd[1508]: time="2025-05-13T23:56:05.505383413Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 23:56:05.505436 containerd[1508]: time="2025-05-13T23:56:05.505425762Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 13 23:56:05.505584 containerd[1508]: time="2025-05-13T23:56:05.505567368Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 13 23:56:05.505911 containerd[1508]: time="2025-05-13T23:56:05.505893440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 23:56:05.505994 containerd[1508]: time="2025-05-13T23:56:05.505978790Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 23:56:05.506042 containerd[1508]: time="2025-05-13T23:56:05.506030988Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 13 23:56:05.506130 containerd[1508]: time="2025-05-13T23:56:05.506116198Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 13 23:56:05.506509 containerd[1508]: time="2025-05-13T23:56:05.506478958Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 13 23:56:05.506631 containerd[1508]: time="2025-05-13T23:56:05.506617178Z" level=info msg="metadata content store policy set" policy=shared
May 13 23:56:05.515830 containerd[1508]: time="2025-05-13T23:56:05.515802303Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 13 23:56:05.516015 containerd[1508]: time="2025-05-13T23:56:05.515922669Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 13 23:56:05.516146 containerd[1508]: time="2025-05-13T23:56:05.516050158Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 13 23:56:05.516146 containerd[1508]: time="2025-05-13T23:56:05.516076908Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 13 23:56:05.516146 containerd[1508]: time="2025-05-13T23:56:05.516131831Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 13 23:56:05.516146 containerd[1508]: time="2025-05-13T23:56:05.516142662Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 13 23:56:05.516238 containerd[1508]: time="2025-05-13T23:56:05.516155936Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 13 23:56:05.516238 containerd[1508]: time="2025-05-13T23:56:05.516169592Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 13 23:56:05.516238 containerd[1508]: time="2025-05-13T23:56:05.516181274Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 13 23:56:05.516238 containerd[1508]: time="2025-05-13T23:56:05.516192275Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 13 23:56:05.516238 containerd[1508]: time="2025-05-13T23:56:05.516203456Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 13 23:56:05.516238 containerd[1508]: time="2025-05-13T23:56:05.516216420Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 13 23:56:05.516409 containerd[1508]: time="2025-05-13T23:56:05.516377993Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 13 23:56:05.516409 containerd[1508]: time="2025-05-13T23:56:05.516403671Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 13 23:56:05.516453 containerd[1508]: time="2025-05-13T23:56:05.516416946Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 13 23:56:05.516453 containerd[1508]: time="2025-05-13T23:56:05.516428618Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 13 23:56:05.516490 containerd[1508]: time="2025-05-13T23:56:05.516455899Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 13 23:56:05.516490 containerd[1508]: time="2025-05-13T23:56:05.516471328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 13 23:56:05.516490 containerd[1508]: time="2025-05-13T23:56:05.516488520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 13 23:56:05.516560 containerd[1508]: time="2025-05-13T23:56:05.516499892Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 13 23:56:05.516560 containerd[1508]: time="2025-05-13T23:56:05.516512335Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 13 23:56:05.516560 containerd[1508]: time="2025-05-13T23:56:05.516523676Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 13 23:56:05.516560 containerd[1508]: time="2025-05-13T23:56:05.516534407Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 13 23:56:05.516636 containerd[1508]: time="2025-05-13T23:56:05.516605280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 13 23:56:05.516636 containerd[1508]: time="2025-05-13T23:56:05.516618294Z" level=info msg="Start snapshots syncer"
May 13 23:56:05.516675 containerd[1508]: time="2025-05-13T23:56:05.516651346Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 13 23:56:05.517250 containerd[1508]: time="2025-05-13T23:56:05.516900463Z" level=info msg="starting cri plugin"
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 23:56:05.517250 containerd[1508]: time="2025-05-13T23:56:05.516967479Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 13 23:56:05.517391 containerd[1508]: time="2025-05-13T23:56:05.517033824Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 23:56:05.517391 containerd[1508]: time="2025-05-13T23:56:05.517171222Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 23:56:05.517391 containerd[1508]: time="2025-05-13T23:56:05.517203712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 23:56:05.517391 containerd[1508]: time="2025-05-13T23:56:05.517215495Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 23:56:05.517391 containerd[1508]: time="2025-05-13T23:56:05.517225363Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 23:56:05.517391 containerd[1508]: time="2025-05-13T23:56:05.517255650Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 23:56:05.517391 containerd[1508]: time="2025-05-13T23:56:05.517267021Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 23:56:05.517391 containerd[1508]: time="2025-05-13T23:56:05.517277511Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 23:56:05.517391 containerd[1508]: time="2025-05-13T23:56:05.517304912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 23:56:05.517391 containerd[1508]: time="2025-05-13T23:56:05.517320231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 23:56:05.517391 containerd[1508]: time="2025-05-13T23:56:05.517330460Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 23:56:05.517391 containerd[1508]: time="2025-05-13T23:56:05.517372539Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:56:05.517391 containerd[1508]: time="2025-05-13T23:56:05.517389962Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:56:05.517391 containerd[1508]: time="2025-05-13T23:56:05.517400041Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:56:05.517645 containerd[1508]: time="2025-05-13T23:56:05.517412264Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:56:05.517645 containerd[1508]: time="2025-05-13T23:56:05.517421371Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 23:56:05.517645 containerd[1508]: time="2025-05-13T23:56:05.517432432Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 23:56:05.517645 containerd[1508]: time="2025-05-13T23:56:05.517443803Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 23:56:05.517645 containerd[1508]: time="2025-05-13T23:56:05.517456667Z" level=info msg="runtime interface created" May 13 23:56:05.517645 containerd[1508]: time="2025-05-13T23:56:05.517462298Z" level=info msg="created NRI interface" May 13 23:56:05.517645 containerd[1508]: time="2025-05-13T23:56:05.517472517Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 23:56:05.517645 containerd[1508]: time="2025-05-13T23:56:05.517483878Z" level=info msg="Connect containerd service" May 13 23:56:05.517645 containerd[1508]: time="2025-05-13T23:56:05.517514055Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 23:56:05.519981 
containerd[1508]: time="2025-05-13T23:56:05.519931621Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:56:05.619148 containerd[1508]: time="2025-05-13T23:56:05.619071951Z" level=info msg="Start subscribing containerd event" May 13 23:56:05.619148 containerd[1508]: time="2025-05-13T23:56:05.619150398Z" level=info msg="Start recovering state" May 13 23:56:05.619297 containerd[1508]: time="2025-05-13T23:56:05.619259052Z" level=info msg="Start event monitor" May 13 23:56:05.619297 containerd[1508]: time="2025-05-13T23:56:05.619272758Z" level=info msg="Start cni network conf syncer for default" May 13 23:56:05.619297 containerd[1508]: time="2025-05-13T23:56:05.619281033Z" level=info msg="Start streaming server" May 13 23:56:05.619297 containerd[1508]: time="2025-05-13T23:56:05.619297544Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 23:56:05.619419 containerd[1508]: time="2025-05-13T23:56:05.619307002Z" level=info msg="runtime interface starting up..." May 13 23:56:05.619419 containerd[1508]: time="2025-05-13T23:56:05.619314967Z" level=info msg="starting plugins..." May 13 23:56:05.619419 containerd[1508]: time="2025-05-13T23:56:05.619322932Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 23:56:05.619419 containerd[1508]: time="2025-05-13T23:56:05.619417249Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 23:56:05.619520 containerd[1508]: time="2025-05-13T23:56:05.619330737Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 23:56:05.619724 containerd[1508]: time="2025-05-13T23:56:05.619593470Z" level=info msg="containerd successfully booted in 0.127980s" May 13 23:56:05.619777 systemd[1]: Started containerd.service - containerd container runtime. 
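The `failed to load cni during init` error above is containerd's CRI plugin reporting that /etc/cni/net.d is empty at startup; the "Start cni network conf syncer for default" line shows it will pick up any config dropped there later. As an illustration only (the network name, bridge plugin, and subnet below are assumptions, not taken from this log), a minimal conflist of the shape the loader expects looks like:

```json
{
  "cniVersion": "1.0.0",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.244.0.0/24" }]]
      }
    }
  ]
}
```

On a kubeadm-managed node this file is normally installed by the chosen CNI add-on rather than written by hand, which is why the error is expected at this point in boot.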
May 13 23:56:05.694616 tar[1498]: linux-amd64/LICENSE
May 13 23:56:05.694760 tar[1498]: linux-amd64/README.md
May 13 23:56:05.716802 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 13 23:56:06.735351 systemd-networkd[1441]: eth0: Gained IPv6LL
May 13 23:56:06.739228 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 13 23:56:06.741961 systemd[1]: Reached target network-online.target - Network is Online.
May 13 23:56:06.745411 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 13 23:56:06.748434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:56:06.751276 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 13 23:56:06.785914 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 13 23:56:06.808459 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 13 23:56:06.808847 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 13 23:56:06.810880 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 13 23:56:07.467119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:56:07.469154 systemd[1]: Reached target multi-user.target - Multi-User System.
May 13 23:56:07.470599 systemd[1]: Startup finished in 814ms (kernel) + 6.333s (initrd) + 5.104s (userspace) = 12.252s.
May 13 23:56:07.480665 (kubelet)[1602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:56:07.903521 kubelet[1602]: E0513 23:56:07.903404 1602 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:56:07.907863 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:56:07.908094 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:56:07.908472 systemd[1]: kubelet.service: Consumed 987ms CPU time, 238.1M memory peak.
May 13 23:56:09.624958 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 13 23:56:09.626577 systemd[1]: Started sshd@0-10.0.0.86:22-10.0.0.1:43774.service - OpenSSH per-connection server daemon (10.0.0.1:43774).
May 13 23:56:09.692093 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 43774 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:56:09.694207 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:56:09.705754 systemd-logind[1488]: New session 1 of user core.
May 13 23:56:09.707047 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 13 23:56:09.708341 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 13 23:56:09.735303 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 13 23:56:09.738134 systemd[1]: Starting user@500.service - User Manager for UID 500...
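The kubelet failure logged above repeats until /var/lib/kubelet/config.yaml exists; kubeadm writes that file during `kubeadm init` or `kubeadm join`, so this crash loop (systemd restarts the unit again later in the log) is expected on a node that has not yet joined a cluster. A minimal sketch of the same pre-flight check, assuming only the path taken from the error message:

```shell
#!/bin/sh
# Reproduce the check that makes kubelet exit with status 1 above:
# it refuses to start without the kubeadm-generated config file.
config=/var/lib/kubelet/config.yaml
if [ -f "$config" ]; then
    echo "kubelet config present: $config"
else
    echo "kubelet config missing: $config (run kubeadm init or kubeadm join first)"
fi
```

Once kubeadm has generated the file, `systemctl restart kubelet` (or the scheduled restart seen later in the log) picks it up.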
May 13 23:56:09.751654 (systemd)[1619]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 13 23:56:09.754165 systemd-logind[1488]: New session c1 of user core.
May 13 23:56:09.926298 systemd[1619]: Queued start job for default target default.target.
May 13 23:56:09.938555 systemd[1619]: Created slice app.slice - User Application Slice.
May 13 23:56:09.938584 systemd[1619]: Reached target paths.target - Paths.
May 13 23:56:09.938638 systemd[1619]: Reached target timers.target - Timers.
May 13 23:56:09.940312 systemd[1619]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 13 23:56:09.951756 systemd[1619]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 13 23:56:09.951883 systemd[1619]: Reached target sockets.target - Sockets.
May 13 23:56:09.951926 systemd[1619]: Reached target basic.target - Basic System.
May 13 23:56:09.951969 systemd[1619]: Reached target default.target - Main User Target.
May 13 23:56:09.952005 systemd[1619]: Startup finished in 190ms.
May 13 23:56:09.952324 systemd[1]: Started user@500.service - User Manager for UID 500.
May 13 23:56:09.954010 systemd[1]: Started session-1.scope - Session 1 of User core.
May 13 23:56:10.022685 systemd[1]: Started sshd@1-10.0.0.86:22-10.0.0.1:43784.service - OpenSSH per-connection server daemon (10.0.0.1:43784).
May 13 23:56:10.076194 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 43784 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:56:10.078289 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:56:10.083038 systemd-logind[1488]: New session 2 of user core.
May 13 23:56:10.092225 systemd[1]: Started session-2.scope - Session 2 of User core.
May 13 23:56:10.146225 sshd[1632]: Connection closed by 10.0.0.1 port 43784
May 13 23:56:10.146546 sshd-session[1630]: pam_unix(sshd:session): session closed for user core
May 13 23:56:10.156907 systemd[1]: sshd@1-10.0.0.86:22-10.0.0.1:43784.service: Deactivated successfully.
May 13 23:56:10.158641 systemd[1]: session-2.scope: Deactivated successfully.
May 13 23:56:10.160054 systemd-logind[1488]: Session 2 logged out. Waiting for processes to exit.
May 13 23:56:10.161376 systemd[1]: Started sshd@2-10.0.0.86:22-10.0.0.1:43788.service - OpenSSH per-connection server daemon (10.0.0.1:43788).
May 13 23:56:10.162254 systemd-logind[1488]: Removed session 2.
May 13 23:56:10.212465 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 43788 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:56:10.214273 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:56:10.218906 systemd-logind[1488]: New session 3 of user core.
May 13 23:56:10.230258 systemd[1]: Started session-3.scope - Session 3 of User core.
May 13 23:56:10.279694 sshd[1640]: Connection closed by 10.0.0.1 port 43788
May 13 23:56:10.280316 sshd-session[1637]: pam_unix(sshd:session): session closed for user core
May 13 23:56:10.288979 systemd[1]: sshd@2-10.0.0.86:22-10.0.0.1:43788.service: Deactivated successfully.
May 13 23:56:10.290707 systemd[1]: session-3.scope: Deactivated successfully.
May 13 23:56:10.292421 systemd-logind[1488]: Session 3 logged out. Waiting for processes to exit.
May 13 23:56:10.293847 systemd[1]: Started sshd@3-10.0.0.86:22-10.0.0.1:43804.service - OpenSSH per-connection server daemon (10.0.0.1:43804).
May 13 23:56:10.294718 systemd-logind[1488]: Removed session 3.
May 13 23:56:10.345791 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 43804 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:56:10.348366 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:56:10.362058 systemd-logind[1488]: New session 4 of user core.
May 13 23:56:10.375417 systemd[1]: Started session-4.scope - Session 4 of User core.
May 13 23:56:10.447273 sshd[1648]: Connection closed by 10.0.0.1 port 43804
May 13 23:56:10.449010 sshd-session[1645]: pam_unix(sshd:session): session closed for user core
May 13 23:56:10.461887 systemd[1]: Started sshd@4-10.0.0.86:22-10.0.0.1:43810.service - OpenSSH per-connection server daemon (10.0.0.1:43810).
May 13 23:56:10.462948 systemd[1]: sshd@3-10.0.0.86:22-10.0.0.1:43804.service: Deactivated successfully.
May 13 23:56:10.465444 systemd[1]: session-4.scope: Deactivated successfully.
May 13 23:56:10.468394 systemd-logind[1488]: Session 4 logged out. Waiting for processes to exit.
May 13 23:56:10.475912 systemd-logind[1488]: Removed session 4.
May 13 23:56:10.521525 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 43810 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:56:10.523316 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:56:10.528710 systemd-logind[1488]: New session 5 of user core.
May 13 23:56:10.540361 systemd[1]: Started session-5.scope - Session 5 of User core.
May 13 23:56:10.613310 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 13 23:56:10.613842 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:56:10.642386 sudo[1657]: pam_unix(sudo:session): session closed for user root
May 13 23:56:10.644473 sshd[1656]: Connection closed by 10.0.0.1 port 43810
May 13 23:56:10.644949 sshd-session[1651]: pam_unix(sshd:session): session closed for user core
May 13 23:56:10.666645 systemd[1]: sshd@4-10.0.0.86:22-10.0.0.1:43810.service: Deactivated successfully.
May 13 23:56:10.669173 systemd[1]: session-5.scope: Deactivated successfully.
May 13 23:56:10.671698 systemd-logind[1488]: Session 5 logged out. Waiting for processes to exit.
May 13 23:56:10.673762 systemd[1]: Started sshd@5-10.0.0.86:22-10.0.0.1:43814.service - OpenSSH per-connection server daemon (10.0.0.1:43814).
May 13 23:56:10.674954 systemd-logind[1488]: Removed session 5.
May 13 23:56:10.744761 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 43814 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:56:10.747267 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:56:10.752532 systemd-logind[1488]: New session 6 of user core.
May 13 23:56:10.766361 systemd[1]: Started session-6.scope - Session 6 of User core.
May 13 23:56:10.824425 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 13 23:56:10.824885 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:56:10.830477 sudo[1667]: pam_unix(sudo:session): session closed for user root
May 13 23:56:10.839311 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 13 23:56:10.839804 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:56:10.852233 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:56:10.905168 augenrules[1689]: No rules
May 13 23:56:10.907583 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 23:56:10.907995 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 23:56:10.909628 sudo[1666]: pam_unix(sudo:session): session closed for user root
May 13 23:56:10.911402 sshd[1665]: Connection closed by 10.0.0.1 port 43814
May 13 23:56:10.911828 sshd-session[1662]: pam_unix(sshd:session): session closed for user core
May 13 23:56:10.921618 systemd[1]: sshd@5-10.0.0.86:22-10.0.0.1:43814.service: Deactivated successfully.
May 13 23:56:10.923822 systemd[1]: session-6.scope: Deactivated successfully.
May 13 23:56:10.925970 systemd-logind[1488]: Session 6 logged out. Waiting for processes to exit.
May 13 23:56:10.927400 systemd[1]: Started sshd@6-10.0.0.86:22-10.0.0.1:43816.service - OpenSSH per-connection server daemon (10.0.0.1:43816).
May 13 23:56:10.928501 systemd-logind[1488]: Removed session 6.
May 13 23:56:10.986806 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 43816 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:56:10.988799 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:56:10.993638 systemd-logind[1488]: New session 7 of user core.
May 13 23:56:11.004244 systemd[1]: Started session-7.scope - Session 7 of User core.
May 13 23:56:11.062459 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 13 23:56:11.062892 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:56:11.404232 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 13 23:56:11.418691 (dockerd)[1722]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 13 23:56:11.732828 dockerd[1722]: time="2025-05-13T23:56:11.732645924Z" level=info msg="Starting up"
May 13 23:56:11.735392 dockerd[1722]: time="2025-05-13T23:56:11.735341402Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 13 23:56:12.386002 dockerd[1722]: time="2025-05-13T23:56:12.385944953Z" level=info msg="Loading containers: start."
May 13 23:56:12.577122 kernel: Initializing XFRM netlink socket
May 13 23:56:12.663726 systemd-networkd[1441]: docker0: Link UP
May 13 23:56:12.741896 dockerd[1722]: time="2025-05-13T23:56:12.741823032Z" level=info msg="Loading containers: done."
May 13 23:56:12.763425 dockerd[1722]: time="2025-05-13T23:56:12.763346540Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 13 23:56:12.763707 dockerd[1722]: time="2025-05-13T23:56:12.763468408Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1
May 13 23:56:12.763707 dockerd[1722]: time="2025-05-13T23:56:12.763646082Z" level=info msg="Daemon has completed initialization"
May 13 23:56:12.813305 dockerd[1722]: time="2025-05-13T23:56:12.813202421Z" level=info msg="API listen on /run/docker.sock"
May 13 23:56:12.813474 systemd[1]: Started docker.service - Docker Application Container Engine.
May 13 23:56:13.952602 containerd[1508]: time="2025-05-13T23:56:13.952539014Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
May 13 23:56:14.648118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1496607774.mount: Deactivated successfully.
May 13 23:56:16.302801 containerd[1508]: time="2025-05-13T23:56:16.302725871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:56:16.303698 containerd[1508]: time="2025-05-13T23:56:16.303616903Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987"
May 13 23:56:16.304889 containerd[1508]: time="2025-05-13T23:56:16.304852271Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:56:16.308342 containerd[1508]: time="2025-05-13T23:56:16.308299860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:56:16.309670 containerd[1508]: time="2025-05-13T23:56:16.309594600Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 2.357008327s"
May 13 23:56:16.309670 containerd[1508]: time="2025-05-13T23:56:16.309667206Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\""
May 13 23:56:16.311484 containerd[1508]: time="2025-05-13T23:56:16.311440995Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
May 13 23:56:17.819882 containerd[1508]: time="2025-05-13T23:56:17.819734469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:56:17.827489 containerd[1508]: time="2025-05-13T23:56:17.827413358Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776"
May 13 23:56:17.833071 containerd[1508]: time="2025-05-13T23:56:17.832989932Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:56:17.841337 containerd[1508]: time="2025-05-13T23:56:17.841224093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:56:17.842387 containerd[1508]: time="2025-05-13T23:56:17.842322243Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 1.530834341s"
May 13 23:56:17.842387 containerd[1508]: time="2025-05-13T23:56:17.842380944Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\""
May 13 23:56:17.843070 containerd[1508]: time="2025-05-13T23:56:17.843026144Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 13 23:56:18.158750 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 13 23:56:18.161018 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:56:18.477239 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:56:18.494587 (kubelet)[1998]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:56:18.822873 kubelet[1998]: E0513 23:56:18.822673 1998 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:56:18.829773 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:56:18.830006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:56:18.830460 systemd[1]: kubelet.service: Consumed 634ms CPU time, 97.6M memory peak.
May 13 23:56:21.261816 containerd[1508]: time="2025-05-13T23:56:21.261755815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:56:21.264262 containerd[1508]: time="2025-05-13T23:56:21.264208778Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386"
May 13 23:56:21.265860 containerd[1508]: time="2025-05-13T23:56:21.265813990Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:56:21.276654 containerd[1508]: time="2025-05-13T23:56:21.276608796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:56:21.277696 containerd[1508]: time="2025-05-13T23:56:21.277623019Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 3.434548093s"
May 13 23:56:21.277696 containerd[1508]: time="2025-05-13T23:56:21.277678563Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\""
May 13 23:56:21.278312 containerd[1508]: time="2025-05-13T23:56:21.278280282Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 13 23:56:23.125286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3246045479.mount: Deactivated successfully.
May 13 23:56:24.277931 containerd[1508]: time="2025-05-13T23:56:24.277815234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:56:24.284225 containerd[1508]: time="2025-05-13T23:56:24.284124433Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625"
May 13 23:56:24.286549 containerd[1508]: time="2025-05-13T23:56:24.286466919Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:56:24.289452 containerd[1508]: time="2025-05-13T23:56:24.289342995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:56:24.290329 containerd[1508]: time="2025-05-13T23:56:24.290256189Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 3.011942183s"
May 13 23:56:24.290329 containerd[1508]: time="2025-05-13T23:56:24.290313456Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\""
May 13 23:56:24.290930 containerd[1508]: time="2025-05-13T23:56:24.290889317Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 13 23:56:24.874281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount712887394.mount: Deactivated successfully.
May 13 23:56:25.683506 containerd[1508]: time="2025-05-13T23:56:25.683416354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:56:25.685365 containerd[1508]: time="2025-05-13T23:56:25.685248101Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
May 13 23:56:25.686918 containerd[1508]: time="2025-05-13T23:56:25.686856028Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:56:25.690603 containerd[1508]: time="2025-05-13T23:56:25.690544379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:56:25.692260 containerd[1508]: time="2025-05-13T23:56:25.692186240Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.401252861s"
May 13 23:56:25.692260 containerd[1508]: time="2025-05-13T23:56:25.692252204Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 13 23:56:25.693053 containerd[1508]: time="2025-05-13T23:56:25.693016678Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 13 23:56:27.057760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2063228210.mount: Deactivated successfully.
May 13 23:56:27.070295 containerd[1508]: time="2025-05-13T23:56:27.070204152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 23:56:27.072255 containerd[1508]: time="2025-05-13T23:56:27.072157056Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 13 23:56:27.075942 containerd[1508]: time="2025-05-13T23:56:27.075855546Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 23:56:27.082951 containerd[1508]: time="2025-05-13T23:56:27.082837287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 23:56:27.083419 containerd[1508]: time="2025-05-13T23:56:27.083363194Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.390308795s" May 13 23:56:27.083419 containerd[1508]: time="2025-05-13T23:56:27.083408258Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 13 23:56:27.084948 containerd[1508]: time="2025-05-13T23:56:27.084012532Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 13 23:56:27.957866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount392164054.mount: Deactivated successfully. May 13 23:56:29.080742 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 23:56:29.083117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:56:29.388917 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:56:29.407693 (kubelet)[2083]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:56:29.840755 kubelet[2083]: E0513 23:56:29.840584 2083 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:56:29.844565 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:56:29.844779 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:56:29.845422 systemd[1]: kubelet.service: Consumed 233ms CPU time, 95.9M memory peak. 
May 13 23:56:31.686696 containerd[1508]: time="2025-05-13T23:56:31.686606630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:56:31.691165 containerd[1508]: time="2025-05-13T23:56:31.690550090Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 13 23:56:31.693611 containerd[1508]: time="2025-05-13T23:56:31.693477643Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:56:31.699261 containerd[1508]: time="2025-05-13T23:56:31.699129809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:56:31.700797 containerd[1508]: time="2025-05-13T23:56:31.700672013Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.615665966s" May 13 23:56:31.700797 containerd[1508]: time="2025-05-13T23:56:31.700743847Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 13 23:56:34.553295 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:56:34.553575 systemd[1]: kubelet.service: Consumed 233ms CPU time, 95.9M memory peak. May 13 23:56:34.558848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:56:34.602792 systemd[1]: Reload requested from client PID 2168 ('systemctl') (unit session-7.scope)... 
May 13 23:56:34.602816 systemd[1]: Reloading... May 13 23:56:34.766288 zram_generator::config[2218]: No configuration found. May 13 23:56:35.190747 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:56:35.321859 systemd[1]: Reloading finished in 718 ms. May 13 23:56:35.395272 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 23:56:35.395395 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 23:56:35.395755 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:56:35.395804 systemd[1]: kubelet.service: Consumed 185ms CPU time, 83.6M memory peak. May 13 23:56:35.400283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:56:35.649188 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:56:35.673234 (kubelet)[2261]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:56:35.774782 kubelet[2261]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:56:35.774782 kubelet[2261]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 23:56:35.774782 kubelet[2261]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 13 23:56:35.775326 kubelet[2261]: I0513 23:56:35.774844 2261 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:56:36.459588 kubelet[2261]: I0513 23:56:36.459509 2261 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 23:56:36.459777 kubelet[2261]: I0513 23:56:36.459707 2261 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:56:36.460871 kubelet[2261]: I0513 23:56:36.460830 2261 server.go:929] "Client rotation is on, will bootstrap in background" May 13 23:56:36.509114 kubelet[2261]: I0513 23:56:36.508817 2261 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:56:36.509517 kubelet[2261]: E0513 23:56:36.509469 2261 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.86:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" May 13 23:56:36.526643 kubelet[2261]: I0513 23:56:36.526588 2261 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 23:56:36.539958 kubelet[2261]: I0513 23:56:36.539891 2261 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 23:56:36.541536 kubelet[2261]: I0513 23:56:36.541471 2261 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 23:56:36.541720 kubelet[2261]: I0513 23:56:36.541671 2261 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:56:36.541949 kubelet[2261]: I0513 23:56:36.541701 2261 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} May 13 23:56:36.541949 kubelet[2261]: I0513 23:56:36.541929 2261 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:56:36.541949 kubelet[2261]: I0513 23:56:36.541942 2261 container_manager_linux.go:300] "Creating device plugin manager" May 13 23:56:36.542216 kubelet[2261]: I0513 23:56:36.542135 2261 state_mem.go:36] "Initialized new in-memory state store" May 13 23:56:36.558001 kubelet[2261]: I0513 23:56:36.552469 2261 kubelet.go:408] "Attempting to sync node with API server" May 13 23:56:36.558001 kubelet[2261]: I0513 23:56:36.552535 2261 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:56:36.558001 kubelet[2261]: I0513 23:56:36.552593 2261 kubelet.go:314] "Adding apiserver pod source" May 13 23:56:36.558001 kubelet[2261]: I0513 23:56:36.552627 2261 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:56:36.561389 kubelet[2261]: W0513 23:56:36.561224 2261 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused May 13 23:56:36.561389 kubelet[2261]: E0513 23:56:36.561321 2261 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" May 13 23:56:36.566626 kubelet[2261]: W0513 23:56:36.563478 2261 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused May 13 23:56:36.566626 kubelet[2261]: E0513 
23:56:36.563565 2261 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" May 13 23:56:36.572842 kubelet[2261]: I0513 23:56:36.572783 2261 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:56:36.575232 kubelet[2261]: I0513 23:56:36.575174 2261 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:56:36.575949 kubelet[2261]: W0513 23:56:36.575902 2261 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 23:56:36.576864 kubelet[2261]: I0513 23:56:36.576828 2261 server.go:1269] "Started kubelet" May 13 23:56:36.576956 kubelet[2261]: I0513 23:56:36.576903 2261 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:56:36.579532 kubelet[2261]: I0513 23:56:36.578357 2261 server.go:460] "Adding debug handlers to kubelet server" May 13 23:56:36.579532 kubelet[2261]: I0513 23:56:36.578433 2261 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:56:36.579532 kubelet[2261]: I0513 23:56:36.578860 2261 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:56:36.579944 kubelet[2261]: I0513 23:56:36.579919 2261 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:56:36.580770 kubelet[2261]: I0513 23:56:36.580029 2261 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 23:56:36.581519 kubelet[2261]: E0513 23:56:36.581469 
2261 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:56:36.581581 kubelet[2261]: I0513 23:56:36.581535 2261 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 23:56:36.582314 kubelet[2261]: I0513 23:56:36.582056 2261 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 23:56:36.582314 kubelet[2261]: I0513 23:56:36.582190 2261 reconciler.go:26] "Reconciler: start to sync state" May 13 23:56:36.582854 kubelet[2261]: W0513 23:56:36.582664 2261 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused May 13 23:56:36.582854 kubelet[2261]: E0513 23:56:36.582749 2261 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" May 13 23:56:36.584807 kubelet[2261]: E0513 23:56:36.583810 2261 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:56:36.584807 kubelet[2261]: I0513 23:56:36.583955 2261 factory.go:221] Registration of the systemd container factory successfully May 13 23:56:36.584807 kubelet[2261]: I0513 23:56:36.584180 2261 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:56:36.584807 kubelet[2261]: E0513 23:56:36.584360 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="200ms" May 13 23:56:36.588655 kubelet[2261]: I0513 23:56:36.588614 2261 factory.go:221] Registration of the containerd container factory successfully May 13 23:56:36.592362 kubelet[2261]: E0513 23:56:36.588153 2261 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.86:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.86:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3b7ee16a385d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 23:56:36.576794717 +0000 UTC m=+0.896012404,LastTimestamp:2025-05-13 23:56:36.576794717 +0000 UTC m=+0.896012404,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 23:56:36.614990 kubelet[2261]: I0513 23:56:36.614950 2261 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 23:56:36.614990 kubelet[2261]: I0513 
23:56:36.614975 2261 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 23:56:36.614990 kubelet[2261]: I0513 23:56:36.615003 2261 state_mem.go:36] "Initialized new in-memory state store" May 13 23:56:36.622132 kubelet[2261]: I0513 23:56:36.621214 2261 policy_none.go:49] "None policy: Start" May 13 23:56:36.622285 kubelet[2261]: I0513 23:56:36.622010 2261 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 23:56:36.622285 kubelet[2261]: I0513 23:56:36.622256 2261 state_mem.go:35] "Initializing new in-memory state store" May 13 23:56:36.640195 kubelet[2261]: I0513 23:56:36.640118 2261 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:56:36.643040 kubelet[2261]: I0513 23:56:36.643000 2261 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 23:56:36.643040 kubelet[2261]: I0513 23:56:36.643043 2261 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 23:56:36.643192 kubelet[2261]: I0513 23:56:36.643096 2261 kubelet.go:2321] "Starting kubelet main sync loop" May 13 23:56:36.643192 kubelet[2261]: E0513 23:56:36.643149 2261 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:56:36.644212 kubelet[2261]: W0513 23:56:36.643982 2261 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused May 13 23:56:36.644212 kubelet[2261]: E0513 23:56:36.644048 2261 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" 
logger="UnhandledError" May 13 23:56:36.658202 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 23:56:36.671967 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 23:56:36.680191 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 23:56:36.684237 kubelet[2261]: E0513 23:56:36.681593 2261 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:56:36.694698 kubelet[2261]: I0513 23:56:36.694645 2261 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:56:36.694961 kubelet[2261]: I0513 23:56:36.694935 2261 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:56:36.695012 kubelet[2261]: I0513 23:56:36.694954 2261 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:56:36.695750 kubelet[2261]: I0513 23:56:36.695731 2261 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:56:36.696645 kubelet[2261]: E0513 23:56:36.696599 2261 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 23:56:36.783879 systemd[1]: Created slice kubepods-burstable-podf55411009f5a2c040023cb584c788032.slice - libcontainer container kubepods-burstable-podf55411009f5a2c040023cb584c788032.slice. 
May 13 23:56:36.784909 kubelet[2261]: E0513 23:56:36.784871 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="400ms" May 13 23:56:36.797798 kubelet[2261]: I0513 23:56:36.797735 2261 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 23:56:36.798219 kubelet[2261]: E0513 23:56:36.798179 2261 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.86:6443/api/v1/nodes\": dial tcp 10.0.0.86:6443: connect: connection refused" node="localhost" May 13 23:56:36.816921 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 13 23:56:36.834456 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. 
May 13 23:56:36.883956 kubelet[2261]: I0513 23:56:36.883863 2261 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f55411009f5a2c040023cb584c788032-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f55411009f5a2c040023cb584c788032\") " pod="kube-system/kube-apiserver-localhost" May 13 23:56:36.883956 kubelet[2261]: I0513 23:56:36.883923 2261 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 13 23:56:36.883956 kubelet[2261]: I0513 23:56:36.883945 2261 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:56:36.883956 kubelet[2261]: I0513 23:56:36.883966 2261 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:56:36.884288 kubelet[2261]: I0513 23:56:36.883996 2261 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:56:36.884288 
kubelet[2261]: I0513 23:56:36.884021 2261 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:56:36.884288 kubelet[2261]: I0513 23:56:36.884042 2261 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f55411009f5a2c040023cb584c788032-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f55411009f5a2c040023cb584c788032\") " pod="kube-system/kube-apiserver-localhost" May 13 23:56:36.884288 kubelet[2261]: I0513 23:56:36.884073 2261 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f55411009f5a2c040023cb584c788032-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f55411009f5a2c040023cb584c788032\") " pod="kube-system/kube-apiserver-localhost" May 13 23:56:36.884288 kubelet[2261]: I0513 23:56:36.884126 2261 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:56:37.002773 kubelet[2261]: I0513 23:56:37.002196 2261 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 23:56:37.003165 kubelet[2261]: E0513 23:56:37.003101 2261 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.86:6443/api/v1/nodes\": dial tcp 10.0.0.86:6443: connect: connection refused" node="localhost" May 
13 23:56:37.109250 containerd[1508]: time="2025-05-13T23:56:37.108603959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f55411009f5a2c040023cb584c788032,Namespace:kube-system,Attempt:0,}" May 13 23:56:37.126315 containerd[1508]: time="2025-05-13T23:56:37.125751696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 13 23:56:37.138335 containerd[1508]: time="2025-05-13T23:56:37.138163285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 13 23:56:37.186072 kubelet[2261]: E0513 23:56:37.185977 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="800ms" May 13 23:56:37.415343 kubelet[2261]: I0513 23:56:37.410668 2261 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 23:56:37.415343 kubelet[2261]: E0513 23:56:37.412947 2261 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.86:6443/api/v1/nodes\": dial tcp 10.0.0.86:6443: connect: connection refused" node="localhost" May 13 23:56:37.449057 containerd[1508]: time="2025-05-13T23:56:37.448906011Z" level=info msg="connecting to shim 3c623510cfacb7cd8be7e9f14c5708384af0baf16819f3984aff6c8ad79348e7" address="unix:///run/containerd/s/c56e0da14a89db2715f8c993e80eddea42610cec9b500a3cdd53d430690bf7c2" namespace=k8s.io protocol=ttrpc version=3 May 13 23:56:37.489748 containerd[1508]: time="2025-05-13T23:56:37.488868278Z" level=info msg="connecting to shim c55a97b2a95affc6d36818a500d4ba32e4b125a478cfa98840753bb30a004eed" 
address="unix:///run/containerd/s/a34acf7757bd4707928ee377627f8b6398092c2e30fbb143f601a12d06ba8527" namespace=k8s.io protocol=ttrpc version=3 May 13 23:56:37.493174 containerd[1508]: time="2025-05-13T23:56:37.492419632Z" level=info msg="connecting to shim 35fb17e052d856567cf8add9696642c74d3dad9c15b70701f1e25905f97e12ee" address="unix:///run/containerd/s/7190b1925a4283292669005b6f35ee586e7396b0f7af1cd3c056dc06fe24359c" namespace=k8s.io protocol=ttrpc version=3 May 13 23:56:37.533860 kubelet[2261]: W0513 23:56:37.533755 2261 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused May 13 23:56:37.533860 kubelet[2261]: E0513 23:56:37.533831 2261 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" May 13 23:56:37.536468 systemd[1]: Started cri-containerd-3c623510cfacb7cd8be7e9f14c5708384af0baf16819f3984aff6c8ad79348e7.scope - libcontainer container 3c623510cfacb7cd8be7e9f14c5708384af0baf16819f3984aff6c8ad79348e7. May 13 23:56:37.548912 systemd[1]: Started cri-containerd-35fb17e052d856567cf8add9696642c74d3dad9c15b70701f1e25905f97e12ee.scope - libcontainer container 35fb17e052d856567cf8add9696642c74d3dad9c15b70701f1e25905f97e12ee. May 13 23:56:37.556604 systemd[1]: Started cri-containerd-c55a97b2a95affc6d36818a500d4ba32e4b125a478cfa98840753bb30a004eed.scope - libcontainer container c55a97b2a95affc6d36818a500d4ba32e4b125a478cfa98840753bb30a004eed. 
May 13 23:56:37.622669 containerd[1508]: time="2025-05-13T23:56:37.622617138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f55411009f5a2c040023cb584c788032,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c623510cfacb7cd8be7e9f14c5708384af0baf16819f3984aff6c8ad79348e7\""
May 13 23:56:37.630143 containerd[1508]: time="2025-05-13T23:56:37.629904993Z" level=info msg="CreateContainer within sandbox \"3c623510cfacb7cd8be7e9f14c5708384af0baf16819f3984aff6c8ad79348e7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 13 23:56:37.659988 containerd[1508]: time="2025-05-13T23:56:37.659935854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"35fb17e052d856567cf8add9696642c74d3dad9c15b70701f1e25905f97e12ee\""
May 13 23:56:37.665177 containerd[1508]: time="2025-05-13T23:56:37.663535438Z" level=info msg="CreateContainer within sandbox \"35fb17e052d856567cf8add9696642c74d3dad9c15b70701f1e25905f97e12ee\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 13 23:56:37.669935 containerd[1508]: time="2025-05-13T23:56:37.669320383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c55a97b2a95affc6d36818a500d4ba32e4b125a478cfa98840753bb30a004eed\""
May 13 23:56:37.673733 containerd[1508]: time="2025-05-13T23:56:37.673684973Z" level=info msg="CreateContainer within sandbox \"c55a97b2a95affc6d36818a500d4ba32e4b125a478cfa98840753bb30a004eed\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 13 23:56:37.687872 containerd[1508]: time="2025-05-13T23:56:37.687071111Z" level=info msg="Container 84c3211ee89d2a0615e4cf43cda852ec91fa61d67ee026699c06ab9299104382: CDI devices from CRI Config.CDIDevices: []"
May 13 23:56:37.730030 containerd[1508]: time="2025-05-13T23:56:37.726626514Z" level=info msg="Container f725328f295936080ed708b7aabddcb8805890f1de4bee97a89fba4c1db8aea0: CDI devices from CRI Config.CDIDevices: []"
May 13 23:56:37.739274 containerd[1508]: time="2025-05-13T23:56:37.735990635Z" level=info msg="Container 97eb9ca1eeec515d17b2188530c01e5dc355e2ce765768ffa41b9f7ef44386eb: CDI devices from CRI Config.CDIDevices: []"
May 13 23:56:37.739274 containerd[1508]: time="2025-05-13T23:56:37.737470171Z" level=info msg="CreateContainer within sandbox \"3c623510cfacb7cd8be7e9f14c5708384af0baf16819f3984aff6c8ad79348e7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"84c3211ee89d2a0615e4cf43cda852ec91fa61d67ee026699c06ab9299104382\""
May 13 23:56:37.745150 containerd[1508]: time="2025-05-13T23:56:37.745040196Z" level=info msg="StartContainer for \"84c3211ee89d2a0615e4cf43cda852ec91fa61d67ee026699c06ab9299104382\""
May 13 23:56:37.746882 containerd[1508]: time="2025-05-13T23:56:37.746820286Z" level=info msg="connecting to shim 84c3211ee89d2a0615e4cf43cda852ec91fa61d67ee026699c06ab9299104382" address="unix:///run/containerd/s/c56e0da14a89db2715f8c993e80eddea42610cec9b500a3cdd53d430690bf7c2" protocol=ttrpc version=3
May 13 23:56:37.768990 containerd[1508]: time="2025-05-13T23:56:37.768802765Z" level=info msg="CreateContainer within sandbox \"c55a97b2a95affc6d36818a500d4ba32e4b125a478cfa98840753bb30a004eed\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f725328f295936080ed708b7aabddcb8805890f1de4bee97a89fba4c1db8aea0\""
May 13 23:56:37.769695 containerd[1508]: time="2025-05-13T23:56:37.769475046Z" level=info msg="StartContainer for \"f725328f295936080ed708b7aabddcb8805890f1de4bee97a89fba4c1db8aea0\""
May 13 23:56:37.775474 containerd[1508]: time="2025-05-13T23:56:37.772678778Z" level=info msg="connecting to shim f725328f295936080ed708b7aabddcb8805890f1de4bee97a89fba4c1db8aea0" address="unix:///run/containerd/s/a34acf7757bd4707928ee377627f8b6398092c2e30fbb143f601a12d06ba8527" protocol=ttrpc version=3
May 13 23:56:37.789344 containerd[1508]: time="2025-05-13T23:56:37.789288555Z" level=info msg="CreateContainer within sandbox \"35fb17e052d856567cf8add9696642c74d3dad9c15b70701f1e25905f97e12ee\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"97eb9ca1eeec515d17b2188530c01e5dc355e2ce765768ffa41b9f7ef44386eb\""
May 13 23:56:37.790489 containerd[1508]: time="2025-05-13T23:56:37.790433313Z" level=info msg="StartContainer for \"97eb9ca1eeec515d17b2188530c01e5dc355e2ce765768ffa41b9f7ef44386eb\""
May 13 23:56:37.793489 containerd[1508]: time="2025-05-13T23:56:37.792051299Z" level=info msg="connecting to shim 97eb9ca1eeec515d17b2188530c01e5dc355e2ce765768ffa41b9f7ef44386eb" address="unix:///run/containerd/s/7190b1925a4283292669005b6f35ee586e7396b0f7af1cd3c056dc06fe24359c" protocol=ttrpc version=3
May 13 23:56:37.796907 systemd[1]: Started cri-containerd-84c3211ee89d2a0615e4cf43cda852ec91fa61d67ee026699c06ab9299104382.scope - libcontainer container 84c3211ee89d2a0615e4cf43cda852ec91fa61d67ee026699c06ab9299104382.
May 13 23:56:37.826482 systemd[1]: Started cri-containerd-f725328f295936080ed708b7aabddcb8805890f1de4bee97a89fba4c1db8aea0.scope - libcontainer container f725328f295936080ed708b7aabddcb8805890f1de4bee97a89fba4c1db8aea0.
May 13 23:56:37.841602 systemd[1]: Started cri-containerd-97eb9ca1eeec515d17b2188530c01e5dc355e2ce765768ffa41b9f7ef44386eb.scope - libcontainer container 97eb9ca1eeec515d17b2188530c01e5dc355e2ce765768ffa41b9f7ef44386eb.
May 13 23:56:37.860331 kubelet[2261]: W0513 23:56:37.859916 2261 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused
May 13 23:56:37.860331 kubelet[2261]: E0513 23:56:37.859976 2261 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError"
May 13 23:56:37.949462 containerd[1508]: time="2025-05-13T23:56:37.949257655Z" level=info msg="StartContainer for \"f725328f295936080ed708b7aabddcb8805890f1de4bee97a89fba4c1db8aea0\" returns successfully"
May 13 23:56:37.954293 containerd[1508]: time="2025-05-13T23:56:37.954228683Z" level=info msg="StartContainer for \"84c3211ee89d2a0615e4cf43cda852ec91fa61d67ee026699c06ab9299104382\" returns successfully"
May 13 23:56:37.965221 containerd[1508]: time="2025-05-13T23:56:37.964453890Z" level=info msg="StartContainer for \"97eb9ca1eeec515d17b2188530c01e5dc355e2ce765768ffa41b9f7ef44386eb\" returns successfully"
May 13 23:56:37.966644 kubelet[2261]: W0513 23:56:37.966294 2261 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused
May 13 23:56:37.966644 kubelet[2261]: E0513 23:56:37.966436 2261 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError"
May 13 23:56:37.987252 kubelet[2261]: E0513 23:56:37.987167 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="1.6s"
May 13 23:56:38.219889 kubelet[2261]: I0513 23:56:38.218918 2261 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 13 23:56:39.562408 kubelet[2261]: I0513 23:56:39.562300 2261 apiserver.go:52] "Watching apiserver"
May 13 23:56:39.585888 kubelet[2261]: I0513 23:56:39.585254 2261 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
May 13 23:56:39.647405 kubelet[2261]: I0513 23:56:39.647359 2261 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 13 23:56:39.689546 kubelet[2261]: E0513 23:56:39.689488 2261 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
May 13 23:56:43.139756 systemd[1]: Reload requested from client PID 2530 ('systemctl') (unit session-7.scope)...
May 13 23:56:43.139780 systemd[1]: Reloading...
May 13 23:56:43.301134 zram_generator::config[2577]: No configuration found.
May 13 23:56:43.540175 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:56:43.756340 systemd[1]: Reloading finished in 615 ms.
May 13 23:56:43.795939 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:56:43.817043 systemd[1]: kubelet.service: Deactivated successfully.
May 13 23:56:43.817514 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:56:43.817851 systemd[1]: kubelet.service: Consumed 1.266s CPU time, 123.1M memory peak.
May 13 23:56:43.825468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:56:44.088317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:56:44.108680 (kubelet)[2618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 23:56:44.212915 kubelet[2618]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:56:44.212915 kubelet[2618]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 23:56:44.212915 kubelet[2618]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:56:44.212915 kubelet[2618]: I0513 23:56:44.212378 2618 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 23:56:44.223755 kubelet[2618]: I0513 23:56:44.223673 2618 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 13 23:56:44.223755 kubelet[2618]: I0513 23:56:44.223721 2618 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 23:56:44.226170 kubelet[2618]: I0513 23:56:44.224033 2618 server.go:929] "Client rotation is on, will bootstrap in background"
May 13 23:56:44.226170 kubelet[2618]: I0513 23:56:44.225774 2618 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 13 23:56:44.236258 kubelet[2618]: I0513 23:56:44.234915 2618 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 23:56:44.246988 kubelet[2618]: I0513 23:56:44.246910 2618 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 13 23:56:44.254049 kubelet[2618]: I0513 23:56:44.253988 2618 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 23:56:44.254432 kubelet[2618]: I0513 23:56:44.254211 2618 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 13 23:56:44.254470 kubelet[2618]: I0513 23:56:44.254428 2618 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 23:56:44.255965 kubelet[2618]: I0513 23:56:44.254464 2618 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 13 23:56:44.255965 kubelet[2618]: I0513 23:56:44.254703 2618 topology_manager.go:138] "Creating topology manager with none policy"
May 13 23:56:44.255965 kubelet[2618]: I0513 23:56:44.254715 2618 container_manager_linux.go:300] "Creating device plugin manager"
May 13 23:56:44.255965 kubelet[2618]: I0513 23:56:44.254752 2618 state_mem.go:36] "Initialized new in-memory state store"
May 13 23:56:44.255965 kubelet[2618]: I0513 23:56:44.254901 2618 kubelet.go:408] "Attempting to sync node with API server"
May 13 23:56:44.256490 kubelet[2618]: I0513 23:56:44.254919 2618 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 23:56:44.257267 kubelet[2618]: I0513 23:56:44.257217 2618 kubelet.go:314] "Adding apiserver pod source"
May 13 23:56:44.257267 kubelet[2618]: I0513 23:56:44.257291 2618 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 23:56:44.262215 kubelet[2618]: I0513 23:56:44.261059 2618 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 13 23:56:44.262215 kubelet[2618]: I0513 23:56:44.261714 2618 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 23:56:44.262422 kubelet[2618]: I0513 23:56:44.262333 2618 server.go:1269] "Started kubelet"
May 13 23:56:44.262553 kubelet[2618]: I0513 23:56:44.262494 2618 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 23:56:44.262721 kubelet[2618]: I0513 23:56:44.262651 2618 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 23:56:44.263010 kubelet[2618]: I0513 23:56:44.262978 2618 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 23:56:44.265531 kubelet[2618]: I0513 23:56:44.263594 2618 server.go:460] "Adding debug handlers to kubelet server"
May 13 23:56:44.266309 kubelet[2618]: I0513 23:56:44.265974 2618 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 23:56:44.275554 kubelet[2618]: I0513 23:56:44.275390 2618 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 13 23:56:44.275895 kubelet[2618]: I0513 23:56:44.275638 2618 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 13 23:56:44.275895 kubelet[2618]: I0513 23:56:44.275798 2618 reconciler.go:26] "Reconciler: start to sync state"
May 13 23:56:44.276110 kubelet[2618]: I0513 23:56:44.269328 2618 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 13 23:56:44.279075 kubelet[2618]: E0513 23:56:44.276858 2618 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 23:56:44.280070 kubelet[2618]: I0513 23:56:44.280009 2618 factory.go:221] Registration of the systemd container factory successfully
May 13 23:56:44.280349 kubelet[2618]: I0513 23:56:44.280287 2618 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 23:56:44.284596 kubelet[2618]: I0513 23:56:44.284556 2618 factory.go:221] Registration of the containerd container factory successfully
May 13 23:56:44.301467 kubelet[2618]: I0513 23:56:44.301231 2618 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 23:56:44.303594 kubelet[2618]: I0513 23:56:44.303311 2618 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 23:56:44.303594 kubelet[2618]: I0513 23:56:44.303379 2618 status_manager.go:217] "Starting to sync pod status with apiserver"
May 13 23:56:44.303594 kubelet[2618]: I0513 23:56:44.303404 2618 kubelet.go:2321] "Starting kubelet main sync loop"
May 13 23:56:44.303594 kubelet[2618]: E0513 23:56:44.303470 2618 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 23:56:44.345433 kubelet[2618]: I0513 23:56:44.344016 2618 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 13 23:56:44.345433 kubelet[2618]: I0513 23:56:44.344042 2618 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 13 23:56:44.345433 kubelet[2618]: I0513 23:56:44.344067 2618 state_mem.go:36] "Initialized new in-memory state store"
May 13 23:56:44.345433 kubelet[2618]: I0513 23:56:44.344350 2618 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 13 23:56:44.345433 kubelet[2618]: I0513 23:56:44.344366 2618 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 13 23:56:44.345433 kubelet[2618]: I0513 23:56:44.344393 2618 policy_none.go:49] "None policy: Start"
May 13 23:56:44.348658 kubelet[2618]: I0513 23:56:44.348595 2618 memory_manager.go:170] "Starting memorymanager" policy="None"
May 13 23:56:44.348658 kubelet[2618]: I0513 23:56:44.348654 2618 state_mem.go:35] "Initializing new in-memory state store"
May 13 23:56:44.349040 kubelet[2618]: I0513 23:56:44.349013 2618 state_mem.go:75] "Updated machine memory state"
May 13 23:56:44.359467 kubelet[2618]: I0513 23:56:44.357809 2618 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 23:56:44.359467 kubelet[2618]: I0513 23:56:44.358065 2618 eviction_manager.go:189] "Eviction manager: starting control loop"
May 13 23:56:44.359467 kubelet[2618]: I0513 23:56:44.358096 2618 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 23:56:44.359467 kubelet[2618]: I0513 23:56:44.359272 2618 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 23:56:44.418600 kubelet[2618]: E0513 23:56:44.418486 2618 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 13 23:56:44.421502 kubelet[2618]: E0513 23:56:44.421262 2618 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 13 23:56:44.464069 kubelet[2618]: I0513 23:56:44.464021 2618 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 13 23:56:44.479224 kubelet[2618]: I0513 23:56:44.479142 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f55411009f5a2c040023cb584c788032-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f55411009f5a2c040023cb584c788032\") " pod="kube-system/kube-apiserver-localhost"
May 13 23:56:44.479224 kubelet[2618]: I0513 23:56:44.479191 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 23:56:44.479224 kubelet[2618]: I0513 23:56:44.479219 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 23:56:44.479224 kubelet[2618]: I0513 23:56:44.479241 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 23:56:44.479566 kubelet[2618]: I0513 23:56:44.479276 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost"
May 13 23:56:44.479566 kubelet[2618]: I0513 23:56:44.479296 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f55411009f5a2c040023cb584c788032-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f55411009f5a2c040023cb584c788032\") " pod="kube-system/kube-apiserver-localhost"
May 13 23:56:44.479566 kubelet[2618]: I0513 23:56:44.479324 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f55411009f5a2c040023cb584c788032-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f55411009f5a2c040023cb584c788032\") " pod="kube-system/kube-apiserver-localhost"
May 13 23:56:44.479566 kubelet[2618]: I0513 23:56:44.479348 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 23:56:44.479566 kubelet[2618]: I0513 23:56:44.479387 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 23:56:44.511449 kubelet[2618]: I0513 23:56:44.511229 2618 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
May 13 23:56:44.512530 kubelet[2618]: I0513 23:56:44.511702 2618 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 13 23:56:45.262020 kubelet[2618]: I0513 23:56:45.261574 2618 apiserver.go:52] "Watching apiserver"
May 13 23:56:45.277635 kubelet[2618]: I0513 23:56:45.277452 2618 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
May 13 23:56:45.493242 kubelet[2618]: I0513 23:56:45.485691 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.485569361 podStartE2EDuration="5.485569361s" podCreationTimestamp="2025-05-13 23:56:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:56:45.429039097 +0000 UTC m=+1.311620216" watchObservedRunningTime="2025-05-13 23:56:45.485569361 +0000 UTC m=+1.368150490"
May 13 23:56:45.580505 kubelet[2618]: I0513 23:56:45.579105 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5790653460000001 podStartE2EDuration="1.579065346s" podCreationTimestamp="2025-05-13 23:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:56:45.576487862 +0000 UTC m=+1.459069001" watchObservedRunningTime="2025-05-13 23:56:45.579065346 +0000 UTC m=+1.461646485"
May 13 23:56:45.580505 kubelet[2618]: I0513 23:56:45.579268 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.579258565 podStartE2EDuration="5.579258565s" podCreationTimestamp="2025-05-13 23:56:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:56:45.491767676 +0000 UTC m=+1.374348805" watchObservedRunningTime="2025-05-13 23:56:45.579258565 +0000 UTC m=+1.461839704"
May 13 23:56:45.617216 kernel: hrtimer: interrupt took 3111752 ns
May 13 23:56:47.408196 kubelet[2618]: I0513 23:56:47.403838 2618 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 13 23:56:47.417579 containerd[1508]: time="2025-05-13T23:56:47.409848095Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 13 23:56:47.422462 kubelet[2618]: I0513 23:56:47.417837 2618 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 13 23:56:48.157632 systemd[1]: Created slice kubepods-besteffort-pode0208b5f_7c58_4105_8004_83c8e07c7a57.slice - libcontainer container kubepods-besteffort-pode0208b5f_7c58_4105_8004_83c8e07c7a57.slice.
May 13 23:56:48.169721 kubelet[2618]: I0513 23:56:48.168013 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0208b5f-7c58-4105-8004-83c8e07c7a57-xtables-lock\") pod \"kube-proxy-wlbbb\" (UID: \"e0208b5f-7c58-4105-8004-83c8e07c7a57\") " pod="kube-system/kube-proxy-wlbbb"
May 13 23:56:48.169721 kubelet[2618]: I0513 23:56:48.168075 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78hrb\" (UniqueName: \"kubernetes.io/projected/e0208b5f-7c58-4105-8004-83c8e07c7a57-kube-api-access-78hrb\") pod \"kube-proxy-wlbbb\" (UID: \"e0208b5f-7c58-4105-8004-83c8e07c7a57\") " pod="kube-system/kube-proxy-wlbbb"
May 13 23:56:48.169721 kubelet[2618]: I0513 23:56:48.168111 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e0208b5f-7c58-4105-8004-83c8e07c7a57-kube-proxy\") pod \"kube-proxy-wlbbb\" (UID: \"e0208b5f-7c58-4105-8004-83c8e07c7a57\") " pod="kube-system/kube-proxy-wlbbb"
May 13 23:56:48.169721 kubelet[2618]: I0513 23:56:48.168137 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0208b5f-7c58-4105-8004-83c8e07c7a57-lib-modules\") pod \"kube-proxy-wlbbb\" (UID: \"e0208b5f-7c58-4105-8004-83c8e07c7a57\") " pod="kube-system/kube-proxy-wlbbb"
May 13 23:56:48.631817 systemd[1]: Created slice kubepods-besteffort-pod20b63aad_0d5c_44c1_b379_8f57d070b163.slice - libcontainer container kubepods-besteffort-pod20b63aad_0d5c_44c1_b379_8f57d070b163.slice.
May 13 23:56:48.672957 kubelet[2618]: I0513 23:56:48.672877 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/20b63aad-0d5c-44c1-b379-8f57d070b163-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-pbfsc\" (UID: \"20b63aad-0d5c-44c1-b379-8f57d070b163\") " pod="tigera-operator/tigera-operator-6f6897fdc5-pbfsc"
May 13 23:56:48.672957 kubelet[2618]: I0513 23:56:48.672929 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqpkd\" (UniqueName: \"kubernetes.io/projected/20b63aad-0d5c-44c1-b379-8f57d070b163-kube-api-access-wqpkd\") pod \"tigera-operator-6f6897fdc5-pbfsc\" (UID: \"20b63aad-0d5c-44c1-b379-8f57d070b163\") " pod="tigera-operator/tigera-operator-6f6897fdc5-pbfsc"
May 13 23:56:48.787244 containerd[1508]: time="2025-05-13T23:56:48.787176314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wlbbb,Uid:e0208b5f-7c58-4105-8004-83c8e07c7a57,Namespace:kube-system,Attempt:0,}"
May 13 23:56:48.949001 containerd[1508]: time="2025-05-13T23:56:48.947424687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-pbfsc,Uid:20b63aad-0d5c-44c1-b379-8f57d070b163,Namespace:tigera-operator,Attempt:0,}"
May 13 23:56:48.981221 containerd[1508]: time="2025-05-13T23:56:48.981140719Z" level=info msg="connecting to shim bf6c9642879c9304a80544dc95b0ad1922a48041cad3ebd24aed619e8bcbb9d5" address="unix:///run/containerd/s/51c0fb6c3de0e1e332d5292e65fb9ddd3a14dc939e4c7cd5051efa89349cd12a" namespace=k8s.io protocol=ttrpc version=3
May 13 23:56:49.053152 containerd[1508]: time="2025-05-13T23:56:49.050524350Z" level=info msg="connecting to shim fd421cae11bc306248672ffccbf308d39b693c5905cba9fe8b0966bfdb559c6d" address="unix:///run/containerd/s/0b0b68065c4348ba1394349cc19d34254e856480e1d498bd8c8d2be9dc4a0025" namespace=k8s.io protocol=ttrpc version=3
May 13 23:56:49.144285 systemd[1]: Started cri-containerd-fd421cae11bc306248672ffccbf308d39b693c5905cba9fe8b0966bfdb559c6d.scope - libcontainer container fd421cae11bc306248672ffccbf308d39b693c5905cba9fe8b0966bfdb559c6d.
May 13 23:56:49.162807 systemd[1]: Started cri-containerd-bf6c9642879c9304a80544dc95b0ad1922a48041cad3ebd24aed619e8bcbb9d5.scope - libcontainer container bf6c9642879c9304a80544dc95b0ad1922a48041cad3ebd24aed619e8bcbb9d5.
May 13 23:56:49.227309 containerd[1508]: time="2025-05-13T23:56:49.226506861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wlbbb,Uid:e0208b5f-7c58-4105-8004-83c8e07c7a57,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf6c9642879c9304a80544dc95b0ad1922a48041cad3ebd24aed619e8bcbb9d5\""
May 13 23:56:49.230442 containerd[1508]: time="2025-05-13T23:56:49.230243268Z" level=info msg="CreateContainer within sandbox \"bf6c9642879c9304a80544dc95b0ad1922a48041cad3ebd24aed619e8bcbb9d5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 13 23:56:49.282898 containerd[1508]: time="2025-05-13T23:56:49.280524737Z" level=info msg="Container 292de004891420ff10bd2c2a22a73e2405eacb51fca91d9b67f496fe4cb65d88: CDI devices from CRI Config.CDIDevices: []"
May 13 23:56:49.287202 containerd[1508]: time="2025-05-13T23:56:49.285715132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-pbfsc,Uid:20b63aad-0d5c-44c1-b379-8f57d070b163,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fd421cae11bc306248672ffccbf308d39b693c5905cba9fe8b0966bfdb559c6d\""
May 13 23:56:49.292780 containerd[1508]: time="2025-05-13T23:56:49.291677363Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 13 23:56:49.317554 containerd[1508]: time="2025-05-13T23:56:49.317467794Z" level=info msg="CreateContainer within sandbox \"bf6c9642879c9304a80544dc95b0ad1922a48041cad3ebd24aed619e8bcbb9d5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"292de004891420ff10bd2c2a22a73e2405eacb51fca91d9b67f496fe4cb65d88\""
May 13 23:56:49.324118 containerd[1508]: time="2025-05-13T23:56:49.323570932Z" level=info msg="StartContainer for \"292de004891420ff10bd2c2a22a73e2405eacb51fca91d9b67f496fe4cb65d88\""
May 13 23:56:49.329573 containerd[1508]: time="2025-05-13T23:56:49.325944193Z" level=info msg="connecting to shim 292de004891420ff10bd2c2a22a73e2405eacb51fca91d9b67f496fe4cb65d88" address="unix:///run/containerd/s/51c0fb6c3de0e1e332d5292e65fb9ddd3a14dc939e4c7cd5051efa89349cd12a" protocol=ttrpc version=3
May 13 23:56:49.395539 systemd[1]: Started cri-containerd-292de004891420ff10bd2c2a22a73e2405eacb51fca91d9b67f496fe4cb65d88.scope - libcontainer container 292de004891420ff10bd2c2a22a73e2405eacb51fca91d9b67f496fe4cb65d88.
May 13 23:56:49.503427 containerd[1508]: time="2025-05-13T23:56:49.501962512Z" level=info msg="StartContainer for \"292de004891420ff10bd2c2a22a73e2405eacb51fca91d9b67f496fe4cb65d88\" returns successfully"
May 13 23:56:49.739498 sudo[1701]: pam_unix(sudo:session): session closed for user root
May 13 23:56:49.752487 sshd[1700]: Connection closed by 10.0.0.1 port 43816
May 13 23:56:49.756392 sshd-session[1697]: pam_unix(sshd:session): session closed for user core
May 13 23:56:49.779750 systemd[1]: sshd@6-10.0.0.86:22-10.0.0.1:43816.service: Deactivated successfully.
May 13 23:56:49.794029 systemd[1]: session-7.scope: Deactivated successfully.
May 13 23:56:49.794388 systemd[1]: session-7.scope: Consumed 6.317s CPU time, 221.1M memory peak.
May 13 23:56:49.807923 systemd-logind[1488]: Session 7 logged out. Waiting for processes to exit.
May 13 23:56:49.812181 systemd-logind[1488]: Removed session 7.
May 13 23:56:50.223582 update_engine[1491]: I20250513 23:56:50.221383 1491 update_attempter.cc:509] Updating boot flags...
May 13 23:56:50.271317 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2921)
May 13 23:56:51.228992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount530871637.mount: Deactivated successfully.
May 13 23:56:52.847132 kubelet[2618]: I0513 23:56:52.846369 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wlbbb" podStartSLOduration=4.8463471 podStartE2EDuration="4.8463471s" podCreationTimestamp="2025-05-13 23:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:56:50.508292216 +0000 UTC m=+6.390873335" watchObservedRunningTime="2025-05-13 23:56:52.8463471 +0000 UTC m=+8.728928219"
May 13 23:56:53.520998 containerd[1508]: time="2025-05-13T23:56:53.520915659Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:56:53.525066 containerd[1508]: time="2025-05-13T23:56:53.524901631Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662"
May 13 23:56:53.533157 containerd[1508]: time="2025-05-13T23:56:53.531223854Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:56:53.540850 containerd[1508]: time="2025-05-13T23:56:53.540565751Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:56:53.542481 containerd[1508]: time="2025-05-13T23:56:53.541769188Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 4.250031381s"
May 13 23:56:53.542481 containerd[1508]: time="2025-05-13T23:56:53.541985808Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\""
May 13 23:56:53.549015 containerd[1508]: time="2025-05-13T23:56:53.548934637Z" level=info msg="CreateContainer within sandbox \"fd421cae11bc306248672ffccbf308d39b693c5905cba9fe8b0966bfdb559c6d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 13 23:56:53.577514 containerd[1508]: time="2025-05-13T23:56:53.576701117Z" level=info msg="Container 24e780a6af7ee25c5adc3d39e1ddb007fa6fba02f001014ebeba7dea1edf6e65: CDI devices from CRI Config.CDIDevices: []"
May 13 23:56:53.605844 containerd[1508]: time="2025-05-13T23:56:53.605739405Z" level=info msg="CreateContainer within sandbox \"fd421cae11bc306248672ffccbf308d39b693c5905cba9fe8b0966bfdb559c6d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"24e780a6af7ee25c5adc3d39e1ddb007fa6fba02f001014ebeba7dea1edf6e65\""
May 13 23:56:53.610304 containerd[1508]: time="2025-05-13T23:56:53.610232827Z" level=info msg="StartContainer for \"24e780a6af7ee25c5adc3d39e1ddb007fa6fba02f001014ebeba7dea1edf6e65\""
May 13 23:56:53.611974 containerd[1508]: time="2025-05-13T23:56:53.611471592Z" level=info msg="connecting to shim 24e780a6af7ee25c5adc3d39e1ddb007fa6fba02f001014ebeba7dea1edf6e65" address="unix:///run/containerd/s/0b0b68065c4348ba1394349cc19d34254e856480e1d498bd8c8d2be9dc4a0025" protocol=ttrpc version=3
May 13 23:56:53.682459 systemd[1]: Started cri-containerd-24e780a6af7ee25c5adc3d39e1ddb007fa6fba02f001014ebeba7dea1edf6e65.scope - libcontainer container 24e780a6af7ee25c5adc3d39e1ddb007fa6fba02f001014ebeba7dea1edf6e65.
May 13 23:56:53.800528 containerd[1508]: time="2025-05-13T23:56:53.800367588Z" level=info msg="StartContainer for \"24e780a6af7ee25c5adc3d39e1ddb007fa6fba02f001014ebeba7dea1edf6e65\" returns successfully"
May 13 23:56:54.484799 kubelet[2618]: I0513 23:56:54.484231 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-pbfsc" podStartSLOduration=2.227050443 podStartE2EDuration="6.484205016s" podCreationTimestamp="2025-05-13 23:56:48 +0000 UTC" firstStartedPulling="2025-05-13 23:56:49.288847685 +0000 UTC m=+5.171428814" lastFinishedPulling="2025-05-13 23:56:53.546002258 +0000 UTC m=+9.428583387" observedRunningTime="2025-05-13 23:56:54.442421092 +0000 UTC m=+10.325002221" watchObservedRunningTime="2025-05-13 23:56:54.484205016 +0000 UTC m=+10.366786145"
May 13 23:56:57.534146 systemd[1]: Created slice kubepods-besteffort-podefbc45b3_c90e_45a6_b812_e894cff23b32.slice - libcontainer container kubepods-besteffort-podefbc45b3_c90e_45a6_b812_e894cff23b32.slice.
May 13 23:56:57.543462 kubelet[2618]: I0513 23:56:57.543374 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfmm9\" (UniqueName: \"kubernetes.io/projected/efbc45b3-c90e-45a6-b812-e894cff23b32-kube-api-access-wfmm9\") pod \"calico-typha-86cc944ccb-jdm5g\" (UID: \"efbc45b3-c90e-45a6-b812-e894cff23b32\") " pod="calico-system/calico-typha-86cc944ccb-jdm5g"
May 13 23:56:57.543462 kubelet[2618]: I0513 23:56:57.543453 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efbc45b3-c90e-45a6-b812-e894cff23b32-tigera-ca-bundle\") pod \"calico-typha-86cc944ccb-jdm5g\" (UID: \"efbc45b3-c90e-45a6-b812-e894cff23b32\") " pod="calico-system/calico-typha-86cc944ccb-jdm5g"
May 13 23:56:57.543988 kubelet[2618]: I0513 23:56:57.543478 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/efbc45b3-c90e-45a6-b812-e894cff23b32-typha-certs\") pod \"calico-typha-86cc944ccb-jdm5g\" (UID: \"efbc45b3-c90e-45a6-b812-e894cff23b32\") " pod="calico-system/calico-typha-86cc944ccb-jdm5g"
May 13 23:56:57.823107 systemd[1]: Created slice kubepods-besteffort-podfa7d3570_a6f7_404c_8b38_c122e7c2308c.slice - libcontainer container kubepods-besteffort-podfa7d3570_a6f7_404c_8b38_c122e7c2308c.slice.
May 13 23:56:57.847211 kubelet[2618]: I0513 23:56:57.845664 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fa7d3570-a6f7-404c-8b38-c122e7c2308c-node-certs\") pod \"calico-node-9mz9r\" (UID: \"fa7d3570-a6f7-404c-8b38-c122e7c2308c\") " pod="calico-system/calico-node-9mz9r"
May 13 23:56:57.847211 kubelet[2618]: I0513 23:56:57.845715 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa7d3570-a6f7-404c-8b38-c122e7c2308c-tigera-ca-bundle\") pod \"calico-node-9mz9r\" (UID: \"fa7d3570-a6f7-404c-8b38-c122e7c2308c\") " pod="calico-system/calico-node-9mz9r"
May 13 23:56:57.847211 kubelet[2618]: I0513 23:56:57.845736 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgqq9\" (UniqueName: \"kubernetes.io/projected/fa7d3570-a6f7-404c-8b38-c122e7c2308c-kube-api-access-tgqq9\") pod \"calico-node-9mz9r\" (UID: \"fa7d3570-a6f7-404c-8b38-c122e7c2308c\") " pod="calico-system/calico-node-9mz9r"
May 13 23:56:57.847211 kubelet[2618]: I0513 23:56:57.845758 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fa7d3570-a6f7-404c-8b38-c122e7c2308c-var-run-calico\") pod \"calico-node-9mz9r\" (UID: \"fa7d3570-a6f7-404c-8b38-c122e7c2308c\") " pod="calico-system/calico-node-9mz9r"
May 13 23:56:57.847211 kubelet[2618]: I0513 23:56:57.845777 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fa7d3570-a6f7-404c-8b38-c122e7c2308c-var-lib-calico\") pod \"calico-node-9mz9r\" (UID: \"fa7d3570-a6f7-404c-8b38-c122e7c2308c\") " pod="calico-system/calico-node-9mz9r"
May 13 23:56:57.847841 kubelet[2618]: I0513 23:56:57.845800 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fa7d3570-a6f7-404c-8b38-c122e7c2308c-flexvol-driver-host\") pod \"calico-node-9mz9r\" (UID: \"fa7d3570-a6f7-404c-8b38-c122e7c2308c\") " pod="calico-system/calico-node-9mz9r"
May 13 23:56:57.847841 kubelet[2618]: I0513 23:56:57.845817 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa7d3570-a6f7-404c-8b38-c122e7c2308c-xtables-lock\") pod \"calico-node-9mz9r\" (UID: \"fa7d3570-a6f7-404c-8b38-c122e7c2308c\") " pod="calico-system/calico-node-9mz9r"
May 13 23:56:57.847841 kubelet[2618]: I0513 23:56:57.845837 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa7d3570-a6f7-404c-8b38-c122e7c2308c-lib-modules\") pod \"calico-node-9mz9r\" (UID: \"fa7d3570-a6f7-404c-8b38-c122e7c2308c\") " pod="calico-system/calico-node-9mz9r"
May 13 23:56:57.847841 kubelet[2618]: I0513 23:56:57.845855 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fa7d3570-a6f7-404c-8b38-c122e7c2308c-policysync\") pod \"calico-node-9mz9r\" (UID: \"fa7d3570-a6f7-404c-8b38-c122e7c2308c\") " pod="calico-system/calico-node-9mz9r"
May 13 23:56:57.847841 kubelet[2618]: I0513 23:56:57.845872 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fa7d3570-a6f7-404c-8b38-c122e7c2308c-cni-log-dir\") pod \"calico-node-9mz9r\" (UID: \"fa7d3570-a6f7-404c-8b38-c122e7c2308c\") " pod="calico-system/calico-node-9mz9r"
May 13 23:56:57.848020 containerd[1508]: time="2025-05-13T23:56:57.847625413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86cc944ccb-jdm5g,Uid:efbc45b3-c90e-45a6-b812-e894cff23b32,Namespace:calico-system,Attempt:0,}"
May 13 23:56:57.848494 kubelet[2618]: I0513 23:56:57.845891 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fa7d3570-a6f7-404c-8b38-c122e7c2308c-cni-bin-dir\") pod \"calico-node-9mz9r\" (UID: \"fa7d3570-a6f7-404c-8b38-c122e7c2308c\") " pod="calico-system/calico-node-9mz9r"
May 13 23:56:57.848494 kubelet[2618]: I0513 23:56:57.845909 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fa7d3570-a6f7-404c-8b38-c122e7c2308c-cni-net-dir\") pod \"calico-node-9mz9r\" (UID: \"fa7d3570-a6f7-404c-8b38-c122e7c2308c\") " pod="calico-system/calico-node-9mz9r"
May 13 23:56:57.955748 kubelet[2618]: E0513 23:56:57.955667 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:57.955941 kubelet[2618]: W0513 23:56:57.955705 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:57.955941 kubelet[2618]: E0513 23:56:57.955826 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:57.960206 containerd[1508]: time="2025-05-13T23:56:57.959981393Z" level=info msg="connecting to shim 247936fcee7a3817cd9ee840bdca7c8e38fad830f337384312ff159e821b8835" address="unix:///run/containerd/s/8ea2685d4ff2dd8a8b32858b83121660cf17e7673805c8389a4c857aa11d87fe" namespace=k8s.io protocol=ttrpc version=3
May 13 23:56:57.960490 kubelet[2618]: E0513 23:56:57.960243 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:57.960490 kubelet[2618]: W0513 23:56:57.960262 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:57.960490 kubelet[2618]: E0513 23:56:57.960300 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:57.998429 kubelet[2618]: E0513 23:56:57.998344 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlkmq" podUID="7c772b54-e083-47fe-b4ff-69706c6198e1"
May 13 23:56:58.024438 systemd[1]: Started cri-containerd-247936fcee7a3817cd9ee840bdca7c8e38fad830f337384312ff159e821b8835.scope - libcontainer container 247936fcee7a3817cd9ee840bdca7c8e38fad830f337384312ff159e821b8835.
May 13 23:56:58.026053 kubelet[2618]: E0513 23:56:58.026031 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.026546 kubelet[2618]: W0513 23:56:58.026462 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.026546 kubelet[2618]: E0513 23:56:58.026497 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.069765 kubelet[2618]: E0513 23:56:58.069529 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.069765 kubelet[2618]: W0513 23:56:58.069565 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.069765 kubelet[2618]: E0513 23:56:58.069593 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.078154 kubelet[2618]: E0513 23:56:58.077015 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.085499 kubelet[2618]: W0513 23:56:58.079657 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.085499 kubelet[2618]: E0513 23:56:58.082530 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.088512 kubelet[2618]: E0513 23:56:58.087885 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.088512 kubelet[2618]: W0513 23:56:58.087914 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.088512 kubelet[2618]: E0513 23:56:58.087944 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.093512 kubelet[2618]: E0513 23:56:58.093130 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.093512 kubelet[2618]: W0513 23:56:58.093172 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.093512 kubelet[2618]: E0513 23:56:58.093205 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.093978 kubelet[2618]: E0513 23:56:58.093717 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.093978 kubelet[2618]: W0513 23:56:58.093728 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.093978 kubelet[2618]: E0513 23:56:58.093739 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.094589 kubelet[2618]: E0513 23:56:58.094184 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.094589 kubelet[2618]: W0513 23:56:58.094198 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.094589 kubelet[2618]: E0513 23:56:58.094213 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.094982 kubelet[2618]: E0513 23:56:58.094791 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.094982 kubelet[2618]: W0513 23:56:58.094803 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.094982 kubelet[2618]: E0513 23:56:58.094814 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.098839 kubelet[2618]: E0513 23:56:58.096897 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.098839 kubelet[2618]: W0513 23:56:58.096919 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.098839 kubelet[2618]: E0513 23:56:58.096941 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.098839 kubelet[2618]: E0513 23:56:58.098927 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.098839 kubelet[2618]: W0513 23:56:58.104947 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.098839 kubelet[2618]: E0513 23:56:58.104993 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.108974 kubelet[2618]: E0513 23:56:58.108560 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.108974 kubelet[2618]: W0513 23:56:58.108588 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.108974 kubelet[2618]: E0513 23:56:58.108618 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.120533 kubelet[2618]: E0513 23:56:58.120278 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.120533 kubelet[2618]: W0513 23:56:58.120314 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.120533 kubelet[2618]: E0513 23:56:58.120348 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.123309 kubelet[2618]: E0513 23:56:58.123275 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.123605 kubelet[2618]: W0513 23:56:58.123461 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.123605 kubelet[2618]: E0513 23:56:58.123505 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.127700 kubelet[2618]: E0513 23:56:58.127671 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.127952 kubelet[2618]: W0513 23:56:58.127839 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.127952 kubelet[2618]: E0513 23:56:58.127871 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.142993 kubelet[2618]: E0513 23:56:58.142600 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.142993 kubelet[2618]: W0513 23:56:58.142634 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.142993 kubelet[2618]: E0513 23:56:58.142710 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.144520 kubelet[2618]: E0513 23:56:58.144498 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.145346 kubelet[2618]: W0513 23:56:58.145011 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.145346 kubelet[2618]: E0513 23:56:58.145044 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.148550 kubelet[2618]: E0513 23:56:58.148520 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.148830 kubelet[2618]: W0513 23:56:58.148750 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.148830 kubelet[2618]: E0513 23:56:58.148779 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.150904 kubelet[2618]: E0513 23:56:58.150631 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.151672 kubelet[2618]: W0513 23:56:58.150824 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.151672 kubelet[2618]: E0513 23:56:58.151423 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.152770 kubelet[2618]: E0513 23:56:58.152626 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.152770 kubelet[2618]: W0513 23:56:58.152645 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.152770 kubelet[2618]: E0513 23:56:58.152658 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.167371 kubelet[2618]: E0513 23:56:58.167306 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.167920 kubelet[2618]: W0513 23:56:58.167458 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.167920 kubelet[2618]: E0513 23:56:58.167496 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.168653 kubelet[2618]: E0513 23:56:58.168638 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.168887 kubelet[2618]: W0513 23:56:58.168848 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.169005 kubelet[2618]: E0513 23:56:58.168969 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.180472 kubelet[2618]: E0513 23:56:58.180412 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.180777 kubelet[2618]: W0513 23:56:58.180757 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.180905 kubelet[2618]: E0513 23:56:58.180866 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.180957 containerd[1508]: time="2025-05-13T23:56:58.180893991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9mz9r,Uid:fa7d3570-a6f7-404c-8b38-c122e7c2308c,Namespace:calico-system,Attempt:0,}"
May 13 23:56:58.188685 kubelet[2618]: I0513 23:56:58.188506 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7c772b54-e083-47fe-b4ff-69706c6198e1-varrun\") pod \"csi-node-driver-vlkmq\" (UID: \"7c772b54-e083-47fe-b4ff-69706c6198e1\") " pod="calico-system/csi-node-driver-vlkmq"
May 13 23:56:58.188935 kubelet[2618]: E0513 23:56:58.188898 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.189018 kubelet[2618]: W0513 23:56:58.188982 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.189182 kubelet[2618]: E0513 23:56:58.189145 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.191820 kubelet[2618]: E0513 23:56:58.191790 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.192026 kubelet[2618]: W0513 23:56:58.192003 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.194075 kubelet[2618]: E0513 23:56:58.192961 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.194075 kubelet[2618]: E0513 23:56:58.193492 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.194075 kubelet[2618]: W0513 23:56:58.193506 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.194075 kubelet[2618]: E0513 23:56:58.193527 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.194075 kubelet[2618]: I0513 23:56:58.193565 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c772b54-e083-47fe-b4ff-69706c6198e1-kubelet-dir\") pod \"csi-node-driver-vlkmq\" (UID: \"7c772b54-e083-47fe-b4ff-69706c6198e1\") " pod="calico-system/csi-node-driver-vlkmq"
May 13 23:56:58.197560 kubelet[2618]: E0513 23:56:58.194899 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.197560 kubelet[2618]: W0513 23:56:58.194918 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.203600 kubelet[2618]: E0513 23:56:58.199830 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.203600 kubelet[2618]: I0513 23:56:58.200451 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7c772b54-e083-47fe-b4ff-69706c6198e1-socket-dir\") pod \"csi-node-driver-vlkmq\" (UID: \"7c772b54-e083-47fe-b4ff-69706c6198e1\") " pod="calico-system/csi-node-driver-vlkmq"
May 13 23:56:58.203600 kubelet[2618]: E0513 23:56:58.200790 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.203600 kubelet[2618]: W0513 23:56:58.200806 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.203600 kubelet[2618]: E0513 23:56:58.200953 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 23:56:58.203600 kubelet[2618]: E0513 23:56:58.201508 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:56:58.203600 kubelet[2618]: W0513 23:56:58.201545 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:56:58.203600 kubelet[2618]: E0513 23:56:58.201558 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" May 13 23:56:58.203600 kubelet[2618]: E0513 23:56:58.202175 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.204046 kubelet[2618]: W0513 23:56:58.202190 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.204046 kubelet[2618]: E0513 23:56:58.202202 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:56:58.204046 kubelet[2618]: E0513 23:56:58.202888 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.204046 kubelet[2618]: W0513 23:56:58.202900 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.204046 kubelet[2618]: E0513 23:56:58.203060 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:56:58.214892 kubelet[2618]: E0513 23:56:58.210746 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.214892 kubelet[2618]: W0513 23:56:58.210782 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.214892 kubelet[2618]: E0513 23:56:58.210818 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:56:58.214892 kubelet[2618]: I0513 23:56:58.210878 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7c772b54-e083-47fe-b4ff-69706c6198e1-registration-dir\") pod \"csi-node-driver-vlkmq\" (UID: \"7c772b54-e083-47fe-b4ff-69706c6198e1\") " pod="calico-system/csi-node-driver-vlkmq" May 13 23:56:58.217587 kubelet[2618]: E0513 23:56:58.216714 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.217587 kubelet[2618]: W0513 23:56:58.216749 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.217587 kubelet[2618]: E0513 23:56:58.216788 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:56:58.217587 kubelet[2618]: I0513 23:56:58.216831 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfcw8\" (UniqueName: \"kubernetes.io/projected/7c772b54-e083-47fe-b4ff-69706c6198e1-kube-api-access-nfcw8\") pod \"csi-node-driver-vlkmq\" (UID: \"7c772b54-e083-47fe-b4ff-69706c6198e1\") " pod="calico-system/csi-node-driver-vlkmq" May 13 23:56:58.222302 kubelet[2618]: E0513 23:56:58.219142 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.222302 kubelet[2618]: W0513 23:56:58.219172 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.222302 kubelet[2618]: E0513 23:56:58.219397 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:56:58.225540 kubelet[2618]: E0513 23:56:58.225487 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.225540 kubelet[2618]: W0513 23:56:58.225523 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.225850 kubelet[2618]: E0513 23:56:58.225560 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:56:58.226683 kubelet[2618]: E0513 23:56:58.226642 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.226683 kubelet[2618]: W0513 23:56:58.226662 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.226683 kubelet[2618]: E0513 23:56:58.226679 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:56:58.226948 kubelet[2618]: E0513 23:56:58.226927 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.226948 kubelet[2618]: W0513 23:56:58.226944 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.227049 kubelet[2618]: E0513 23:56:58.226955 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:56:58.267993 containerd[1508]: time="2025-05-13T23:56:58.267149221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86cc944ccb-jdm5g,Uid:efbc45b3-c90e-45a6-b812-e894cff23b32,Namespace:calico-system,Attempt:0,} returns sandbox id \"247936fcee7a3817cd9ee840bdca7c8e38fad830f337384312ff159e821b8835\"" May 13 23:56:58.269198 containerd[1508]: time="2025-05-13T23:56:58.268955131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 13 23:56:58.331620 kubelet[2618]: E0513 23:56:58.329541 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.331620 kubelet[2618]: W0513 23:56:58.329568 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.331620 kubelet[2618]: E0513 23:56:58.329594 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:56:58.332831 kubelet[2618]: E0513 23:56:58.332619 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.332831 kubelet[2618]: W0513 23:56:58.332638 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.332831 kubelet[2618]: E0513 23:56:58.332658 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:56:58.344469 kubelet[2618]: E0513 23:56:58.334947 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.344469 kubelet[2618]: W0513 23:56:58.334971 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.344980 kubelet[2618]: E0513 23:56:58.344736 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:56:58.345587 kubelet[2618]: E0513 23:56:58.345485 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.345587 kubelet[2618]: W0513 23:56:58.345508 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.345859 kubelet[2618]: E0513 23:56:58.345648 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:56:58.345859 kubelet[2618]: E0513 23:56:58.345821 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.345859 kubelet[2618]: W0513 23:56:58.345831 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.346133 kubelet[2618]: E0513 23:56:58.346097 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:56:58.346908 kubelet[2618]: E0513 23:56:58.346252 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.346908 kubelet[2618]: W0513 23:56:58.346265 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.346908 kubelet[2618]: E0513 23:56:58.346350 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:56:58.346908 kubelet[2618]: E0513 23:56:58.346835 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.346908 kubelet[2618]: W0513 23:56:58.346847 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.351309 kubelet[2618]: E0513 23:56:58.347441 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:56:58.351309 kubelet[2618]: E0513 23:56:58.348570 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.351309 kubelet[2618]: W0513 23:56:58.348585 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.351309 kubelet[2618]: E0513 23:56:58.348769 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:56:58.351309 kubelet[2618]: E0513 23:56:58.349264 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.351309 kubelet[2618]: W0513 23:56:58.349275 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.351309 kubelet[2618]: E0513 23:56:58.349341 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:56:58.351309 kubelet[2618]: E0513 23:56:58.349673 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.351309 kubelet[2618]: W0513 23:56:58.349684 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.351309 kubelet[2618]: E0513 23:56:58.349872 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:56:58.351607 kubelet[2618]: E0513 23:56:58.350994 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.351607 kubelet[2618]: W0513 23:56:58.351004 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.354170 kubelet[2618]: E0513 23:56:58.353994 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:56:58.354636 kubelet[2618]: E0513 23:56:58.354602 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.354685 kubelet[2618]: W0513 23:56:58.354666 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.356238 kubelet[2618]: E0513 23:56:58.354793 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:56:58.356238 kubelet[2618]: E0513 23:56:58.355620 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.356554 kubelet[2618]: W0513 23:56:58.356534 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.356709 kubelet[2618]: E0513 23:56:58.356691 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:56:58.359292 containerd[1508]: time="2025-05-13T23:56:58.358859098Z" level=info msg="connecting to shim 8a5b6f0323fbefd25969adcc79c32706bdafeb541c1e6da3fdb5e25a4f3ceb6a" address="unix:///run/containerd/s/98779629642c115391577e141f02514ec192d8ebfd48e12a2ee1550b2fc41076" namespace=k8s.io protocol=ttrpc version=3 May 13 23:56:58.359722 kubelet[2618]: E0513 23:56:58.359658 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.359939 kubelet[2618]: W0513 23:56:58.359724 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.364123 kubelet[2618]: E0513 23:56:58.362436 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:56:58.365352 kubelet[2618]: E0513 23:56:58.364587 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.365352 kubelet[2618]: W0513 23:56:58.364611 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.365352 kubelet[2618]: E0513 23:56:58.365047 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:56:58.366137 kubelet[2618]: E0513 23:56:58.365544 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.366137 kubelet[2618]: W0513 23:56:58.365554 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.366137 kubelet[2618]: E0513 23:56:58.365696 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:56:58.368183 kubelet[2618]: E0513 23:56:58.368131 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.368183 kubelet[2618]: W0513 23:56:58.368146 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.368318 kubelet[2618]: E0513 23:56:58.368271 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:56:58.373147 kubelet[2618]: E0513 23:56:58.369884 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.373147 kubelet[2618]: W0513 23:56:58.369924 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.373147 kubelet[2618]: E0513 23:56:58.370023 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:56:58.373147 kubelet[2618]: E0513 23:56:58.370449 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.373147 kubelet[2618]: W0513 23:56:58.370486 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.373147 kubelet[2618]: E0513 23:56:58.370544 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:56:58.374875 kubelet[2618]: E0513 23:56:58.373765 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.374955 kubelet[2618]: W0513 23:56:58.374883 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.375391 kubelet[2618]: E0513 23:56:58.375017 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:56:58.379132 kubelet[2618]: E0513 23:56:58.378935 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.379132 kubelet[2618]: W0513 23:56:58.378962 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.379132 kubelet[2618]: E0513 23:56:58.379096 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:56:58.379634 kubelet[2618]: E0513 23:56:58.379601 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.379634 kubelet[2618]: W0513 23:56:58.379621 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.379744 kubelet[2618]: E0513 23:56:58.379691 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:56:58.381313 kubelet[2618]: E0513 23:56:58.381278 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.381313 kubelet[2618]: W0513 23:56:58.381301 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.381447 kubelet[2618]: E0513 23:56:58.381387 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:56:58.381852 kubelet[2618]: E0513 23:56:58.381823 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.381852 kubelet[2618]: W0513 23:56:58.381841 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.381963 kubelet[2618]: E0513 23:56:58.381869 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:56:58.382538 kubelet[2618]: E0513 23:56:58.382513 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.382731 kubelet[2618]: W0513 23:56:58.382683 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.382731 kubelet[2618]: E0513 23:56:58.382702 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:56:58.404899 kubelet[2618]: E0513 23:56:58.404788 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:56:58.404899 kubelet[2618]: W0513 23:56:58.404818 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:56:58.404899 kubelet[2618]: E0513 23:56:58.404844 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:56:58.437062 systemd[1]: Started cri-containerd-8a5b6f0323fbefd25969adcc79c32706bdafeb541c1e6da3fdb5e25a4f3ceb6a.scope - libcontainer container 8a5b6f0323fbefd25969adcc79c32706bdafeb541c1e6da3fdb5e25a4f3ceb6a. 
May 13 23:56:58.500835 containerd[1508]: time="2025-05-13T23:56:58.500399279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9mz9r,Uid:fa7d3570-a6f7-404c-8b38-c122e7c2308c,Namespace:calico-system,Attempt:0,} returns sandbox id \"8a5b6f0323fbefd25969adcc79c32706bdafeb541c1e6da3fdb5e25a4f3ceb6a\"" May 13 23:57:00.307144 kubelet[2618]: E0513 23:57:00.304292 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlkmq" podUID="7c772b54-e083-47fe-b4ff-69706c6198e1" May 13 23:57:02.305879 kubelet[2618]: E0513 23:57:02.305764 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlkmq" podUID="7c772b54-e083-47fe-b4ff-69706c6198e1" May 13 23:57:04.310047 kubelet[2618]: E0513 23:57:04.309043 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlkmq" podUID="7c772b54-e083-47fe-b4ff-69706c6198e1" May 13 23:57:04.426256 containerd[1508]: time="2025-05-13T23:57:04.426166690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:04.430019 containerd[1508]: time="2025-05-13T23:57:04.429174847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 13 23:57:04.434717 containerd[1508]: time="2025-05-13T23:57:04.434649363Z" level=info msg="ImageCreate event 
name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:04.452519 containerd[1508]: time="2025-05-13T23:57:04.450047686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:04.452519 containerd[1508]: time="2025-05-13T23:57:04.451913662Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 6.182921602s" May 13 23:57:04.452519 containerd[1508]: time="2025-05-13T23:57:04.451971090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 13 23:57:04.456210 containerd[1508]: time="2025-05-13T23:57:04.455906475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 23:57:04.480237 containerd[1508]: time="2025-05-13T23:57:04.478442468Z" level=info msg="CreateContainer within sandbox \"247936fcee7a3817cd9ee840bdca7c8e38fad830f337384312ff159e821b8835\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 23:57:04.526097 containerd[1508]: time="2025-05-13T23:57:04.526019443Z" level=info msg="Container 6584bd2aff3daac56959852e23bc73f1a6860ecbbcd85bee3907593a6798d375: CDI devices from CRI Config.CDIDevices: []" May 13 23:57:04.564745 containerd[1508]: time="2025-05-13T23:57:04.564257921Z" level=info msg="CreateContainer within sandbox \"247936fcee7a3817cd9ee840bdca7c8e38fad830f337384312ff159e821b8835\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} 
returns container id \"6584bd2aff3daac56959852e23bc73f1a6860ecbbcd85bee3907593a6798d375\"" May 13 23:57:04.565393 containerd[1508]: time="2025-05-13T23:57:04.565295165Z" level=info msg="StartContainer for \"6584bd2aff3daac56959852e23bc73f1a6860ecbbcd85bee3907593a6798d375\"" May 13 23:57:04.567976 containerd[1508]: time="2025-05-13T23:57:04.567271349Z" level=info msg="connecting to shim 6584bd2aff3daac56959852e23bc73f1a6860ecbbcd85bee3907593a6798d375" address="unix:///run/containerd/s/8ea2685d4ff2dd8a8b32858b83121660cf17e7673805c8389a4c857aa11d87fe" protocol=ttrpc version=3 May 13 23:57:04.604290 systemd[1]: Started cri-containerd-6584bd2aff3daac56959852e23bc73f1a6860ecbbcd85bee3907593a6798d375.scope - libcontainer container 6584bd2aff3daac56959852e23bc73f1a6860ecbbcd85bee3907593a6798d375. May 13 23:57:04.688443 containerd[1508]: time="2025-05-13T23:57:04.688290745Z" level=info msg="StartContainer for \"6584bd2aff3daac56959852e23bc73f1a6860ecbbcd85bee3907593a6798d375\" returns successfully" May 13 23:57:05.513929 kubelet[2618]: I0513 23:57:05.513794 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-86cc944ccb-jdm5g" podStartSLOduration=2.327563536 podStartE2EDuration="8.513772162s" podCreationTimestamp="2025-05-13 23:56:57 +0000 UTC" firstStartedPulling="2025-05-13 23:56:58.268516392 +0000 UTC m=+14.151097512" lastFinishedPulling="2025-05-13 23:57:04.454725019 +0000 UTC m=+20.337306138" observedRunningTime="2025-05-13 23:57:05.512178249 +0000 UTC m=+21.394759368" watchObservedRunningTime="2025-05-13 23:57:05.513772162 +0000 UTC m=+21.396353301" May 13 23:57:05.522775 kubelet[2618]: E0513 23:57:05.522726 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.522775 kubelet[2618]: W0513 23:57:05.522761 2618 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.522775 kubelet[2618]: E0513 23:57:05.522788 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:05.523125 kubelet[2618]: E0513 23:57:05.523106 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.523125 kubelet[2618]: W0513 23:57:05.523122 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.523209 kubelet[2618]: E0513 23:57:05.523134 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:05.523406 kubelet[2618]: E0513 23:57:05.523387 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.523406 kubelet[2618]: W0513 23:57:05.523402 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.523491 kubelet[2618]: E0513 23:57:05.523413 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:05.523762 kubelet[2618]: E0513 23:57:05.523738 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.523762 kubelet[2618]: W0513 23:57:05.523753 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.523844 kubelet[2618]: E0513 23:57:05.523765 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:05.524054 kubelet[2618]: E0513 23:57:05.524035 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.524054 kubelet[2618]: W0513 23:57:05.524051 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.524135 kubelet[2618]: E0513 23:57:05.524064 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:05.524407 kubelet[2618]: E0513 23:57:05.524378 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.524407 kubelet[2618]: W0513 23:57:05.524397 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.524482 kubelet[2618]: E0513 23:57:05.524408 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:05.524684 kubelet[2618]: E0513 23:57:05.524661 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.524684 kubelet[2618]: W0513 23:57:05.524679 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.524780 kubelet[2618]: E0513 23:57:05.524692 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:05.524934 kubelet[2618]: E0513 23:57:05.524916 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.524934 kubelet[2618]: W0513 23:57:05.524930 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.525033 kubelet[2618]: E0513 23:57:05.524941 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:05.525219 kubelet[2618]: E0513 23:57:05.525200 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.525219 kubelet[2618]: W0513 23:57:05.525216 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.525294 kubelet[2618]: E0513 23:57:05.525228 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:05.525656 kubelet[2618]: E0513 23:57:05.525628 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.525656 kubelet[2618]: W0513 23:57:05.525645 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.525736 kubelet[2618]: E0513 23:57:05.525658 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:05.526306 kubelet[2618]: E0513 23:57:05.526280 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.526306 kubelet[2618]: W0513 23:57:05.526296 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.526403 kubelet[2618]: E0513 23:57:05.526308 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:05.526595 kubelet[2618]: E0513 23:57:05.526570 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.526595 kubelet[2618]: W0513 23:57:05.526585 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.526766 kubelet[2618]: E0513 23:57:05.526596 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:05.526827 kubelet[2618]: E0513 23:57:05.526810 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.526827 kubelet[2618]: W0513 23:57:05.526823 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.526896 kubelet[2618]: E0513 23:57:05.526835 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:05.527750 kubelet[2618]: E0513 23:57:05.527300 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.527750 kubelet[2618]: W0513 23:57:05.527335 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.527750 kubelet[2618]: E0513 23:57:05.527368 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:05.527750 kubelet[2618]: E0513 23:57:05.527677 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.527750 kubelet[2618]: W0513 23:57:05.527686 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.527750 kubelet[2618]: E0513 23:57:05.527698 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:05.578880 kubelet[2618]: E0513 23:57:05.578563 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.578880 kubelet[2618]: W0513 23:57:05.578605 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.578880 kubelet[2618]: E0513 23:57:05.578636 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:05.579244 kubelet[2618]: E0513 23:57:05.579212 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.579244 kubelet[2618]: W0513 23:57:05.579232 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.579244 kubelet[2618]: E0513 23:57:05.579245 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:05.579601 kubelet[2618]: E0513 23:57:05.579501 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.579601 kubelet[2618]: W0513 23:57:05.579519 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.579601 kubelet[2618]: E0513 23:57:05.579532 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:05.579781 kubelet[2618]: E0513 23:57:05.579760 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.579781 kubelet[2618]: W0513 23:57:05.579777 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.579864 kubelet[2618]: E0513 23:57:05.579790 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:05.580585 kubelet[2618]: E0513 23:57:05.580567 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.580774 kubelet[2618]: W0513 23:57:05.580664 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.580774 kubelet[2618]: E0513 23:57:05.580682 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:05.581003 kubelet[2618]: E0513 23:57:05.580989 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.581203 kubelet[2618]: W0513 23:57:05.581059 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.581203 kubelet[2618]: E0513 23:57:05.581072 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:05.581425 kubelet[2618]: E0513 23:57:05.581411 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.581511 kubelet[2618]: W0513 23:57:05.581497 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.583342 kubelet[2618]: E0513 23:57:05.581638 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:05.583342 kubelet[2618]: E0513 23:57:05.581819 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.590474 kubelet[2618]: W0513 23:57:05.585245 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.590474 kubelet[2618]: E0513 23:57:05.588672 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:05.593730 kubelet[2618]: E0513 23:57:05.593273 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.593730 kubelet[2618]: W0513 23:57:05.593313 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.593730 kubelet[2618]: E0513 23:57:05.593481 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:05.594071 kubelet[2618]: E0513 23:57:05.593796 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.594071 kubelet[2618]: W0513 23:57:05.593810 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.594071 kubelet[2618]: E0513 23:57:05.593852 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:05.594071 kubelet[2618]: E0513 23:57:05.594069 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.594249 kubelet[2618]: W0513 23:57:05.594101 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.594249 kubelet[2618]: E0513 23:57:05.594219 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:05.594419 kubelet[2618]: E0513 23:57:05.594378 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.594419 kubelet[2618]: W0513 23:57:05.594394 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.594419 kubelet[2618]: E0513 23:57:05.594409 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:05.594755 kubelet[2618]: E0513 23:57:05.594697 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.594755 kubelet[2618]: W0513 23:57:05.594724 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.594755 kubelet[2618]: E0513 23:57:05.594744 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:05.595884 kubelet[2618]: E0513 23:57:05.595164 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.595884 kubelet[2618]: W0513 23:57:05.595176 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.595884 kubelet[2618]: E0513 23:57:05.595196 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:05.595884 kubelet[2618]: E0513 23:57:05.595578 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.595884 kubelet[2618]: W0513 23:57:05.595593 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.595884 kubelet[2618]: E0513 23:57:05.595608 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:05.596159 kubelet[2618]: E0513 23:57:05.596022 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.596159 kubelet[2618]: W0513 23:57:05.596036 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.596159 kubelet[2618]: E0513 23:57:05.596057 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:05.597512 kubelet[2618]: E0513 23:57:05.597469 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.597512 kubelet[2618]: W0513 23:57:05.597496 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.597626 kubelet[2618]: E0513 23:57:05.597517 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:05.598136 kubelet[2618]: E0513 23:57:05.598112 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:05.598136 kubelet[2618]: W0513 23:57:05.598128 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:05.598280 kubelet[2618]: E0513 23:57:05.598139 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:06.305140 kubelet[2618]: E0513 23:57:06.304873 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlkmq" podUID="7c772b54-e083-47fe-b4ff-69706c6198e1" May 13 23:57:06.413713 containerd[1508]: time="2025-05-13T23:57:06.413532957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:06.418160 containerd[1508]: time="2025-05-13T23:57:06.417049851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 13 23:57:06.422296 containerd[1508]: time="2025-05-13T23:57:06.422235617Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:06.434510 containerd[1508]: time="2025-05-13T23:57:06.433685179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:06.434668 containerd[1508]: time="2025-05-13T23:57:06.434498351Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.978536191s" May 13 23:57:06.434668 containerd[1508]: time="2025-05-13T23:57:06.434559897Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 13 23:57:06.444113 containerd[1508]: time="2025-05-13T23:57:06.441968770Z" level=info msg="CreateContainer within sandbox \"8a5b6f0323fbefd25969adcc79c32706bdafeb541c1e6da3fdb5e25a4f3ceb6a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 23:57:06.507060 kubelet[2618]: I0513 23:57:06.506995 2618 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:57:06.507359 containerd[1508]: time="2025-05-13T23:57:06.507295807Z" level=info msg="Container ace762e8f616114fe84bdf74e5933cf526ab358740404339d762a0ddfa5602d6: CDI devices from CRI Config.CDIDevices: []" May 13 23:57:06.512532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1475653143.mount: Deactivated successfully. May 13 23:57:06.550637 containerd[1508]: time="2025-05-13T23:57:06.549661343Z" level=info msg="CreateContainer within sandbox \"8a5b6f0323fbefd25969adcc79c32706bdafeb541c1e6da3fdb5e25a4f3ceb6a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ace762e8f616114fe84bdf74e5933cf526ab358740404339d762a0ddfa5602d6\"" May 13 23:57:06.551151 kubelet[2618]: E0513 23:57:06.550649 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.551151 kubelet[2618]: W0513 23:57:06.550681 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.551151 kubelet[2618]: E0513 23:57:06.550715 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:06.551151 kubelet[2618]: E0513 23:57:06.551036 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.551151 kubelet[2618]: W0513 23:57:06.551048 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.551151 kubelet[2618]: E0513 23:57:06.551059 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:06.552461 kubelet[2618]: E0513 23:57:06.551484 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.552461 kubelet[2618]: W0513 23:57:06.551497 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.552461 kubelet[2618]: E0513 23:57:06.552312 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:06.552577 containerd[1508]: time="2025-05-13T23:57:06.551838653Z" level=info msg="StartContainer for \"ace762e8f616114fe84bdf74e5933cf526ab358740404339d762a0ddfa5602d6\"" May 13 23:57:06.552861 kubelet[2618]: E0513 23:57:06.552816 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.552913 kubelet[2618]: W0513 23:57:06.552902 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.552913 kubelet[2618]: E0513 23:57:06.552917 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:06.556626 containerd[1508]: time="2025-05-13T23:57:06.556336234Z" level=info msg="connecting to shim ace762e8f616114fe84bdf74e5933cf526ab358740404339d762a0ddfa5602d6" address="unix:///run/containerd/s/98779629642c115391577e141f02514ec192d8ebfd48e12a2ee1550b2fc41076" protocol=ttrpc version=3 May 13 23:57:06.557619 kubelet[2618]: E0513 23:57:06.557329 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.557619 kubelet[2618]: W0513 23:57:06.557374 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.557619 kubelet[2618]: E0513 23:57:06.557412 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:06.565918 kubelet[2618]: E0513 23:57:06.565821 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.566308 kubelet[2618]: W0513 23:57:06.566246 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.566657 kubelet[2618]: E0513 23:57:06.566618 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:06.567514 kubelet[2618]: E0513 23:57:06.567454 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.567689 kubelet[2618]: W0513 23:57:06.567481 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.567689 kubelet[2618]: E0513 23:57:06.567622 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:06.568471 kubelet[2618]: E0513 23:57:06.568402 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.568471 kubelet[2618]: W0513 23:57:06.568418 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.568471 kubelet[2618]: E0513 23:57:06.568432 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:06.569730 kubelet[2618]: E0513 23:57:06.569683 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.569730 kubelet[2618]: W0513 23:57:06.569699 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.569930 kubelet[2618]: E0513 23:57:06.569864 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:06.570358 kubelet[2618]: E0513 23:57:06.570328 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.570461 kubelet[2618]: W0513 23:57:06.570445 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.570627 kubelet[2618]: E0513 23:57:06.570543 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:06.571248 kubelet[2618]: E0513 23:57:06.571232 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.571459 kubelet[2618]: W0513 23:57:06.571379 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.571537 kubelet[2618]: E0513 23:57:06.571413 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:06.578145 kubelet[2618]: E0513 23:57:06.578056 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.578145 kubelet[2618]: W0513 23:57:06.578121 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.578406 kubelet[2618]: E0513 23:57:06.578171 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:06.581466 kubelet[2618]: E0513 23:57:06.580998 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.581466 kubelet[2618]: W0513 23:57:06.581027 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.581466 kubelet[2618]: E0513 23:57:06.581050 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:06.588041 kubelet[2618]: E0513 23:57:06.587293 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.588041 kubelet[2618]: W0513 23:57:06.587326 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.588041 kubelet[2618]: E0513 23:57:06.587358 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:06.588988 kubelet[2618]: E0513 23:57:06.588722 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.588988 kubelet[2618]: W0513 23:57:06.588739 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.588988 kubelet[2618]: E0513 23:57:06.588753 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:06.599638 kubelet[2618]: E0513 23:57:06.599300 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.599638 kubelet[2618]: W0513 23:57:06.599336 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.599638 kubelet[2618]: E0513 23:57:06.599370 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:06.600165 kubelet[2618]: E0513 23:57:06.599935 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.600165 kubelet[2618]: W0513 23:57:06.599973 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.600165 kubelet[2618]: E0513 23:57:06.600055 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:06.600887 kubelet[2618]: E0513 23:57:06.600378 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.600887 kubelet[2618]: W0513 23:57:06.600412 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.600887 kubelet[2618]: E0513 23:57:06.600435 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:06.601784 kubelet[2618]: E0513 23:57:06.601754 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.601784 kubelet[2618]: W0513 23:57:06.601780 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.601908 kubelet[2618]: E0513 23:57:06.601816 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:06.602854 kubelet[2618]: E0513 23:57:06.602832 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.602854 kubelet[2618]: W0513 23:57:06.602849 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.602984 kubelet[2618]: E0513 23:57:06.602924 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:06.603325 kubelet[2618]: E0513 23:57:06.603307 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.603325 kubelet[2618]: W0513 23:57:06.603322 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.603489 kubelet[2618]: E0513 23:57:06.603467 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:06.603852 kubelet[2618]: E0513 23:57:06.603821 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.603852 kubelet[2618]: W0513 23:57:06.603837 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.604160 kubelet[2618]: E0513 23:57:06.603929 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:06.604160 kubelet[2618]: E0513 23:57:06.604131 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.604160 kubelet[2618]: W0513 23:57:06.604143 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.604265 kubelet[2618]: E0513 23:57:06.604167 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:06.606379 kubelet[2618]: E0513 23:57:06.605542 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.606379 kubelet[2618]: W0513 23:57:06.605562 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.606379 kubelet[2618]: E0513 23:57:06.605581 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:06.605648 systemd[1]: Started cri-containerd-ace762e8f616114fe84bdf74e5933cf526ab358740404339d762a0ddfa5602d6.scope - libcontainer container ace762e8f616114fe84bdf74e5933cf526ab358740404339d762a0ddfa5602d6. May 13 23:57:06.607147 kubelet[2618]: E0513 23:57:06.607126 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.607147 kubelet[2618]: W0513 23:57:06.607145 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.607258 kubelet[2618]: E0513 23:57:06.607241 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:06.608387 kubelet[2618]: E0513 23:57:06.608320 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.608387 kubelet[2618]: W0513 23:57:06.608339 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.608525 kubelet[2618]: E0513 23:57:06.608439 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:06.609181 kubelet[2618]: E0513 23:57:06.609157 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.609426 kubelet[2618]: W0513 23:57:06.609403 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.609561 kubelet[2618]: E0513 23:57:06.609538 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:06.610186 kubelet[2618]: E0513 23:57:06.610170 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.610343 kubelet[2618]: W0513 23:57:06.610273 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.610480 kubelet[2618]: E0513 23:57:06.610408 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:06.611073 kubelet[2618]: E0513 23:57:06.610891 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.611073 kubelet[2618]: W0513 23:57:06.610905 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.611073 kubelet[2618]: E0513 23:57:06.610925 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:06.611925 kubelet[2618]: E0513 23:57:06.611801 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.611925 kubelet[2618]: W0513 23:57:06.611819 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.611925 kubelet[2618]: E0513 23:57:06.611834 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:06.615356 kubelet[2618]: E0513 23:57:06.615319 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.615976 kubelet[2618]: W0513 23:57:06.615498 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.615976 kubelet[2618]: E0513 23:57:06.615545 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:57:06.616411 kubelet[2618]: E0513 23:57:06.616397 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.616505 kubelet[2618]: W0513 23:57:06.616490 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.616728 kubelet[2618]: E0513 23:57:06.616701 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:06.616899 kubelet[2618]: E0513 23:57:06.616886 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:57:06.616980 kubelet[2618]: W0513 23:57:06.616967 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:57:06.617044 kubelet[2618]: E0513 23:57:06.617032 2618 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:57:06.705882 systemd[1]: cri-containerd-ace762e8f616114fe84bdf74e5933cf526ab358740404339d762a0ddfa5602d6.scope: Deactivated successfully. May 13 23:57:06.707385 systemd[1]: cri-containerd-ace762e8f616114fe84bdf74e5933cf526ab358740404339d762a0ddfa5602d6.scope: Consumed 58ms CPU time, 8.1M memory peak, 4M written to disk. 
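The FlexVolume error burst above follows one fixed chain: the kubelet probes the plugin directory `nodeagent~uds`, fails to execute the `uds` driver binary ("executable file not found in $PATH"), captures empty output, and then fails to parse that empty string as JSON ("unexpected end of JSON input"). A minimal Python sketch of the same chain, using the driver path from the log (Python's JSON error text differs from Go's, but the mechanics match):

```python
import json
import shutil

# Driver path taken verbatim from the kubelet log; it does not exist here.
driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

# Step 1: the executable cannot be resolved -- shutil.which returns None
# for a path that is absent or not executable.
found = shutil.which(driver)
print("driver resolved:", found)

# Step 2: with no process to run, the captured output is empty, and parsing
# the empty string as JSON fails (Go reports "unexpected end of JSON input";
# Python raises JSONDecodeError with "Expecting value").
output = ""
try:
    json.loads(output)
except json.JSONDecodeError as exc:
    print("unmarshal failed:", exc.msg)
```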
May 13 23:57:06.713578 containerd[1508]: time="2025-05-13T23:57:06.713470860Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ace762e8f616114fe84bdf74e5933cf526ab358740404339d762a0ddfa5602d6\" id:\"ace762e8f616114fe84bdf74e5933cf526ab358740404339d762a0ddfa5602d6\" pid:3316 exited_at:{seconds:1747180626 nanos:712697292}" May 13 23:57:06.718720 containerd[1508]: time="2025-05-13T23:57:06.718528183Z" level=info msg="received exit event container_id:\"ace762e8f616114fe84bdf74e5933cf526ab358740404339d762a0ddfa5602d6\" id:\"ace762e8f616114fe84bdf74e5933cf526ab358740404339d762a0ddfa5602d6\" pid:3316 exited_at:{seconds:1747180626 nanos:712697292}" May 13 23:57:06.723811 containerd[1508]: time="2025-05-13T23:57:06.723594044Z" level=info msg="StartContainer for \"ace762e8f616114fe84bdf74e5933cf526ab358740404339d762a0ddfa5602d6\" returns successfully" May 13 23:57:06.766035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ace762e8f616114fe84bdf74e5933cf526ab358740404339d762a0ddfa5602d6-rootfs.mount: Deactivated successfully. 
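The `exited_at:{seconds:1747180626 nanos:712697292}` field in the TaskExit events above is a Unix epoch timestamp; decoding it recovers the wall-clock time of the container exit and confirms it matches the surrounding journal timestamps:

```python
from datetime import datetime, timezone

# "exited_at:{seconds:1747180626 ...}" from the TaskExit event above
exited = datetime.fromtimestamp(1747180626, tz=timezone.utc)
print(exited.isoformat())  # 2025-05-13T23:57:06+00:00, matching the log
```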
May 13 23:57:07.514074 containerd[1508]: time="2025-05-13T23:57:07.513668762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 23:57:08.306386 kubelet[2618]: E0513 23:57:08.306317 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlkmq" podUID="7c772b54-e083-47fe-b4ff-69706c6198e1" May 13 23:57:10.308646 kubelet[2618]: E0513 23:57:10.308579 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlkmq" podUID="7c772b54-e083-47fe-b4ff-69706c6198e1" May 13 23:57:12.305604 kubelet[2618]: E0513 23:57:12.305159 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlkmq" podUID="7c772b54-e083-47fe-b4ff-69706c6198e1" May 13 23:57:14.306231 kubelet[2618]: E0513 23:57:14.304999 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlkmq" podUID="7c772b54-e083-47fe-b4ff-69706c6198e1" May 13 23:57:16.375366 kubelet[2618]: E0513 23:57:16.372949 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-vlkmq" podUID="7c772b54-e083-47fe-b4ff-69706c6198e1" May 13 23:57:16.574454 containerd[1508]: time="2025-05-13T23:57:16.570396515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:16.584831 containerd[1508]: time="2025-05-13T23:57:16.578585832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 13 23:57:16.599384 containerd[1508]: time="2025-05-13T23:57:16.595766761Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:16.618714 containerd[1508]: time="2025-05-13T23:57:16.617941861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:16.618714 containerd[1508]: time="2025-05-13T23:57:16.618501854Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 9.104782086s" May 13 23:57:16.618714 containerd[1508]: time="2025-05-13T23:57:16.618537010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 13 23:57:16.630182 containerd[1508]: time="2025-05-13T23:57:16.627681903Z" level=info msg="CreateContainer within sandbox \"8a5b6f0323fbefd25969adcc79c32706bdafeb541c1e6da3fdb5e25a4f3ceb6a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 23:57:16.668148 
containerd[1508]: time="2025-05-13T23:57:16.668005963Z" level=info msg="Container ec719d6c6068d89e42504b6240c35d1ab80c80a6733bec9ea0344442287e204f: CDI devices from CRI Config.CDIDevices: []" May 13 23:57:16.714734 containerd[1508]: time="2025-05-13T23:57:16.714456219Z" level=info msg="CreateContainer within sandbox \"8a5b6f0323fbefd25969adcc79c32706bdafeb541c1e6da3fdb5e25a4f3ceb6a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ec719d6c6068d89e42504b6240c35d1ab80c80a6733bec9ea0344442287e204f\"" May 13 23:57:16.721129 containerd[1508]: time="2025-05-13T23:57:16.718314252Z" level=info msg="StartContainer for \"ec719d6c6068d89e42504b6240c35d1ab80c80a6733bec9ea0344442287e204f\"" May 13 23:57:16.724710 containerd[1508]: time="2025-05-13T23:57:16.724657979Z" level=info msg="connecting to shim ec719d6c6068d89e42504b6240c35d1ab80c80a6733bec9ea0344442287e204f" address="unix:///run/containerd/s/98779629642c115391577e141f02514ec192d8ebfd48e12a2ee1550b2fc41076" protocol=ttrpc version=3 May 13 23:57:16.869702 systemd[1]: Started cri-containerd-ec719d6c6068d89e42504b6240c35d1ab80c80a6733bec9ea0344442287e204f.scope - libcontainer container ec719d6c6068d89e42504b6240c35d1ab80c80a6733bec9ea0344442287e204f. May 13 23:57:17.034827 containerd[1508]: time="2025-05-13T23:57:17.033250891Z" level=info msg="StartContainer for \"ec719d6c6068d89e42504b6240c35d1ab80c80a6733bec9ea0344442287e204f\" returns successfully" May 13 23:57:18.918872 kubelet[2618]: E0513 23:57:18.917998 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlkmq" podUID="7c772b54-e083-47fe-b4ff-69706c6198e1" May 13 23:57:19.375754 systemd[1]: cri-containerd-ec719d6c6068d89e42504b6240c35d1ab80c80a6733bec9ea0344442287e204f.scope: Deactivated successfully. 
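The pull of `ghcr.io/flatcar/calico/cni:v3.29.3` above reports 97793683 bytes read over 9.104782086s; a quick sanity check of the effective transfer rate, using those two figures from the containerd entries:

```python
# Figures taken from the "stop pulling image" / "Pulled image" entries above.
bytes_read = 97_793_683   # "bytes read=97793683"
duration_s = 9.104782086  # "in 9.104782086s"

throughput_mb_s = bytes_read / duration_s / 1e6
print(f"effective pull rate: {throughput_mb_s:.1f} MB/s")  # ~10.7 MB/s
```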
May 13 23:57:19.376202 systemd[1]: cri-containerd-ec719d6c6068d89e42504b6240c35d1ab80c80a6733bec9ea0344442287e204f.scope: Consumed 893ms CPU time, 161.4M memory peak, 8K read from disk, 154M written to disk. May 13 23:57:19.377390 containerd[1508]: time="2025-05-13T23:57:19.377284093Z" level=info msg="received exit event container_id:\"ec719d6c6068d89e42504b6240c35d1ab80c80a6733bec9ea0344442287e204f\" id:\"ec719d6c6068d89e42504b6240c35d1ab80c80a6733bec9ea0344442287e204f\" pid:3377 exited_at:{seconds:1747180639 nanos:376551456}" May 13 23:57:19.377824 containerd[1508]: time="2025-05-13T23:57:19.377595058Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec719d6c6068d89e42504b6240c35d1ab80c80a6733bec9ea0344442287e204f\" id:\"ec719d6c6068d89e42504b6240c35d1ab80c80a6733bec9ea0344442287e204f\" pid:3377 exited_at:{seconds:1747180639 nanos:376551456}" May 13 23:57:19.404856 kubelet[2618]: I0513 23:57:19.404818 2618 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 13 23:57:19.410006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec719d6c6068d89e42504b6240c35d1ab80c80a6733bec9ea0344442287e204f-rootfs.mount: Deactivated successfully. May 13 23:57:19.657420 systemd[1]: Created slice kubepods-burstable-pod8e973511_c167_487d_8439_3dff1098a9bc.slice - libcontainer container kubepods-burstable-pod8e973511_c167_487d_8439_3dff1098a9bc.slice. May 13 23:57:19.667604 systemd[1]: Created slice kubepods-besteffort-poddc39f3d9_d309_4c9a_aed4_69ee89dd9ff5.slice - libcontainer container kubepods-besteffort-poddc39f3d9_d309_4c9a_aed4_69ee89dd9ff5.slice. May 13 23:57:19.673624 systemd[1]: Created slice kubepods-burstable-pod8e2a314a_57a9_4fe0_a76d_c30ae0bf3c1e.slice - libcontainer container kubepods-burstable-pod8e2a314a_57a9_4fe0_a76d_c30ae0bf3c1e.slice. 
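The slice units systemd creates above encode the pod's QoS class and UID, with the dashes in the UID replaced by underscores (e.g. pod UID `8e973511-c167-487d-8439-3dff1098a9bc` becomes `kubepods-burstable-pod8e973511_c167_487d_8439_3dff1098a9bc.slice`). A hypothetical helper, `pod_slice_name`, mirroring that naming pattern as observed in this log:

```python
def pod_slice_name(uid: str, qos: str = "burstable") -> str:
    # Dashes in the pod UID become underscores in the systemd slice unit name,
    # matching the "Created slice kubepods-..." entries above.
    return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

print(pod_slice_name("8e973511-c167-487d-8439-3dff1098a9bc"))
print(pod_slice_name("dc39f3d9-d309-4c9a-aed4-69ee89dd9ff5", qos="besteffort"))
```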
May 13 23:57:19.680030 systemd[1]: Created slice kubepods-besteffort-pod86a2180a_f386_40eb_b31a_9655eeb5faa7.slice - libcontainer container kubepods-besteffort-pod86a2180a_f386_40eb_b31a_9655eeb5faa7.slice. May 13 23:57:19.685179 containerd[1508]: time="2025-05-13T23:57:19.685130153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 23:57:19.685895 systemd[1]: Created slice kubepods-besteffort-pod16e369ef_efb8_44a3_835c_07ff87a832ee.slice - libcontainer container kubepods-besteffort-pod16e369ef_efb8_44a3_835c_07ff87a832ee.slice. May 13 23:57:19.793572 kubelet[2618]: I0513 23:57:19.793479 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/16e369ef-efb8-44a3-835c-07ff87a832ee-calico-apiserver-certs\") pod \"calico-apiserver-7fd4c66cdb-9pqzz\" (UID: \"16e369ef-efb8-44a3-835c-07ff87a832ee\") " pod="calico-apiserver/calico-apiserver-7fd4c66cdb-9pqzz" May 13 23:57:19.793572 kubelet[2618]: I0513 23:57:19.793542 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7z48\" (UniqueName: \"kubernetes.io/projected/dc39f3d9-d309-4c9a-aed4-69ee89dd9ff5-kube-api-access-b7z48\") pod \"calico-apiserver-7fd4c66cdb-hv86l\" (UID: \"dc39f3d9-d309-4c9a-aed4-69ee89dd9ff5\") " pod="calico-apiserver/calico-apiserver-7fd4c66cdb-hv86l" May 13 23:57:19.793572 kubelet[2618]: I0513 23:57:19.793564 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt6r8\" (UniqueName: \"kubernetes.io/projected/8e973511-c167-487d-8439-3dff1098a9bc-kube-api-access-dt6r8\") pod \"coredns-6f6b679f8f-q9kl4\" (UID: \"8e973511-c167-487d-8439-3dff1098a9bc\") " pod="kube-system/coredns-6f6b679f8f-q9kl4" May 13 23:57:19.793957 kubelet[2618]: I0513 23:57:19.793921 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-qf5r5\" (UniqueName: \"kubernetes.io/projected/16e369ef-efb8-44a3-835c-07ff87a832ee-kube-api-access-qf5r5\") pod \"calico-apiserver-7fd4c66cdb-9pqzz\" (UID: \"16e369ef-efb8-44a3-835c-07ff87a832ee\") " pod="calico-apiserver/calico-apiserver-7fd4c66cdb-9pqzz" May 13 23:57:19.793957 kubelet[2618]: I0513 23:57:19.793955 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlj2z\" (UniqueName: \"kubernetes.io/projected/86a2180a-f386-40eb-b31a-9655eeb5faa7-kube-api-access-hlj2z\") pod \"calico-kube-controllers-94f549c-24cj7\" (UID: \"86a2180a-f386-40eb-b31a-9655eeb5faa7\") " pod="calico-system/calico-kube-controllers-94f549c-24cj7" May 13 23:57:19.794041 kubelet[2618]: I0513 23:57:19.793984 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbtjv\" (UniqueName: \"kubernetes.io/projected/8e2a314a-57a9-4fe0-a76d-c30ae0bf3c1e-kube-api-access-nbtjv\") pod \"coredns-6f6b679f8f-m2jkb\" (UID: \"8e2a314a-57a9-4fe0-a76d-c30ae0bf3c1e\") " pod="kube-system/coredns-6f6b679f8f-m2jkb" May 13 23:57:19.794140 kubelet[2618]: I0513 23:57:19.794117 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86a2180a-f386-40eb-b31a-9655eeb5faa7-tigera-ca-bundle\") pod \"calico-kube-controllers-94f549c-24cj7\" (UID: \"86a2180a-f386-40eb-b31a-9655eeb5faa7\") " pod="calico-system/calico-kube-controllers-94f549c-24cj7" May 13 23:57:19.794199 kubelet[2618]: I0513 23:57:19.794154 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dc39f3d9-d309-4c9a-aed4-69ee89dd9ff5-calico-apiserver-certs\") pod \"calico-apiserver-7fd4c66cdb-hv86l\" (UID: \"dc39f3d9-d309-4c9a-aed4-69ee89dd9ff5\") " 
pod="calico-apiserver/calico-apiserver-7fd4c66cdb-hv86l" May 13 23:57:19.794199 kubelet[2618]: I0513 23:57:19.794183 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e2a314a-57a9-4fe0-a76d-c30ae0bf3c1e-config-volume\") pod \"coredns-6f6b679f8f-m2jkb\" (UID: \"8e2a314a-57a9-4fe0-a76d-c30ae0bf3c1e\") " pod="kube-system/coredns-6f6b679f8f-m2jkb" May 13 23:57:19.794276 kubelet[2618]: I0513 23:57:19.794204 2618 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e973511-c167-487d-8439-3dff1098a9bc-config-volume\") pod \"coredns-6f6b679f8f-q9kl4\" (UID: \"8e973511-c167-487d-8439-3dff1098a9bc\") " pod="kube-system/coredns-6f6b679f8f-q9kl4" May 13 23:57:19.978025 containerd[1508]: time="2025-05-13T23:57:19.977981392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd4c66cdb-hv86l,Uid:dc39f3d9-d309-4c9a-aed4-69ee89dd9ff5,Namespace:calico-apiserver,Attempt:0,}" May 13 23:57:19.978437 containerd[1508]: time="2025-05-13T23:57:19.978418474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-m2jkb,Uid:8e2a314a-57a9-4fe0-a76d-c30ae0bf3c1e,Namespace:kube-system,Attempt:0,}" May 13 23:57:19.983989 containerd[1508]: time="2025-05-13T23:57:19.983967853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-94f549c-24cj7,Uid:86a2180a-f386-40eb-b31a-9655eeb5faa7,Namespace:calico-system,Attempt:0,}" May 13 23:57:19.990355 containerd[1508]: time="2025-05-13T23:57:19.990132419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd4c66cdb-9pqzz,Uid:16e369ef-efb8-44a3-835c-07ff87a832ee,Namespace:calico-apiserver,Attempt:0,}" May 13 23:57:20.238656 containerd[1508]: time="2025-05-13T23:57:20.238324984Z" level=error msg="Failed to destroy network for sandbox 
\"575b8dc144e5a1b503c5eaa6dc6b158f616e088d27b889f5e492fee5beb0b388\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:57:20.241581 containerd[1508]: time="2025-05-13T23:57:20.241031489Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd4c66cdb-hv86l,Uid:dc39f3d9-d309-4c9a-aed4-69ee89dd9ff5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"575b8dc144e5a1b503c5eaa6dc6b158f616e088d27b889f5e492fee5beb0b388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:57:20.242389 kubelet[2618]: E0513 23:57:20.242119 2618 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"575b8dc144e5a1b503c5eaa6dc6b158f616e088d27b889f5e492fee5beb0b388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:57:20.242389 kubelet[2618]: E0513 23:57:20.242219 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"575b8dc144e5a1b503c5eaa6dc6b158f616e088d27b889f5e492fee5beb0b388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd4c66cdb-hv86l" May 13 23:57:20.242389 kubelet[2618]: E0513 23:57:20.242244 2618 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"575b8dc144e5a1b503c5eaa6dc6b158f616e088d27b889f5e492fee5beb0b388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd4c66cdb-hv86l" May 13 23:57:20.243939 kubelet[2618]: E0513 23:57:20.242321 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fd4c66cdb-hv86l_calico-apiserver(dc39f3d9-d309-4c9a-aed4-69ee89dd9ff5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fd4c66cdb-hv86l_calico-apiserver(dc39f3d9-d309-4c9a-aed4-69ee89dd9ff5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"575b8dc144e5a1b503c5eaa6dc6b158f616e088d27b889f5e492fee5beb0b388\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fd4c66cdb-hv86l" podUID="dc39f3d9-d309-4c9a-aed4-69ee89dd9ff5" May 13 23:57:20.247544 containerd[1508]: time="2025-05-13T23:57:20.247466432Z" level=error msg="Failed to destroy network for sandbox \"3586b38087f7676c413088658ecf1cb1f5a4de9bc9b2d0fa9c721592084858be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:57:20.247660 containerd[1508]: time="2025-05-13T23:57:20.247468746Z" level=error msg="Failed to destroy network for sandbox \"f7020c087f0d5c05eec999186433bad6111b194628568c57b1adea4754fded35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:57:20.258695 containerd[1508]: 
time="2025-05-13T23:57:20.258613299Z" level=error msg="Failed to destroy network for sandbox \"0ce3657fb2d2189bcefb360d6f8ec0279a3de16c7acf6ea2a05dbf710f7b2b91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:57:20.266276 containerd[1508]: time="2025-05-13T23:57:20.266161033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-q9kl4,Uid:8e973511-c167-487d-8439-3dff1098a9bc,Namespace:kube-system,Attempt:0,}" May 13 23:57:20.302476 containerd[1508]: time="2025-05-13T23:57:20.302396376Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-94f549c-24cj7,Uid:86a2180a-f386-40eb-b31a-9655eeb5faa7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3586b38087f7676c413088658ecf1cb1f5a4de9bc9b2d0fa9c721592084858be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:57:20.302772 kubelet[2618]: E0513 23:57:20.302697 2618 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3586b38087f7676c413088658ecf1cb1f5a4de9bc9b2d0fa9c721592084858be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:57:20.302841 kubelet[2618]: E0513 23:57:20.302784 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3586b38087f7676c413088658ecf1cb1f5a4de9bc9b2d0fa9c721592084858be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-94f549c-24cj7" May 13 23:57:20.302841 kubelet[2618]: E0513 23:57:20.302808 2618 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3586b38087f7676c413088658ecf1cb1f5a4de9bc9b2d0fa9c721592084858be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-94f549c-24cj7" May 13 23:57:20.302902 kubelet[2618]: E0513 23:57:20.302863 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-94f549c-24cj7_calico-system(86a2180a-f386-40eb-b31a-9655eeb5faa7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-94f549c-24cj7_calico-system(86a2180a-f386-40eb-b31a-9655eeb5faa7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3586b38087f7676c413088658ecf1cb1f5a4de9bc9b2d0fa9c721592084858be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-94f549c-24cj7" podUID="86a2180a-f386-40eb-b31a-9655eeb5faa7" May 13 23:57:20.309931 systemd[1]: Created slice kubepods-besteffort-pod7c772b54_e083_47fe_b4ff_69706c6198e1.slice - libcontainer container kubepods-besteffort-pod7c772b54_e083_47fe_b4ff_69706c6198e1.slice. 
May 13 23:57:20.312381 containerd[1508]: time="2025-05-13T23:57:20.312347947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vlkmq,Uid:7c772b54-e083-47fe-b4ff-69706c6198e1,Namespace:calico-system,Attempt:0,}" May 13 23:57:20.368527 containerd[1508]: time="2025-05-13T23:57:20.368456756Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-m2jkb,Uid:8e2a314a-57a9-4fe0-a76d-c30ae0bf3c1e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7020c087f0d5c05eec999186433bad6111b194628568c57b1adea4754fded35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:57:20.368826 kubelet[2618]: E0513 23:57:20.368760 2618 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7020c087f0d5c05eec999186433bad6111b194628568c57b1adea4754fded35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:57:20.368892 kubelet[2618]: E0513 23:57:20.368843 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7020c087f0d5c05eec999186433bad6111b194628568c57b1adea4754fded35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-m2jkb" May 13 23:57:20.368892 kubelet[2618]: E0513 23:57:20.368868 2618 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f7020c087f0d5c05eec999186433bad6111b194628568c57b1adea4754fded35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-m2jkb" May 13 23:57:20.368953 kubelet[2618]: E0513 23:57:20.368922 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-m2jkb_kube-system(8e2a314a-57a9-4fe0-a76d-c30ae0bf3c1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-m2jkb_kube-system(8e2a314a-57a9-4fe0-a76d-c30ae0bf3c1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7020c087f0d5c05eec999186433bad6111b194628568c57b1adea4754fded35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-m2jkb" podUID="8e2a314a-57a9-4fe0-a76d-c30ae0bf3c1e" May 13 23:57:20.406365 containerd[1508]: time="2025-05-13T23:57:20.406287948Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd4c66cdb-9pqzz,Uid:16e369ef-efb8-44a3-835c-07ff87a832ee,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ce3657fb2d2189bcefb360d6f8ec0279a3de16c7acf6ea2a05dbf710f7b2b91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:57:20.406882 kubelet[2618]: E0513 23:57:20.406566 2618 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ce3657fb2d2189bcefb360d6f8ec0279a3de16c7acf6ea2a05dbf710f7b2b91\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:57:20.406882 kubelet[2618]: E0513 23:57:20.406626 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ce3657fb2d2189bcefb360d6f8ec0279a3de16c7acf6ea2a05dbf710f7b2b91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd4c66cdb-9pqzz" May 13 23:57:20.406882 kubelet[2618]: E0513 23:57:20.406653 2618 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ce3657fb2d2189bcefb360d6f8ec0279a3de16c7acf6ea2a05dbf710f7b2b91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd4c66cdb-9pqzz" May 13 23:57:20.406988 kubelet[2618]: E0513 23:57:20.406727 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fd4c66cdb-9pqzz_calico-apiserver(16e369ef-efb8-44a3-835c-07ff87a832ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fd4c66cdb-9pqzz_calico-apiserver(16e369ef-efb8-44a3-835c-07ff87a832ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ce3657fb2d2189bcefb360d6f8ec0279a3de16c7acf6ea2a05dbf710f7b2b91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fd4c66cdb-9pqzz" podUID="16e369ef-efb8-44a3-835c-07ff87a832ee" May 13 
23:57:20.465653 containerd[1508]: time="2025-05-13T23:57:20.465586312Z" level=error msg="Failed to destroy network for sandbox \"83d5f40039e60ded7c92c940106a49df6b11ecbfbb485a6191cb4c26f26939a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:57:20.468780 systemd[1]: run-netns-cni\x2d6a17c8c2\x2d53fc\x2d9b05\x2dcda8\x2deaa5cabc8c68.mount: Deactivated successfully. May 13 23:57:20.477278 containerd[1508]: time="2025-05-13T23:57:20.477209454Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-q9kl4,Uid:8e973511-c167-487d-8439-3dff1098a9bc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"83d5f40039e60ded7c92c940106a49df6b11ecbfbb485a6191cb4c26f26939a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:57:20.477520 kubelet[2618]: E0513 23:57:20.477467 2618 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83d5f40039e60ded7c92c940106a49df6b11ecbfbb485a6191cb4c26f26939a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:57:20.477575 kubelet[2618]: E0513 23:57:20.477539 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83d5f40039e60ded7c92c940106a49df6b11ecbfbb485a6191cb4c26f26939a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-6f6b679f8f-q9kl4" May 13 23:57:20.477575 kubelet[2618]: E0513 23:57:20.477566 2618 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83d5f40039e60ded7c92c940106a49df6b11ecbfbb485a6191cb4c26f26939a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-q9kl4" May 13 23:57:20.477654 kubelet[2618]: E0513 23:57:20.477617 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-q9kl4_kube-system(8e973511-c167-487d-8439-3dff1098a9bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-q9kl4_kube-system(8e973511-c167-487d-8439-3dff1098a9bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83d5f40039e60ded7c92c940106a49df6b11ecbfbb485a6191cb4c26f26939a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-q9kl4" podUID="8e973511-c167-487d-8439-3dff1098a9bc" May 13 23:57:20.478473 containerd[1508]: time="2025-05-13T23:57:20.478424166Z" level=error msg="Failed to destroy network for sandbox \"14b3989487dc3a602519161e928ac0bc74cbccc23f234f240bcf3d5e5becbec8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:57:20.481482 systemd[1]: run-netns-cni\x2d8aaec754\x2d8490\x2dd5c4\x2d963e\x2d9fb92cd94d67.mount: Deactivated successfully. 
May 13 23:57:20.498893 containerd[1508]: time="2025-05-13T23:57:20.498724845Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vlkmq,Uid:7c772b54-e083-47fe-b4ff-69706c6198e1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"14b3989487dc3a602519161e928ac0bc74cbccc23f234f240bcf3d5e5becbec8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:57:20.499058 kubelet[2618]: E0513 23:57:20.499013 2618 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14b3989487dc3a602519161e928ac0bc74cbccc23f234f240bcf3d5e5becbec8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:57:20.499154 kubelet[2618]: E0513 23:57:20.499077 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14b3989487dc3a602519161e928ac0bc74cbccc23f234f240bcf3d5e5becbec8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vlkmq" May 13 23:57:20.499202 kubelet[2618]: E0513 23:57:20.499153 2618 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14b3989487dc3a602519161e928ac0bc74cbccc23f234f240bcf3d5e5becbec8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vlkmq" 
May 13 23:57:20.499239 kubelet[2618]: E0513 23:57:20.499198 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vlkmq_calico-system(7c772b54-e083-47fe-b4ff-69706c6198e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vlkmq_calico-system(7c772b54-e083-47fe-b4ff-69706c6198e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14b3989487dc3a602519161e928ac0bc74cbccc23f234f240bcf3d5e5becbec8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vlkmq" podUID="7c772b54-e083-47fe-b4ff-69706c6198e1" May 13 23:57:25.449632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount449854271.mount: Deactivated successfully. May 13 23:57:27.274677 containerd[1508]: time="2025-05-13T23:57:27.274576259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:27.281612 containerd[1508]: time="2025-05-13T23:57:27.281512447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 13 23:57:27.290146 containerd[1508]: time="2025-05-13T23:57:27.290048539Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:27.297149 containerd[1508]: time="2025-05-13T23:57:27.297046212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:27.297581 containerd[1508]: time="2025-05-13T23:57:27.297533157Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 7.612350314s" May 13 23:57:27.297647 containerd[1508]: time="2025-05-13T23:57:27.297588561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 13 23:57:27.310238 containerd[1508]: time="2025-05-13T23:57:27.310192453Z" level=info msg="CreateContainer within sandbox \"8a5b6f0323fbefd25969adcc79c32706bdafeb541c1e6da3fdb5e25a4f3ceb6a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 23:57:27.369428 containerd[1508]: time="2025-05-13T23:57:27.369360723Z" level=info msg="Container 6f605b59f847f2a5ae09fb8dadcf071366b906c465cb9561ffc2f2504237c503: CDI devices from CRI Config.CDIDevices: []" May 13 23:57:27.426849 containerd[1508]: time="2025-05-13T23:57:27.426797319Z" level=info msg="CreateContainer within sandbox \"8a5b6f0323fbefd25969adcc79c32706bdafeb541c1e6da3fdb5e25a4f3ceb6a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6f605b59f847f2a5ae09fb8dadcf071366b906c465cb9561ffc2f2504237c503\"" May 13 23:57:27.428291 containerd[1508]: time="2025-05-13T23:57:27.428247433Z" level=info msg="StartContainer for \"6f605b59f847f2a5ae09fb8dadcf071366b906c465cb9561ffc2f2504237c503\"" May 13 23:57:27.429904 containerd[1508]: time="2025-05-13T23:57:27.429868598Z" level=info msg="connecting to shim 6f605b59f847f2a5ae09fb8dadcf071366b906c465cb9561ffc2f2504237c503" address="unix:///run/containerd/s/98779629642c115391577e141f02514ec192d8ebfd48e12a2ee1550b2fc41076" protocol=ttrpc version=3 May 13 23:57:27.451234 systemd[1]: Started 
cri-containerd-6f605b59f847f2a5ae09fb8dadcf071366b906c465cb9561ffc2f2504237c503.scope - libcontainer container 6f605b59f847f2a5ae09fb8dadcf071366b906c465cb9561ffc2f2504237c503. May 13 23:57:27.516035 containerd[1508]: time="2025-05-13T23:57:27.515928024Z" level=info msg="StartContainer for \"6f605b59f847f2a5ae09fb8dadcf071366b906c465cb9561ffc2f2504237c503\" returns successfully" May 13 23:57:27.586323 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 23:57:27.586543 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 13 23:57:27.630629 kubelet[2618]: I0513 23:57:27.630587 2618 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:57:27.777936 kubelet[2618]: I0513 23:57:27.777843 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9mz9r" podStartSLOduration=1.98166274 podStartE2EDuration="30.777824465s" podCreationTimestamp="2025-05-13 23:56:57 +0000 UTC" firstStartedPulling="2025-05-13 23:56:58.502263638 +0000 UTC m=+14.384844757" lastFinishedPulling="2025-05-13 23:57:27.298425363 +0000 UTC m=+43.181006482" observedRunningTime="2025-05-13 23:57:27.776542277 +0000 UTC m=+43.659123396" watchObservedRunningTime="2025-05-13 23:57:27.777824465 +0000 UTC m=+43.660405584" May 13 23:57:27.782564 containerd[1508]: time="2025-05-13T23:57:27.782488124Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6f605b59f847f2a5ae09fb8dadcf071366b906c465cb9561ffc2f2504237c503\" id:\"34ee8ca18ade8c66f9069be2c1e860a1448c3e2278c5dd23e191e7f8e81e9068\" pid:3693 exit_status:1 exited_at:{seconds:1747180647 nanos:781771418}" May 13 23:57:28.771340 containerd[1508]: time="2025-05-13T23:57:28.771228672Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6f605b59f847f2a5ae09fb8dadcf071366b906c465cb9561ffc2f2504237c503\" id:\"3b7ace54fd0cf1d2b0193bf3a95b5c0c89e5f5e5734ccb3c984083f7831be9ed\" pid:3744 
exit_status:1 exited_at:{seconds:1747180648 nanos:770767095}" May 13 23:57:29.245135 kernel: bpftool[3883]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 13 23:57:29.507028 systemd-networkd[1441]: vxlan.calico: Link UP May 13 23:57:29.507043 systemd-networkd[1441]: vxlan.calico: Gained carrier May 13 23:57:31.151430 systemd-networkd[1441]: vxlan.calico: Gained IPv6LL May 13 23:57:31.305196 containerd[1508]: time="2025-05-13T23:57:31.305136533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-q9kl4,Uid:8e973511-c167-487d-8439-3dff1098a9bc,Namespace:kube-system,Attempt:0,}" May 13 23:57:32.219488 systemd-networkd[1441]: cali9d2b333735c: Link UP May 13 23:57:32.219703 systemd-networkd[1441]: cali9d2b333735c: Gained carrier May 13 23:57:32.233651 containerd[1508]: 2025-05-13 23:57:32.070 [INFO][3959] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--q9kl4-eth0 coredns-6f6b679f8f- kube-system 8e973511-c167-487d-8439-3dff1098a9bc 702 0 2025-05-13 23:56:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-q9kl4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9d2b333735c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d" Namespace="kube-system" Pod="coredns-6f6b679f8f-q9kl4" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--q9kl4-" May 13 23:57:32.233651 containerd[1508]: 2025-05-13 23:57:32.071 [INFO][3959] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d" Namespace="kube-system" Pod="coredns-6f6b679f8f-q9kl4" 
WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--q9kl4-eth0" May 13 23:57:32.233651 containerd[1508]: 2025-05-13 23:57:32.150 [INFO][3973] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d" HandleID="k8s-pod-network.191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d" Workload="localhost-k8s-coredns--6f6b679f8f--q9kl4-eth0" May 13 23:57:32.233949 containerd[1508]: 2025-05-13 23:57:32.165 [INFO][3973] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d" HandleID="k8s-pod-network.191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d" Workload="localhost-k8s-coredns--6f6b679f8f--q9kl4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c60e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-q9kl4", "timestamp":"2025-05-13 23:57:32.15074188 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:57:32.233949 containerd[1508]: 2025-05-13 23:57:32.165 [INFO][3973] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:57:32.233949 containerd[1508]: 2025-05-13 23:57:32.166 [INFO][3973] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:57:32.233949 containerd[1508]: 2025-05-13 23:57:32.166 [INFO][3973] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 23:57:32.233949 containerd[1508]: 2025-05-13 23:57:32.168 [INFO][3973] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d" host="localhost" May 13 23:57:32.233949 containerd[1508]: 2025-05-13 23:57:32.179 [INFO][3973] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 23:57:32.233949 containerd[1508]: 2025-05-13 23:57:32.185 [INFO][3973] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 23:57:32.233949 containerd[1508]: 2025-05-13 23:57:32.187 [INFO][3973] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 23:57:32.233949 containerd[1508]: 2025-05-13 23:57:32.192 [INFO][3973] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 23:57:32.233949 containerd[1508]: 2025-05-13 23:57:32.192 [INFO][3973] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d" host="localhost" May 13 23:57:32.234496 containerd[1508]: 2025-05-13 23:57:32.193 [INFO][3973] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d May 13 23:57:32.234496 containerd[1508]: 2025-05-13 23:57:32.200 [INFO][3973] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d" host="localhost" May 13 23:57:32.234496 containerd[1508]: 2025-05-13 23:57:32.207 [INFO][3973] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d" host="localhost" May 13 23:57:32.234496 containerd[1508]: 2025-05-13 23:57:32.207 [INFO][3973] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d" host="localhost" May 13 23:57:32.234496 containerd[1508]: 2025-05-13 23:57:32.208 [INFO][3973] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:57:32.234496 containerd[1508]: 2025-05-13 23:57:32.208 [INFO][3973] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d" HandleID="k8s-pod-network.191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d" Workload="localhost-k8s-coredns--6f6b679f8f--q9kl4-eth0" May 13 23:57:32.234824 containerd[1508]: 2025-05-13 23:57:32.212 [INFO][3959] cni-plugin/k8s.go 386: Populated endpoint ContainerID="191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d" Namespace="kube-system" Pod="coredns-6f6b679f8f-q9kl4" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--q9kl4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--q9kl4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"8e973511-c167-487d-8439-3dff1098a9bc", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 56, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-q9kl4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d2b333735c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:57:32.234901 containerd[1508]: 2025-05-13 23:57:32.212 [INFO][3959] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d" Namespace="kube-system" Pod="coredns-6f6b679f8f-q9kl4" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--q9kl4-eth0" May 13 23:57:32.234901 containerd[1508]: 2025-05-13 23:57:32.212 [INFO][3959] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d2b333735c ContainerID="191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d" Namespace="kube-system" Pod="coredns-6f6b679f8f-q9kl4" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--q9kl4-eth0" May 13 23:57:32.234901 containerd[1508]: 2025-05-13 23:57:32.220 [INFO][3959] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d" Namespace="kube-system" Pod="coredns-6f6b679f8f-q9kl4" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--q9kl4-eth0" May 13 
23:57:32.235012 containerd[1508]: 2025-05-13 23:57:32.221 [INFO][3959] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d" Namespace="kube-system" Pod="coredns-6f6b679f8f-q9kl4" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--q9kl4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--q9kl4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"8e973511-c167-487d-8439-3dff1098a9bc", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 56, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d", Pod:"coredns-6f6b679f8f-q9kl4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d2b333735c", MAC:"b6:5b:53:2c:1a:7b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:57:32.235012 containerd[1508]: 2025-05-13 23:57:32.229 [INFO][3959] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d" Namespace="kube-system" Pod="coredns-6f6b679f8f-q9kl4" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--q9kl4-eth0" May 13 23:57:32.305593 containerd[1508]: time="2025-05-13T23:57:32.305320784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd4c66cdb-hv86l,Uid:dc39f3d9-d309-4c9a-aed4-69ee89dd9ff5,Namespace:calico-apiserver,Attempt:0,}" May 13 23:57:32.305593 containerd[1508]: time="2025-05-13T23:57:32.305380055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-94f549c-24cj7,Uid:86a2180a-f386-40eb-b31a-9655eeb5faa7,Namespace:calico-system,Attempt:0,}" May 13 23:57:32.305593 containerd[1508]: time="2025-05-13T23:57:32.305589468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-m2jkb,Uid:8e2a314a-57a9-4fe0-a76d-c30ae0bf3c1e,Namespace:kube-system,Attempt:0,}" May 13 23:57:32.462435 containerd[1508]: time="2025-05-13T23:57:32.462376016Z" level=info msg="connecting to shim 191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d" address="unix:///run/containerd/s/2799999591030607c010f0f1c0d118195aa1ec63ebd9066673478db990dc603c" namespace=k8s.io protocol=ttrpc version=3 May 13 23:57:32.478358 systemd-networkd[1441]: cali9659fddd597: Link UP May 13 23:57:32.480497 systemd-networkd[1441]: cali9659fddd597: Gained carrier May 13 23:57:32.503195 containerd[1508]: 2025-05-13 23:57:32.381 [INFO][4020] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--m2jkb-eth0 coredns-6f6b679f8f- 
kube-system 8e2a314a-57a9-4fe0-a76d-c30ae0bf3c1e 696 0 2025-05-13 23:56:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-m2jkb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9659fddd597 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca" Namespace="kube-system" Pod="coredns-6f6b679f8f-m2jkb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--m2jkb-" May 13 23:57:32.503195 containerd[1508]: 2025-05-13 23:57:32.381 [INFO][4020] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca" Namespace="kube-system" Pod="coredns-6f6b679f8f-m2jkb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--m2jkb-eth0" May 13 23:57:32.503195 containerd[1508]: 2025-05-13 23:57:32.424 [INFO][4045] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca" HandleID="k8s-pod-network.4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca" Workload="localhost-k8s-coredns--6f6b679f8f--m2jkb-eth0" May 13 23:57:32.503195 containerd[1508]: 2025-05-13 23:57:32.437 [INFO][4045] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca" HandleID="k8s-pod-network.4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca" Workload="localhost-k8s-coredns--6f6b679f8f--m2jkb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000305720), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-m2jkb", "timestamp":"2025-05-13 23:57:32.424276277 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:57:32.503195 containerd[1508]: 2025-05-13 23:57:32.437 [INFO][4045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:57:32.503195 containerd[1508]: 2025-05-13 23:57:32.437 [INFO][4045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:57:32.503195 containerd[1508]: 2025-05-13 23:57:32.437 [INFO][4045] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 23:57:32.503195 containerd[1508]: 2025-05-13 23:57:32.439 [INFO][4045] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca" host="localhost" May 13 23:57:32.503195 containerd[1508]: 2025-05-13 23:57:32.444 [INFO][4045] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 23:57:32.503195 containerd[1508]: 2025-05-13 23:57:32.448 [INFO][4045] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 23:57:32.503195 containerd[1508]: 2025-05-13 23:57:32.450 [INFO][4045] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 23:57:32.503195 containerd[1508]: 2025-05-13 23:57:32.451 [INFO][4045] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 23:57:32.503195 containerd[1508]: 2025-05-13 23:57:32.451 [INFO][4045] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca" host="localhost" May 13 23:57:32.503195 containerd[1508]: 2025-05-13 23:57:32.453 [INFO][4045] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca May 13 23:57:32.503195 
containerd[1508]: 2025-05-13 23:57:32.457 [INFO][4045] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca" host="localhost" May 13 23:57:32.503195 containerd[1508]: 2025-05-13 23:57:32.464 [INFO][4045] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca" host="localhost" May 13 23:57:32.503195 containerd[1508]: 2025-05-13 23:57:32.464 [INFO][4045] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca" host="localhost" May 13 23:57:32.503195 containerd[1508]: 2025-05-13 23:57:32.465 [INFO][4045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:57:32.503195 containerd[1508]: 2025-05-13 23:57:32.465 [INFO][4045] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca" HandleID="k8s-pod-network.4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca" Workload="localhost-k8s-coredns--6f6b679f8f--m2jkb-eth0" May 13 23:57:32.503890 containerd[1508]: 2025-05-13 23:57:32.472 [INFO][4020] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca" Namespace="kube-system" Pod="coredns-6f6b679f8f-m2jkb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--m2jkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--m2jkb-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"8e2a314a-57a9-4fe0-a76d-c30ae0bf3c1e", ResourceVersion:"696", Generation:0, 
CreationTimestamp:time.Date(2025, time.May, 13, 23, 56, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-m2jkb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9659fddd597", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:57:32.503890 containerd[1508]: 2025-05-13 23:57:32.472 [INFO][4020] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca" Namespace="kube-system" Pod="coredns-6f6b679f8f-m2jkb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--m2jkb-eth0" May 13 23:57:32.503890 containerd[1508]: 2025-05-13 23:57:32.472 [INFO][4020] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9659fddd597 ContainerID="4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca" Namespace="kube-system" 
Pod="coredns-6f6b679f8f-m2jkb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--m2jkb-eth0" May 13 23:57:32.503890 containerd[1508]: 2025-05-13 23:57:32.479 [INFO][4020] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca" Namespace="kube-system" Pod="coredns-6f6b679f8f-m2jkb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--m2jkb-eth0" May 13 23:57:32.503890 containerd[1508]: 2025-05-13 23:57:32.481 [INFO][4020] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca" Namespace="kube-system" Pod="coredns-6f6b679f8f-m2jkb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--m2jkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--m2jkb-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"8e2a314a-57a9-4fe0-a76d-c30ae0bf3c1e", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 56, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca", Pod:"coredns-6f6b679f8f-m2jkb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9659fddd597", MAC:"8e:c0:d0:96:23:ed", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:57:32.503890 containerd[1508]: 2025-05-13 23:57:32.499 [INFO][4020] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca" Namespace="kube-system" Pod="coredns-6f6b679f8f-m2jkb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--m2jkb-eth0" May 13 23:57:32.537300 systemd[1]: Started cri-containerd-191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d.scope - libcontainer container 191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d. May 13 23:57:32.557225 containerd[1508]: time="2025-05-13T23:57:32.556785680Z" level=info msg="connecting to shim 4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca" address="unix:///run/containerd/s/e63cc5aeb86c4b4d237e3083c97d9a3bb75cff095274306ee690e488b520aa7e" namespace=k8s.io protocol=ttrpc version=3 May 13 23:57:32.557696 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:57:32.595255 systemd[1]: Started cri-containerd-4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca.scope - libcontainer container 4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca. 
May 13 23:57:32.601962 systemd-networkd[1441]: cali0f64ecdfa89: Link UP May 13 23:57:32.602244 systemd-networkd[1441]: cali0f64ecdfa89: Gained carrier May 13 23:57:32.613973 containerd[1508]: time="2025-05-13T23:57:32.613636609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-q9kl4,Uid:8e973511-c167-487d-8439-3dff1098a9bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d\"" May 13 23:57:32.617380 containerd[1508]: time="2025-05-13T23:57:32.617346404Z" level=info msg="CreateContainer within sandbox \"191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:57:32.618831 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:57:32.627157 containerd[1508]: 2025-05-13 23:57:32.380 [INFO][3997] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7fd4c66cdb--hv86l-eth0 calico-apiserver-7fd4c66cdb- calico-apiserver dc39f3d9-d309-4c9a-aed4-69ee89dd9ff5 698 0 2025-05-13 23:56:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fd4c66cdb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7fd4c66cdb-hv86l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0f64ecdfa89 [] []}} ContainerID="360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5" Namespace="calico-apiserver" Pod="calico-apiserver-7fd4c66cdb-hv86l" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd4c66cdb--hv86l-" May 13 23:57:32.627157 containerd[1508]: 2025-05-13 23:57:32.380 [INFO][3997] cni-plugin/k8s.go 77: Extracted identifiers 
for CmdAddK8s ContainerID="360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5" Namespace="calico-apiserver" Pod="calico-apiserver-7fd4c66cdb-hv86l" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd4c66cdb--hv86l-eth0" May 13 23:57:32.627157 containerd[1508]: 2025-05-13 23:57:32.440 [INFO][4052] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5" HandleID="k8s-pod-network.360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5" Workload="localhost-k8s-calico--apiserver--7fd4c66cdb--hv86l-eth0" May 13 23:57:32.627157 containerd[1508]: 2025-05-13 23:57:32.447 [INFO][4052] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5" HandleID="k8s-pod-network.360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5" Workload="localhost-k8s-calico--apiserver--7fd4c66cdb--hv86l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f4900), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7fd4c66cdb-hv86l", "timestamp":"2025-05-13 23:57:32.4407662 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:57:32.627157 containerd[1508]: 2025-05-13 23:57:32.447 [INFO][4052] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:57:32.627157 containerd[1508]: 2025-05-13 23:57:32.465 [INFO][4052] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:57:32.627157 containerd[1508]: 2025-05-13 23:57:32.465 [INFO][4052] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 23:57:32.627157 containerd[1508]: 2025-05-13 23:57:32.540 [INFO][4052] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5" host="localhost" May 13 23:57:32.627157 containerd[1508]: 2025-05-13 23:57:32.546 [INFO][4052] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 23:57:32.627157 containerd[1508]: 2025-05-13 23:57:32.552 [INFO][4052] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 23:57:32.627157 containerd[1508]: 2025-05-13 23:57:32.554 [INFO][4052] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 23:57:32.627157 containerd[1508]: 2025-05-13 23:57:32.557 [INFO][4052] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 23:57:32.627157 containerd[1508]: 2025-05-13 23:57:32.557 [INFO][4052] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5" host="localhost" May 13 23:57:32.627157 containerd[1508]: 2025-05-13 23:57:32.559 [INFO][4052] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5 May 13 23:57:32.627157 containerd[1508]: 2025-05-13 23:57:32.566 [INFO][4052] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5" host="localhost" May 13 23:57:32.627157 containerd[1508]: 2025-05-13 23:57:32.576 [INFO][4052] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5" host="localhost" May 13 23:57:32.627157 containerd[1508]: 2025-05-13 23:57:32.577 [INFO][4052] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5" host="localhost" May 13 23:57:32.627157 containerd[1508]: 2025-05-13 23:57:32.577 [INFO][4052] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:57:32.627157 containerd[1508]: 2025-05-13 23:57:32.577 [INFO][4052] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5" HandleID="k8s-pod-network.360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5" Workload="localhost-k8s-calico--apiserver--7fd4c66cdb--hv86l-eth0" May 13 23:57:32.627857 containerd[1508]: 2025-05-13 23:57:32.586 [INFO][3997] cni-plugin/k8s.go 386: Populated endpoint ContainerID="360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5" Namespace="calico-apiserver" Pod="calico-apiserver-7fd4c66cdb-hv86l" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd4c66cdb--hv86l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7fd4c66cdb--hv86l-eth0", GenerateName:"calico-apiserver-7fd4c66cdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"dc39f3d9-d309-4c9a-aed4-69ee89dd9ff5", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 56, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fd4c66cdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7fd4c66cdb-hv86l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f64ecdfa89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:57:32.627857 containerd[1508]: 2025-05-13 23:57:32.586 [INFO][3997] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5" Namespace="calico-apiserver" Pod="calico-apiserver-7fd4c66cdb-hv86l" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd4c66cdb--hv86l-eth0" May 13 23:57:32.627857 containerd[1508]: 2025-05-13 23:57:32.586 [INFO][3997] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f64ecdfa89 ContainerID="360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5" Namespace="calico-apiserver" Pod="calico-apiserver-7fd4c66cdb-hv86l" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd4c66cdb--hv86l-eth0" May 13 23:57:32.627857 containerd[1508]: 2025-05-13 23:57:32.602 [INFO][3997] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5" Namespace="calico-apiserver" Pod="calico-apiserver-7fd4c66cdb-hv86l" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd4c66cdb--hv86l-eth0" May 13 23:57:32.627857 containerd[1508]: 2025-05-13 23:57:32.603 [INFO][3997] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5" Namespace="calico-apiserver" Pod="calico-apiserver-7fd4c66cdb-hv86l" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd4c66cdb--hv86l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7fd4c66cdb--hv86l-eth0", GenerateName:"calico-apiserver-7fd4c66cdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"dc39f3d9-d309-4c9a-aed4-69ee89dd9ff5", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 56, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fd4c66cdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5", Pod:"calico-apiserver-7fd4c66cdb-hv86l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f64ecdfa89", MAC:"72:4e:59:ab:58:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:57:32.627857 containerd[1508]: 2025-05-13 23:57:32.616 [INFO][3997] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5" 
Namespace="calico-apiserver" Pod="calico-apiserver-7fd4c66cdb-hv86l" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd4c66cdb--hv86l-eth0" May 13 23:57:32.631764 containerd[1508]: time="2025-05-13T23:57:32.631676242Z" level=info msg="Container 15247b346fb760172b6cff0e03c8bc199584a1c3beba9b67d481231e9f91601d: CDI devices from CRI Config.CDIDevices: []" May 13 23:57:32.640844 containerd[1508]: time="2025-05-13T23:57:32.640799953Z" level=info msg="CreateContainer within sandbox \"191139b05d9fe27f1aeea662a651159635d8db8216abe6f4e1364278d555e56d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"15247b346fb760172b6cff0e03c8bc199584a1c3beba9b67d481231e9f91601d\"" May 13 23:57:32.642444 containerd[1508]: time="2025-05-13T23:57:32.642422511Z" level=info msg="StartContainer for \"15247b346fb760172b6cff0e03c8bc199584a1c3beba9b67d481231e9f91601d\"" May 13 23:57:32.649626 containerd[1508]: time="2025-05-13T23:57:32.649490092Z" level=info msg="connecting to shim 15247b346fb760172b6cff0e03c8bc199584a1c3beba9b67d481231e9f91601d" address="unix:///run/containerd/s/2799999591030607c010f0f1c0d118195aa1ec63ebd9066673478db990dc603c" protocol=ttrpc version=3 May 13 23:57:32.661627 containerd[1508]: time="2025-05-13T23:57:32.661524358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-m2jkb,Uid:8e2a314a-57a9-4fe0-a76d-c30ae0bf3c1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca\"" May 13 23:57:32.665642 containerd[1508]: time="2025-05-13T23:57:32.665527505Z" level=info msg="CreateContainer within sandbox \"4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:57:32.680278 systemd[1]: Started cri-containerd-15247b346fb760172b6cff0e03c8bc199584a1c3beba9b67d481231e9f91601d.scope - libcontainer container 15247b346fb760172b6cff0e03c8bc199584a1c3beba9b67d481231e9f91601d. 
May 13 23:57:32.685041 containerd[1508]: time="2025-05-13T23:57:32.684980512Z" level=info msg="connecting to shim 360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5" address="unix:///run/containerd/s/85c70be7e678d3b2b13fab38da8d83d7f8a76087c5f61d317b63d61d4ff2bf8a" namespace=k8s.io protocol=ttrpc version=3 May 13 23:57:32.699255 systemd-networkd[1441]: cali1dda90fb1a3: Link UP May 13 23:57:32.701555 systemd-networkd[1441]: cali1dda90fb1a3: Gained carrier May 13 23:57:32.711743 containerd[1508]: time="2025-05-13T23:57:32.711699202Z" level=info msg="Container ff3c3e87b5625953ba606ef838d8273d21ff51c2eaff98dfae6b115bb8477950: CDI devices from CRI Config.CDIDevices: []" May 13 23:57:32.725312 containerd[1508]: time="2025-05-13T23:57:32.725268851Z" level=info msg="CreateContainer within sandbox \"4fb1b212307c0c5ff415adfd674fdd688f58ba4d672fb1aa8b1b1d1c77d1b4ca\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ff3c3e87b5625953ba606ef838d8273d21ff51c2eaff98dfae6b115bb8477950\"" May 13 23:57:32.728731 containerd[1508]: time="2025-05-13T23:57:32.728677592Z" level=info msg="StartContainer for \"ff3c3e87b5625953ba606ef838d8273d21ff51c2eaff98dfae6b115bb8477950\"" May 13 23:57:32.733701 containerd[1508]: time="2025-05-13T23:57:32.733613149Z" level=info msg="StartContainer for \"15247b346fb760172b6cff0e03c8bc199584a1c3beba9b67d481231e9f91601d\" returns successfully" May 13 23:57:32.733901 containerd[1508]: time="2025-05-13T23:57:32.733774232Z" level=info msg="connecting to shim ff3c3e87b5625953ba606ef838d8273d21ff51c2eaff98dfae6b115bb8477950" address="unix:///run/containerd/s/e63cc5aeb86c4b4d237e3083c97d9a3bb75cff095274306ee690e488b520aa7e" protocol=ttrpc version=3 May 13 23:57:32.735682 containerd[1508]: 2025-05-13 23:57:32.388 [INFO][4007] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--94f549c--24cj7-eth0 calico-kube-controllers-94f549c- 
calico-system 86a2180a-f386-40eb-b31a-9655eeb5faa7 703 0 2025-05-13 23:56:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:94f549c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-94f549c-24cj7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1dda90fb1a3 [] []}} ContainerID="43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9" Namespace="calico-system" Pod="calico-kube-controllers-94f549c-24cj7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--94f549c--24cj7-" May 13 23:57:32.735682 containerd[1508]: 2025-05-13 23:57:32.388 [INFO][4007] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9" Namespace="calico-system" Pod="calico-kube-controllers-94f549c-24cj7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--94f549c--24cj7-eth0" May 13 23:57:32.735682 containerd[1508]: 2025-05-13 23:57:32.461 [INFO][4058] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9" HandleID="k8s-pod-network.43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9" Workload="localhost-k8s-calico--kube--controllers--94f549c--24cj7-eth0" May 13 23:57:32.735682 containerd[1508]: 2025-05-13 23:57:32.540 [INFO][4058] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9" HandleID="k8s-pod-network.43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9" Workload="localhost-k8s-calico--kube--controllers--94f549c--24cj7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000500d0), Attrs:map[string]string{"namespace":"calico-system", 
"node":"localhost", "pod":"calico-kube-controllers-94f549c-24cj7", "timestamp":"2025-05-13 23:57:32.461858495 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:57:32.735682 containerd[1508]: 2025-05-13 23:57:32.540 [INFO][4058] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:57:32.735682 containerd[1508]: 2025-05-13 23:57:32.581 [INFO][4058] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:57:32.735682 containerd[1508]: 2025-05-13 23:57:32.581 [INFO][4058] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 23:57:32.735682 containerd[1508]: 2025-05-13 23:57:32.643 [INFO][4058] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9" host="localhost" May 13 23:57:32.735682 containerd[1508]: 2025-05-13 23:57:32.651 [INFO][4058] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 23:57:32.735682 containerd[1508]: 2025-05-13 23:57:32.657 [INFO][4058] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 23:57:32.735682 containerd[1508]: 2025-05-13 23:57:32.660 [INFO][4058] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 23:57:32.735682 containerd[1508]: 2025-05-13 23:57:32.663 [INFO][4058] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 23:57:32.735682 containerd[1508]: 2025-05-13 23:57:32.663 [INFO][4058] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9" host="localhost" May 13 23:57:32.735682 containerd[1508]: 2025-05-13 23:57:32.665 
[INFO][4058] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9 May 13 23:57:32.735682 containerd[1508]: 2025-05-13 23:57:32.671 [INFO][4058] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9" host="localhost" May 13 23:57:32.735682 containerd[1508]: 2025-05-13 23:57:32.679 [INFO][4058] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9" host="localhost" May 13 23:57:32.735682 containerd[1508]: 2025-05-13 23:57:32.679 [INFO][4058] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9" host="localhost" May 13 23:57:32.735682 containerd[1508]: 2025-05-13 23:57:32.679 [INFO][4058] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 23:57:32.735682 containerd[1508]: 2025-05-13 23:57:32.679 [INFO][4058] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9" HandleID="k8s-pod-network.43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9" Workload="localhost-k8s-calico--kube--controllers--94f549c--24cj7-eth0" May 13 23:57:32.737165 containerd[1508]: 2025-05-13 23:57:32.687 [INFO][4007] cni-plugin/k8s.go 386: Populated endpoint ContainerID="43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9" Namespace="calico-system" Pod="calico-kube-controllers-94f549c-24cj7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--94f549c--24cj7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--94f549c--24cj7-eth0", GenerateName:"calico-kube-controllers-94f549c-", Namespace:"calico-system", SelfLink:"", UID:"86a2180a-f386-40eb-b31a-9655eeb5faa7", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 56, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"94f549c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-94f549c-24cj7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1dda90fb1a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:57:32.737165 containerd[1508]: 2025-05-13 23:57:32.688 [INFO][4007] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9" Namespace="calico-system" Pod="calico-kube-controllers-94f549c-24cj7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--94f549c--24cj7-eth0" May 13 23:57:32.737165 containerd[1508]: 2025-05-13 23:57:32.688 [INFO][4007] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1dda90fb1a3 ContainerID="43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9" Namespace="calico-system" Pod="calico-kube-controllers-94f549c-24cj7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--94f549c--24cj7-eth0" May 13 23:57:32.737165 containerd[1508]: 2025-05-13 23:57:32.704 [INFO][4007] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9" Namespace="calico-system" Pod="calico-kube-controllers-94f549c-24cj7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--94f549c--24cj7-eth0" May 13 23:57:32.737165 containerd[1508]: 2025-05-13 23:57:32.704 [INFO][4007] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9" Namespace="calico-system" Pod="calico-kube-controllers-94f549c-24cj7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--94f549c--24cj7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--94f549c--24cj7-eth0", 
GenerateName:"calico-kube-controllers-94f549c-", Namespace:"calico-system", SelfLink:"", UID:"86a2180a-f386-40eb-b31a-9655eeb5faa7", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 56, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"94f549c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9", Pod:"calico-kube-controllers-94f549c-24cj7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1dda90fb1a3", MAC:"ae:e0:47:50:63:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:57:32.737165 containerd[1508]: 2025-05-13 23:57:32.721 [INFO][4007] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9" Namespace="calico-system" Pod="calico-kube-controllers-94f549c-24cj7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--94f549c--24cj7-eth0" May 13 23:57:32.736312 systemd[1]: Started cri-containerd-360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5.scope - libcontainer container 360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5. 
May 13 23:57:32.762222 systemd[1]: Started cri-containerd-ff3c3e87b5625953ba606ef838d8273d21ff51c2eaff98dfae6b115bb8477950.scope - libcontainer container ff3c3e87b5625953ba606ef838d8273d21ff51c2eaff98dfae6b115bb8477950. May 13 23:57:32.768104 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:57:32.772055 containerd[1508]: time="2025-05-13T23:57:32.771996722Z" level=info msg="connecting to shim 43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9" address="unix:///run/containerd/s/21cbc7e2e280c76d1467894700d03322f4b98e5a9e3f358413a28bab8e1a4ec5" namespace=k8s.io protocol=ttrpc version=3 May 13 23:57:32.811464 systemd[1]: Started cri-containerd-43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9.scope - libcontainer container 43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9. May 13 23:57:32.821934 containerd[1508]: time="2025-05-13T23:57:32.821842999Z" level=info msg="StartContainer for \"ff3c3e87b5625953ba606ef838d8273d21ff51c2eaff98dfae6b115bb8477950\" returns successfully" May 13 23:57:32.831504 containerd[1508]: time="2025-05-13T23:57:32.831462071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd4c66cdb-hv86l,Uid:dc39f3d9-d309-4c9a-aed4-69ee89dd9ff5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5\"" May 13 23:57:32.835525 containerd[1508]: time="2025-05-13T23:57:32.835477430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 23:57:32.851271 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:57:32.907686 containerd[1508]: time="2025-05-13T23:57:32.907628750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-94f549c-24cj7,Uid:86a2180a-f386-40eb-b31a-9655eeb5faa7,Namespace:calico-system,Attempt:0,} 
returns sandbox id \"43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9\"" May 13 23:57:33.507694 systemd[1]: Started sshd@7-10.0.0.86:22-10.0.0.1:56454.service - OpenSSH per-connection server daemon (10.0.0.1:56454). May 13 23:57:33.520246 systemd-networkd[1441]: cali9d2b333735c: Gained IPv6LL May 13 23:57:33.565127 sshd[4363]: Accepted publickey for core from 10.0.0.1 port 56454 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU May 13 23:57:33.566667 sshd-session[4363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:57:33.570946 systemd-logind[1488]: New session 8 of user core. May 13 23:57:33.578222 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 23:57:33.717514 sshd[4365]: Connection closed by 10.0.0.1 port 56454 May 13 23:57:33.720208 sshd-session[4363]: pam_unix(sshd:session): session closed for user core May 13 23:57:33.725247 systemd[1]: sshd@7-10.0.0.86:22-10.0.0.1:56454.service: Deactivated successfully. May 13 23:57:33.727745 systemd[1]: session-8.scope: Deactivated successfully. May 13 23:57:33.728551 systemd-logind[1488]: Session 8 logged out. Waiting for processes to exit. May 13 23:57:33.729572 systemd-logind[1488]: Removed session 8. 
May 13 23:57:33.731359 kubelet[2618]: I0513 23:57:33.731298 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-m2jkb" podStartSLOduration=45.731281718 podStartE2EDuration="45.731281718s" podCreationTimestamp="2025-05-13 23:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:57:33.730990521 +0000 UTC m=+49.613571640" watchObservedRunningTime="2025-05-13 23:57:33.731281718 +0000 UTC m=+49.613862837" May 13 23:57:34.224736 systemd-networkd[1441]: cali9659fddd597: Gained IPv6LL May 13 23:57:34.415276 systemd-networkd[1441]: cali0f64ecdfa89: Gained IPv6LL May 13 23:57:34.735979 kubelet[2618]: I0513 23:57:34.735912 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-q9kl4" podStartSLOduration=46.735892825 podStartE2EDuration="46.735892825s" podCreationTimestamp="2025-05-13 23:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:57:33.74164673 +0000 UTC m=+49.624227849" watchObservedRunningTime="2025-05-13 23:57:34.735892825 +0000 UTC m=+50.618473944" May 13 23:57:34.737037 systemd-networkd[1441]: cali1dda90fb1a3: Gained IPv6LL May 13 23:57:34.941795 containerd[1508]: time="2025-05-13T23:57:34.941746366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:34.942667 containerd[1508]: time="2025-05-13T23:57:34.942627711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 13 23:57:34.943661 containerd[1508]: time="2025-05-13T23:57:34.943624673Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:34.949559 containerd[1508]: time="2025-05-13T23:57:34.949491718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:34.950713 containerd[1508]: time="2025-05-13T23:57:34.950678947Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 2.115155631s" May 13 23:57:34.950713 containerd[1508]: time="2025-05-13T23:57:34.950706770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 13 23:57:34.951736 containerd[1508]: time="2025-05-13T23:57:34.951505148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 13 23:57:34.952640 containerd[1508]: time="2025-05-13T23:57:34.952591478Z" level=info msg="CreateContainer within sandbox \"360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 23:57:34.961208 containerd[1508]: time="2025-05-13T23:57:34.961153413Z" level=info msg="Container 26a472c85a5b214a07f10bf47614d8dfe8789e397b37b860daa0bceda0cab548: CDI devices from CRI Config.CDIDevices: []" May 13 23:57:34.968369 containerd[1508]: time="2025-05-13T23:57:34.968331370Z" level=info msg="CreateContainer within sandbox \"360b370a9dd8d6c3de821bf1428c6042c3b91e3de5505c685d98c25181f200a5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"26a472c85a5b214a07f10bf47614d8dfe8789e397b37b860daa0bceda0cab548\"" May 13 23:57:34.968879 containerd[1508]: time="2025-05-13T23:57:34.968846928Z" level=info msg="StartContainer for \"26a472c85a5b214a07f10bf47614d8dfe8789e397b37b860daa0bceda0cab548\"" May 13 23:57:34.970050 containerd[1508]: time="2025-05-13T23:57:34.970000554Z" level=info msg="connecting to shim 26a472c85a5b214a07f10bf47614d8dfe8789e397b37b860daa0bceda0cab548" address="unix:///run/containerd/s/85c70be7e678d3b2b13fab38da8d83d7f8a76087c5f61d317b63d61d4ff2bf8a" protocol=ttrpc version=3 May 13 23:57:34.993290 systemd[1]: Started cri-containerd-26a472c85a5b214a07f10bf47614d8dfe8789e397b37b860daa0bceda0cab548.scope - libcontainer container 26a472c85a5b214a07f10bf47614d8dfe8789e397b37b860daa0bceda0cab548. May 13 23:57:35.212975 containerd[1508]: time="2025-05-13T23:57:35.212912438Z" level=info msg="StartContainer for \"26a472c85a5b214a07f10bf47614d8dfe8789e397b37b860daa0bceda0cab548\" returns successfully" May 13 23:57:35.304810 containerd[1508]: time="2025-05-13T23:57:35.304648922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd4c66cdb-9pqzz,Uid:16e369ef-efb8-44a3-835c-07ff87a832ee,Namespace:calico-apiserver,Attempt:0,}" May 13 23:57:35.304810 containerd[1508]: time="2025-05-13T23:57:35.304772574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vlkmq,Uid:7c772b54-e083-47fe-b4ff-69706c6198e1,Namespace:calico-system,Attempt:0,}" May 13 23:57:35.701591 systemd-networkd[1441]: calid4ec0d7cc8f: Link UP May 13 23:57:35.701873 systemd-networkd[1441]: calid4ec0d7cc8f: Gained carrier May 13 23:57:35.724181 containerd[1508]: 2025-05-13 23:57:35.578 [INFO][4440] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7fd4c66cdb--9pqzz-eth0 calico-apiserver-7fd4c66cdb- calico-apiserver 16e369ef-efb8-44a3-835c-07ff87a832ee 701 0 2025-05-13 23:56:57 +0000 UTC 
map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fd4c66cdb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7fd4c66cdb-9pqzz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid4ec0d7cc8f [] []}} ContainerID="e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f" Namespace="calico-apiserver" Pod="calico-apiserver-7fd4c66cdb-9pqzz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd4c66cdb--9pqzz-" May 13 23:57:35.724181 containerd[1508]: 2025-05-13 23:57:35.579 [INFO][4440] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f" Namespace="calico-apiserver" Pod="calico-apiserver-7fd4c66cdb-9pqzz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd4c66cdb--9pqzz-eth0" May 13 23:57:35.724181 containerd[1508]: 2025-05-13 23:57:35.627 [INFO][4454] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f" HandleID="k8s-pod-network.e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f" Workload="localhost-k8s-calico--apiserver--7fd4c66cdb--9pqzz-eth0" May 13 23:57:35.724181 containerd[1508]: 2025-05-13 23:57:35.642 [INFO][4454] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f" HandleID="k8s-pod-network.e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f" Workload="localhost-k8s-calico--apiserver--7fd4c66cdb--9pqzz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050870), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7fd4c66cdb-9pqzz", "timestamp":"2025-05-13 23:57:35.627769995 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:57:35.724181 containerd[1508]: 2025-05-13 23:57:35.642 [INFO][4454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:57:35.724181 containerd[1508]: 2025-05-13 23:57:35.642 [INFO][4454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:57:35.724181 containerd[1508]: 2025-05-13 23:57:35.642 [INFO][4454] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 23:57:35.724181 containerd[1508]: 2025-05-13 23:57:35.646 [INFO][4454] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f" host="localhost" May 13 23:57:35.724181 containerd[1508]: 2025-05-13 23:57:35.654 [INFO][4454] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 23:57:35.724181 containerd[1508]: 2025-05-13 23:57:35.663 [INFO][4454] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 23:57:35.724181 containerd[1508]: 2025-05-13 23:57:35.670 [INFO][4454] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 23:57:35.724181 containerd[1508]: 2025-05-13 23:57:35.674 [INFO][4454] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 23:57:35.724181 containerd[1508]: 2025-05-13 23:57:35.674 [INFO][4454] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f" host="localhost" May 13 23:57:35.724181 containerd[1508]: 2025-05-13 23:57:35.677 [INFO][4454] ipam/ipam.go 1685: Creating new handle: 
k8s-pod-network.e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f May 13 23:57:35.724181 containerd[1508]: 2025-05-13 23:57:35.683 [INFO][4454] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f" host="localhost" May 13 23:57:35.724181 containerd[1508]: 2025-05-13 23:57:35.691 [INFO][4454] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f" host="localhost" May 13 23:57:35.724181 containerd[1508]: 2025-05-13 23:57:35.691 [INFO][4454] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f" host="localhost" May 13 23:57:35.724181 containerd[1508]: 2025-05-13 23:57:35.691 [INFO][4454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 23:57:35.724181 containerd[1508]: 2025-05-13 23:57:35.691 [INFO][4454] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f" HandleID="k8s-pod-network.e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f" Workload="localhost-k8s-calico--apiserver--7fd4c66cdb--9pqzz-eth0" May 13 23:57:35.725038 containerd[1508]: 2025-05-13 23:57:35.696 [INFO][4440] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f" Namespace="calico-apiserver" Pod="calico-apiserver-7fd4c66cdb-9pqzz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd4c66cdb--9pqzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7fd4c66cdb--9pqzz-eth0", GenerateName:"calico-apiserver-7fd4c66cdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"16e369ef-efb8-44a3-835c-07ff87a832ee", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 56, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fd4c66cdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7fd4c66cdb-9pqzz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid4ec0d7cc8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:57:35.725038 containerd[1508]: 2025-05-13 23:57:35.696 [INFO][4440] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f" Namespace="calico-apiserver" Pod="calico-apiserver-7fd4c66cdb-9pqzz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd4c66cdb--9pqzz-eth0" May 13 23:57:35.725038 containerd[1508]: 2025-05-13 23:57:35.696 [INFO][4440] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid4ec0d7cc8f ContainerID="e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f" Namespace="calico-apiserver" Pod="calico-apiserver-7fd4c66cdb-9pqzz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd4c66cdb--9pqzz-eth0" May 13 23:57:35.725038 containerd[1508]: 2025-05-13 23:57:35.700 [INFO][4440] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f" Namespace="calico-apiserver" Pod="calico-apiserver-7fd4c66cdb-9pqzz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd4c66cdb--9pqzz-eth0" May 13 23:57:35.725038 containerd[1508]: 2025-05-13 23:57:35.700 [INFO][4440] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f" Namespace="calico-apiserver" Pod="calico-apiserver-7fd4c66cdb-9pqzz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd4c66cdb--9pqzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7fd4c66cdb--9pqzz-eth0", GenerateName:"calico-apiserver-7fd4c66cdb-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"16e369ef-efb8-44a3-835c-07ff87a832ee", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 56, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fd4c66cdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f", Pod:"calico-apiserver-7fd4c66cdb-9pqzz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid4ec0d7cc8f", MAC:"86:26:28:01:7b:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:57:35.725038 containerd[1508]: 2025-05-13 23:57:35.717 [INFO][4440] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f" Namespace="calico-apiserver" Pod="calico-apiserver-7fd4c66cdb-9pqzz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd4c66cdb--9pqzz-eth0" May 13 23:57:35.747561 kubelet[2618]: I0513 23:57:35.747492 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7fd4c66cdb-hv86l" podStartSLOduration=36.629338843 podStartE2EDuration="38.747473006s" podCreationTimestamp="2025-05-13 23:56:57 +0000 UTC" firstStartedPulling="2025-05-13 23:57:32.833287298 +0000 UTC m=+48.715868427" 
lastFinishedPulling="2025-05-13 23:57:34.951421471 +0000 UTC m=+50.834002590" observedRunningTime="2025-05-13 23:57:35.74681516 +0000 UTC m=+51.629396279" watchObservedRunningTime="2025-05-13 23:57:35.747473006 +0000 UTC m=+51.630054125" May 13 23:57:35.780703 containerd[1508]: time="2025-05-13T23:57:35.779922058Z" level=info msg="connecting to shim e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f" address="unix:///run/containerd/s/01f089caf10a38f303905203ec4b777ae4a1d275046645ac494154a678f76e27" namespace=k8s.io protocol=ttrpc version=3 May 13 23:57:35.810263 systemd[1]: Started cri-containerd-e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f.scope - libcontainer container e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f. May 13 23:57:35.825603 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:57:35.897801 containerd[1508]: time="2025-05-13T23:57:35.897133219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd4c66cdb-9pqzz,Uid:16e369ef-efb8-44a3-835c-07ff87a832ee,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f\"" May 13 23:57:35.904348 containerd[1508]: time="2025-05-13T23:57:35.904301508Z" level=info msg="CreateContainer within sandbox \"e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 23:57:35.924871 systemd-networkd[1441]: cali19d410c1851: Link UP May 13 23:57:35.925223 systemd-networkd[1441]: cali19d410c1851: Gained carrier May 13 23:57:35.947204 containerd[1508]: 2025-05-13 23:57:35.672 [INFO][4463] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--vlkmq-eth0 csi-node-driver- calico-system 7c772b54-e083-47fe-b4ff-69706c6198e1 589 0 2025-05-13 23:56:57 
+0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-vlkmq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali19d410c1851 [] []}} ContainerID="9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442" Namespace="calico-system" Pod="csi-node-driver-vlkmq" WorkloadEndpoint="localhost-k8s-csi--node--driver--vlkmq-" May 13 23:57:35.947204 containerd[1508]: 2025-05-13 23:57:35.672 [INFO][4463] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442" Namespace="calico-system" Pod="csi-node-driver-vlkmq" WorkloadEndpoint="localhost-k8s-csi--node--driver--vlkmq-eth0" May 13 23:57:35.947204 containerd[1508]: 2025-05-13 23:57:35.737 [INFO][4477] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442" HandleID="k8s-pod-network.9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442" Workload="localhost-k8s-csi--node--driver--vlkmq-eth0" May 13 23:57:35.947204 containerd[1508]: 2025-05-13 23:57:35.852 [INFO][4477] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442" HandleID="k8s-pod-network.9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442" Workload="localhost-k8s-csi--node--driver--vlkmq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002881a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-vlkmq", "timestamp":"2025-05-13 23:57:35.737157098 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:57:35.947204 containerd[1508]: 2025-05-13 23:57:35.852 [INFO][4477] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:57:35.947204 containerd[1508]: 2025-05-13 23:57:35.852 [INFO][4477] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:57:35.947204 containerd[1508]: 2025-05-13 23:57:35.852 [INFO][4477] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 23:57:35.947204 containerd[1508]: 2025-05-13 23:57:35.860 [INFO][4477] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442" host="localhost" May 13 23:57:35.947204 containerd[1508]: 2025-05-13 23:57:35.866 [INFO][4477] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 23:57:35.947204 containerd[1508]: 2025-05-13 23:57:35.890 [INFO][4477] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 23:57:35.947204 containerd[1508]: 2025-05-13 23:57:35.893 [INFO][4477] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 23:57:35.947204 containerd[1508]: 2025-05-13 23:57:35.897 [INFO][4477] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 23:57:35.947204 containerd[1508]: 2025-05-13 23:57:35.897 [INFO][4477] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442" host="localhost" May 13 23:57:35.947204 containerd[1508]: 2025-05-13 23:57:35.903 [INFO][4477] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442 May 13 23:57:35.947204 containerd[1508]: 2025-05-13 
23:57:35.909 [INFO][4477] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442" host="localhost" May 13 23:57:35.947204 containerd[1508]: 2025-05-13 23:57:35.917 [INFO][4477] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442" host="localhost" May 13 23:57:35.947204 containerd[1508]: 2025-05-13 23:57:35.917 [INFO][4477] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442" host="localhost" May 13 23:57:35.947204 containerd[1508]: 2025-05-13 23:57:35.917 [INFO][4477] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:57:35.947204 containerd[1508]: 2025-05-13 23:57:35.917 [INFO][4477] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442" HandleID="k8s-pod-network.9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442" Workload="localhost-k8s-csi--node--driver--vlkmq-eth0" May 13 23:57:35.949719 containerd[1508]: 2025-05-13 23:57:35.920 [INFO][4463] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442" Namespace="calico-system" Pod="csi-node-driver-vlkmq" WorkloadEndpoint="localhost-k8s-csi--node--driver--vlkmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vlkmq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7c772b54-e083-47fe-b4ff-69706c6198e1", ResourceVersion:"589", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 56, 57, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-vlkmq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali19d410c1851", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:57:35.949719 containerd[1508]: 2025-05-13 23:57:35.920 [INFO][4463] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442" Namespace="calico-system" Pod="csi-node-driver-vlkmq" WorkloadEndpoint="localhost-k8s-csi--node--driver--vlkmq-eth0" May 13 23:57:35.949719 containerd[1508]: 2025-05-13 23:57:35.920 [INFO][4463] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19d410c1851 ContainerID="9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442" Namespace="calico-system" Pod="csi-node-driver-vlkmq" WorkloadEndpoint="localhost-k8s-csi--node--driver--vlkmq-eth0" May 13 23:57:35.949719 containerd[1508]: 2025-05-13 23:57:35.925 [INFO][4463] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442" Namespace="calico-system" 
Pod="csi-node-driver-vlkmq" WorkloadEndpoint="localhost-k8s-csi--node--driver--vlkmq-eth0" May 13 23:57:35.949719 containerd[1508]: 2025-05-13 23:57:35.925 [INFO][4463] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442" Namespace="calico-system" Pod="csi-node-driver-vlkmq" WorkloadEndpoint="localhost-k8s-csi--node--driver--vlkmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vlkmq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7c772b54-e083-47fe-b4ff-69706c6198e1", ResourceVersion:"589", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 56, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442", Pod:"csi-node-driver-vlkmq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali19d410c1851", MAC:"8e:f1:6a:a1:02:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:57:35.949719 
containerd[1508]: 2025-05-13 23:57:35.940 [INFO][4463] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442" Namespace="calico-system" Pod="csi-node-driver-vlkmq" WorkloadEndpoint="localhost-k8s-csi--node--driver--vlkmq-eth0" May 13 23:57:35.957257 containerd[1508]: time="2025-05-13T23:57:35.957121546Z" level=info msg="Container 7daf6377bfc6702474e7c7374e247e0b570ef6efd248b572df1f91c2de4250ea: CDI devices from CRI Config.CDIDevices: []" May 13 23:57:35.996951 containerd[1508]: time="2025-05-13T23:57:35.996896523Z" level=info msg="CreateContainer within sandbox \"e610dbde114301743e287f9c023b59b429ec44b14efb45fb0980c26fbbf9004f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7daf6377bfc6702474e7c7374e247e0b570ef6efd248b572df1f91c2de4250ea\"" May 13 23:57:35.997892 containerd[1508]: time="2025-05-13T23:57:35.997837820Z" level=info msg="StartContainer for \"7daf6377bfc6702474e7c7374e247e0b570ef6efd248b572df1f91c2de4250ea\"" May 13 23:57:35.999278 containerd[1508]: time="2025-05-13T23:57:35.999239823Z" level=info msg="connecting to shim 7daf6377bfc6702474e7c7374e247e0b570ef6efd248b572df1f91c2de4250ea" address="unix:///run/containerd/s/01f089caf10a38f303905203ec4b777ae4a1d275046645ac494154a678f76e27" protocol=ttrpc version=3 May 13 23:57:36.022021 containerd[1508]: time="2025-05-13T23:57:36.021952974Z" level=info msg="connecting to shim 9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442" address="unix:///run/containerd/s/d1ca2d662dd01e937727bc79c7c6b060ef49e4dcc730484ecc324eb9c41a011f" namespace=k8s.io protocol=ttrpc version=3 May 13 23:57:36.036587 systemd[1]: Started cri-containerd-7daf6377bfc6702474e7c7374e247e0b570ef6efd248b572df1f91c2de4250ea.scope - libcontainer container 7daf6377bfc6702474e7c7374e247e0b570ef6efd248b572df1f91c2de4250ea. 
May 13 23:57:36.068506 systemd[1]: Started cri-containerd-9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442.scope - libcontainer container 9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442. May 13 23:57:36.095321 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:57:36.120822 containerd[1508]: time="2025-05-13T23:57:36.120595588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vlkmq,Uid:7c772b54-e083-47fe-b4ff-69706c6198e1,Namespace:calico-system,Attempt:0,} returns sandbox id \"9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442\"" May 13 23:57:36.142772 containerd[1508]: time="2025-05-13T23:57:36.142617442Z" level=info msg="StartContainer for \"7daf6377bfc6702474e7c7374e247e0b570ef6efd248b572df1f91c2de4250ea\" returns successfully" May 13 23:57:36.737007 kubelet[2618]: I0513 23:57:36.736665 2618 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:57:36.763609 kubelet[2618]: I0513 23:57:36.763415 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7fd4c66cdb-9pqzz" podStartSLOduration=39.76339626 podStartE2EDuration="39.76339626s" podCreationTimestamp="2025-05-13 23:56:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:57:36.762891112 +0000 UTC m=+52.645472241" watchObservedRunningTime="2025-05-13 23:57:36.76339626 +0000 UTC m=+52.645977379" May 13 23:57:36.787243 systemd-networkd[1441]: calid4ec0d7cc8f: Gained IPv6LL May 13 23:57:37.359295 systemd-networkd[1441]: cali19d410c1851: Gained IPv6LL May 13 23:57:37.738772 kubelet[2618]: I0513 23:57:37.738736 2618 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:57:38.622845 containerd[1508]: time="2025-05-13T23:57:38.622723231Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:38.645448 containerd[1508]: time="2025-05-13T23:57:38.645247807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 13 23:57:38.693319 containerd[1508]: time="2025-05-13T23:57:38.693178988Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:38.727017 containerd[1508]: time="2025-05-13T23:57:38.726944877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:38.727916 containerd[1508]: time="2025-05-13T23:57:38.727859743Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 3.776326312s" May 13 23:57:38.727916 containerd[1508]: time="2025-05-13T23:57:38.727915940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 13 23:57:38.729699 containerd[1508]: time="2025-05-13T23:57:38.729029980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 23:57:38.742480 containerd[1508]: time="2025-05-13T23:57:38.742434856Z" level=info msg="CreateContainer within sandbox \"43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 23:57:38.747929 systemd[1]: Started sshd@8-10.0.0.86:22-10.0.0.1:43626.service - OpenSSH per-connection server daemon (10.0.0.1:43626). May 13 23:57:38.831910 containerd[1508]: time="2025-05-13T23:57:38.831650741Z" level=info msg="Container 13bf07828e72a451eb84c8d898c18ef9179d4a0154747ef8a5e2000ddcb4a8a6: CDI devices from CRI Config.CDIDevices: []" May 13 23:57:38.848639 sshd[4647]: Accepted publickey for core from 10.0.0.1 port 43626 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU May 13 23:57:38.850705 sshd-session[4647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:57:38.857335 systemd-logind[1488]: New session 9 of user core. May 13 23:57:38.868336 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 23:57:38.910527 containerd[1508]: time="2025-05-13T23:57:38.910379347Z" level=info msg="CreateContainer within sandbox \"43d00364aec22e62e434a7216cdda26c75fc652a4e5281d433785e34ebef3ee9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"13bf07828e72a451eb84c8d898c18ef9179d4a0154747ef8a5e2000ddcb4a8a6\"" May 13 23:57:38.912904 containerd[1508]: time="2025-05-13T23:57:38.910956620Z" level=info msg="StartContainer for \"13bf07828e72a451eb84c8d898c18ef9179d4a0154747ef8a5e2000ddcb4a8a6\"" May 13 23:57:38.912904 containerd[1508]: time="2025-05-13T23:57:38.912444825Z" level=info msg="connecting to shim 13bf07828e72a451eb84c8d898c18ef9179d4a0154747ef8a5e2000ddcb4a8a6" address="unix:///run/containerd/s/21cbc7e2e280c76d1467894700d03322f4b98e5a9e3f358413a28bab8e1a4ec5" protocol=ttrpc version=3 May 13 23:57:38.944360 systemd[1]: Started cri-containerd-13bf07828e72a451eb84c8d898c18ef9179d4a0154747ef8a5e2000ddcb4a8a6.scope - libcontainer container 13bf07828e72a451eb84c8d898c18ef9179d4a0154747ef8a5e2000ddcb4a8a6. 
May 13 23:57:39.019902 containerd[1508]: time="2025-05-13T23:57:39.019856367Z" level=info msg="StartContainer for \"13bf07828e72a451eb84c8d898c18ef9179d4a0154747ef8a5e2000ddcb4a8a6\" returns successfully" May 13 23:57:39.062407 sshd[4649]: Connection closed by 10.0.0.1 port 43626 May 13 23:57:39.063140 sshd-session[4647]: pam_unix(sshd:session): session closed for user core May 13 23:57:39.068335 systemd[1]: sshd@8-10.0.0.86:22-10.0.0.1:43626.service: Deactivated successfully. May 13 23:57:39.071220 systemd[1]: session-9.scope: Deactivated successfully. May 13 23:57:39.072485 systemd-logind[1488]: Session 9 logged out. Waiting for processes to exit. May 13 23:57:39.074577 systemd-logind[1488]: Removed session 9. May 13 23:57:39.783868 kubelet[2618]: I0513 23:57:39.783284 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-94f549c-24cj7" podStartSLOduration=35.964208028 podStartE2EDuration="41.783262173s" podCreationTimestamp="2025-05-13 23:56:58 +0000 UTC" firstStartedPulling="2025-05-13 23:57:32.909886919 +0000 UTC m=+48.792468038" lastFinishedPulling="2025-05-13 23:57:38.728941063 +0000 UTC m=+54.611522183" observedRunningTime="2025-05-13 23:57:39.782968792 +0000 UTC m=+55.665549931" watchObservedRunningTime="2025-05-13 23:57:39.783262173 +0000 UTC m=+55.665843292" May 13 23:57:39.839445 containerd[1508]: time="2025-05-13T23:57:39.839279922Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13bf07828e72a451eb84c8d898c18ef9179d4a0154747ef8a5e2000ddcb4a8a6\" id:\"a0025ffd399fbceff7d7adbecab16df91d779144009d006c3243d73951c17789\" pid:4710 exited_at:{seconds:1747180659 nanos:836008260}" May 13 23:57:40.683137 containerd[1508]: time="2025-05-13T23:57:40.681970039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:40.686055 containerd[1508]: time="2025-05-13T23:57:40.685957074Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 13 23:57:40.688495 containerd[1508]: time="2025-05-13T23:57:40.688410589Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:40.692375 containerd[1508]: time="2025-05-13T23:57:40.692249064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:40.692994 containerd[1508]: time="2025-05-13T23:57:40.692921878Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.963850288s" May 13 23:57:40.692994 containerd[1508]: time="2025-05-13T23:57:40.692983774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 13 23:57:40.698139 containerd[1508]: time="2025-05-13T23:57:40.696631281Z" level=info msg="CreateContainer within sandbox \"9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 23:57:40.723445 containerd[1508]: time="2025-05-13T23:57:40.723316203Z" level=info msg="Container 044a31e1b43f7c53b140fde1798a5d02aca8961f1de65ec29ca1d46229fbcbbe: CDI devices from CRI Config.CDIDevices: []" May 13 23:57:40.752742 containerd[1508]: time="2025-05-13T23:57:40.752667512Z" level=info msg="CreateContainer within sandbox \"9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"044a31e1b43f7c53b140fde1798a5d02aca8961f1de65ec29ca1d46229fbcbbe\"" May 13 23:57:40.753300 containerd[1508]: time="2025-05-13T23:57:40.753244074Z" level=info msg="StartContainer for \"044a31e1b43f7c53b140fde1798a5d02aca8961f1de65ec29ca1d46229fbcbbe\"" May 13 23:57:40.757193 containerd[1508]: time="2025-05-13T23:57:40.757037735Z" level=info msg="connecting to shim 044a31e1b43f7c53b140fde1798a5d02aca8961f1de65ec29ca1d46229fbcbbe" address="unix:///run/containerd/s/d1ca2d662dd01e937727bc79c7c6b060ef49e4dcc730484ecc324eb9c41a011f" protocol=ttrpc version=3 May 13 23:57:40.789331 systemd[1]: Started cri-containerd-044a31e1b43f7c53b140fde1798a5d02aca8961f1de65ec29ca1d46229fbcbbe.scope - libcontainer container 044a31e1b43f7c53b140fde1798a5d02aca8961f1de65ec29ca1d46229fbcbbe. May 13 23:57:41.671697 containerd[1508]: time="2025-05-13T23:57:41.671634248Z" level=info msg="StartContainer for \"044a31e1b43f7c53b140fde1798a5d02aca8961f1de65ec29ca1d46229fbcbbe\" returns successfully" May 13 23:57:41.675790 containerd[1508]: time="2025-05-13T23:57:41.675568265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 23:57:44.075334 systemd[1]: Started sshd@9-10.0.0.86:22-10.0.0.1:43636.service - OpenSSH per-connection server daemon (10.0.0.1:43636). 
May 13 23:57:44.562135 containerd[1508]: time="2025-05-13T23:57:44.562054906Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6f605b59f847f2a5ae09fb8dadcf071366b906c465cb9561ffc2f2504237c503\" id:\"99daafe8daa9214e1112f0fef07393de3cd58590337a412a3d2ec180f4d33fad\" pid:4775 exited_at:{seconds:1747180664 nanos:561659385}" May 13 23:57:44.610782 sshd[4787]: Accepted publickey for core from 10.0.0.1 port 43636 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU May 13 23:57:44.613377 sshd-session[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:57:44.620440 systemd-logind[1488]: New session 10 of user core. May 13 23:57:44.632323 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 23:57:44.736975 containerd[1508]: time="2025-05-13T23:57:44.736897772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:44.797107 containerd[1508]: time="2025-05-13T23:57:44.794960377Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 13 23:57:44.861186 containerd[1508]: time="2025-05-13T23:57:44.861036911Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:44.914605 containerd[1508]: time="2025-05-13T23:57:44.914536210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:44.915556 containerd[1508]: time="2025-05-13T23:57:44.915496499Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id 
\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 3.239876876s" May 13 23:57:44.915556 containerd[1508]: time="2025-05-13T23:57:44.915547768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 13 23:57:44.917590 containerd[1508]: time="2025-05-13T23:57:44.917546706Z" level=info msg="CreateContainer within sandbox \"9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 23:57:45.127219 containerd[1508]: time="2025-05-13T23:57:45.126993832Z" level=info msg="Container 2a71da2a867b695ba7cb67cc69bae7824761e4337ba050bf5283155251f79bae: CDI devices from CRI Config.CDIDevices: []" May 13 23:57:45.200643 sshd[4794]: Connection closed by 10.0.0.1 port 43636 May 13 23:57:45.201050 sshd-session[4787]: pam_unix(sshd:session): session closed for user core May 13 23:57:45.205809 systemd[1]: sshd@9-10.0.0.86:22-10.0.0.1:43636.service: Deactivated successfully. May 13 23:57:45.208072 systemd[1]: session-10.scope: Deactivated successfully. May 13 23:57:45.209283 systemd-logind[1488]: Session 10 logged out. Waiting for processes to exit. May 13 23:57:45.210985 systemd-logind[1488]: Removed session 10. 
May 13 23:57:45.317886 containerd[1508]: time="2025-05-13T23:57:45.317808251Z" level=info msg="CreateContainer within sandbox \"9cd55489fc52b4af48c079285e697ad92d36ca17d81b7ca66fb86e638cba1442\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2a71da2a867b695ba7cb67cc69bae7824761e4337ba050bf5283155251f79bae\"" May 13 23:57:45.318593 containerd[1508]: time="2025-05-13T23:57:45.318552532Z" level=info msg="StartContainer for \"2a71da2a867b695ba7cb67cc69bae7824761e4337ba050bf5283155251f79bae\"" May 13 23:57:45.320498 containerd[1508]: time="2025-05-13T23:57:45.320466595Z" level=info msg="connecting to shim 2a71da2a867b695ba7cb67cc69bae7824761e4337ba050bf5283155251f79bae" address="unix:///run/containerd/s/d1ca2d662dd01e937727bc79c7c6b060ef49e4dcc730484ecc324eb9c41a011f" protocol=ttrpc version=3 May 13 23:57:45.341378 systemd[1]: Started cri-containerd-2a71da2a867b695ba7cb67cc69bae7824761e4337ba050bf5283155251f79bae.scope - libcontainer container 2a71da2a867b695ba7cb67cc69bae7824761e4337ba050bf5283155251f79bae. 
May 13 23:57:45.518405 containerd[1508]: time="2025-05-13T23:57:45.518361204Z" level=info msg="StartContainer for \"2a71da2a867b695ba7cb67cc69bae7824761e4337ba050bf5283155251f79bae\" returns successfully" May 13 23:57:45.851798 kubelet[2618]: I0513 23:57:45.851341 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vlkmq" podStartSLOduration=40.057691235 podStartE2EDuration="48.851320124s" podCreationTimestamp="2025-05-13 23:56:57 +0000 UTC" firstStartedPulling="2025-05-13 23:57:36.12262099 +0000 UTC m=+52.005202109" lastFinishedPulling="2025-05-13 23:57:44.916249879 +0000 UTC m=+60.798830998" observedRunningTime="2025-05-13 23:57:45.850312525 +0000 UTC m=+61.732893634" watchObservedRunningTime="2025-05-13 23:57:45.851320124 +0000 UTC m=+61.733901253" May 13 23:57:46.489658 kubelet[2618]: I0513 23:57:46.489587 2618 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 13 23:57:46.489850 kubelet[2618]: I0513 23:57:46.489676 2618 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 13 23:57:50.039204 containerd[1508]: time="2025-05-13T23:57:50.039147446Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13bf07828e72a451eb84c8d898c18ef9179d4a0154747ef8a5e2000ddcb4a8a6\" id:\"5b5f975da0070de3c181964567460227242cf9f56b769d521bbf8bb1908309e5\" pid:4867 exited_at:{seconds:1747180670 nanos:38865815}" May 13 23:57:50.221620 systemd[1]: Started sshd@10-10.0.0.86:22-10.0.0.1:55928.service - OpenSSH per-connection server daemon (10.0.0.1:55928). 
May 13 23:57:50.290620 sshd[4878]: Accepted publickey for core from 10.0.0.1 port 55928 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU May 13 23:57:50.293265 sshd-session[4878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:57:50.299751 systemd-logind[1488]: New session 11 of user core. May 13 23:57:50.313326 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 23:57:50.520688 sshd[4880]: Connection closed by 10.0.0.1 port 55928 May 13 23:57:50.522870 sshd-session[4878]: pam_unix(sshd:session): session closed for user core May 13 23:57:50.534647 systemd[1]: sshd@10-10.0.0.86:22-10.0.0.1:55928.service: Deactivated successfully. May 13 23:57:50.545617 systemd[1]: session-11.scope: Deactivated successfully. May 13 23:57:50.555877 systemd-logind[1488]: Session 11 logged out. Waiting for processes to exit. May 13 23:57:50.559723 systemd-logind[1488]: Removed session 11. May 13 23:57:52.131106 kubelet[2618]: I0513 23:57:52.131024 2618 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:57:55.536408 systemd[1]: Started sshd@11-10.0.0.86:22-10.0.0.1:55934.service - OpenSSH per-connection server daemon (10.0.0.1:55934). May 13 23:57:55.589433 sshd[4898]: Accepted publickey for core from 10.0.0.1 port 55934 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU May 13 23:57:55.591424 sshd-session[4898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:57:55.596028 systemd-logind[1488]: New session 12 of user core. May 13 23:57:55.605262 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 23:57:55.750973 sshd[4900]: Connection closed by 10.0.0.1 port 55934 May 13 23:57:55.751439 sshd-session[4898]: pam_unix(sshd:session): session closed for user core May 13 23:57:55.763057 systemd[1]: sshd@11-10.0.0.86:22-10.0.0.1:55934.service: Deactivated successfully. 
May 13 23:57:55.765361 systemd[1]: session-12.scope: Deactivated successfully. May 13 23:57:55.767059 systemd-logind[1488]: Session 12 logged out. Waiting for processes to exit. May 13 23:57:55.769046 systemd[1]: Started sshd@12-10.0.0.86:22-10.0.0.1:55950.service - OpenSSH per-connection server daemon (10.0.0.1:55950). May 13 23:57:55.771893 systemd-logind[1488]: Removed session 12. May 13 23:57:55.831310 sshd[4913]: Accepted publickey for core from 10.0.0.1 port 55950 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU May 13 23:57:55.833203 sshd-session[4913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:57:55.838554 systemd-logind[1488]: New session 13 of user core. May 13 23:57:55.847215 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 23:57:56.075970 sshd[4916]: Connection closed by 10.0.0.1 port 55950 May 13 23:57:56.076428 sshd-session[4913]: pam_unix(sshd:session): session closed for user core May 13 23:57:56.091396 systemd[1]: sshd@12-10.0.0.86:22-10.0.0.1:55950.service: Deactivated successfully. May 13 23:57:56.095237 systemd[1]: session-13.scope: Deactivated successfully. May 13 23:57:56.098242 systemd-logind[1488]: Session 13 logged out. Waiting for processes to exit. May 13 23:57:56.102520 systemd[1]: Started sshd@13-10.0.0.86:22-10.0.0.1:55956.service - OpenSSH per-connection server daemon (10.0.0.1:55956). May 13 23:57:56.104153 systemd-logind[1488]: Removed session 13. May 13 23:57:56.145197 sshd[4929]: Accepted publickey for core from 10.0.0.1 port 55956 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU May 13 23:57:56.146935 sshd-session[4929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:57:56.151523 systemd-logind[1488]: New session 14 of user core. May 13 23:57:56.161281 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 13 23:57:56.288248 sshd[4932]: Connection closed by 10.0.0.1 port 55956 May 13 23:57:56.288782 sshd-session[4929]: pam_unix(sshd:session): session closed for user core May 13 23:57:56.295453 systemd[1]: sshd@13-10.0.0.86:22-10.0.0.1:55956.service: Deactivated successfully. May 13 23:57:56.297858 systemd[1]: session-14.scope: Deactivated successfully. May 13 23:57:56.298640 systemd-logind[1488]: Session 14 logged out. Waiting for processes to exit. May 13 23:57:56.299526 systemd-logind[1488]: Removed session 14. May 13 23:58:00.653878 kubelet[2618]: I0513 23:58:00.653433 2618 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:58:01.304756 systemd[1]: Started sshd@14-10.0.0.86:22-10.0.0.1:40430.service - OpenSSH per-connection server daemon (10.0.0.1:40430). May 13 23:58:01.392102 sshd[4952]: Accepted publickey for core from 10.0.0.1 port 40430 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU May 13 23:58:01.398887 sshd-session[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:01.407055 systemd-logind[1488]: New session 15 of user core. May 13 23:58:01.412343 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 23:58:01.581845 sshd[4954]: Connection closed by 10.0.0.1 port 40430 May 13 23:58:01.583315 sshd-session[4952]: pam_unix(sshd:session): session closed for user core May 13 23:58:01.587053 systemd[1]: sshd@14-10.0.0.86:22-10.0.0.1:40430.service: Deactivated successfully. May 13 23:58:01.589240 systemd[1]: session-15.scope: Deactivated successfully. May 13 23:58:01.589933 systemd-logind[1488]: Session 15 logged out. Waiting for processes to exit. May 13 23:58:01.590763 systemd-logind[1488]: Removed session 15. May 13 23:58:06.596410 systemd[1]: Started sshd@15-10.0.0.86:22-10.0.0.1:33754.service - OpenSSH per-connection server daemon (10.0.0.1:33754). 
May 13 23:58:06.643515 sshd[4971]: Accepted publickey for core from 10.0.0.1 port 33754 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:58:06.644949 sshd-session[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:58:06.649682 systemd-logind[1488]: New session 16 of user core.
May 13 23:58:06.660209 systemd[1]: Started session-16.scope - Session 16 of User core.
May 13 23:58:06.773489 sshd[4973]: Connection closed by 10.0.0.1 port 33754
May 13 23:58:06.773846 sshd-session[4971]: pam_unix(sshd:session): session closed for user core
May 13 23:58:06.778182 systemd[1]: sshd@15-10.0.0.86:22-10.0.0.1:33754.service: Deactivated successfully.
May 13 23:58:06.780489 systemd[1]: session-16.scope: Deactivated successfully.
May 13 23:58:06.781332 systemd-logind[1488]: Session 16 logged out. Waiting for processes to exit.
May 13 23:58:06.782226 systemd-logind[1488]: Removed session 16.
May 13 23:58:11.787835 systemd[1]: Started sshd@16-10.0.0.86:22-10.0.0.1:33758.service - OpenSSH per-connection server daemon (10.0.0.1:33758).
May 13 23:58:11.832225 sshd[4992]: Accepted publickey for core from 10.0.0.1 port 33758 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:58:11.833619 sshd-session[4992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:58:11.838010 systemd-logind[1488]: New session 17 of user core.
May 13 23:58:11.846218 systemd[1]: Started session-17.scope - Session 17 of User core.
May 13 23:58:11.960225 sshd[4994]: Connection closed by 10.0.0.1 port 33758
May 13 23:58:11.960575 sshd-session[4992]: pam_unix(sshd:session): session closed for user core
May 13 23:58:11.964340 systemd[1]: sshd@16-10.0.0.86:22-10.0.0.1:33758.service: Deactivated successfully.
May 13 23:58:11.966937 systemd[1]: session-17.scope: Deactivated successfully.
May 13 23:58:11.967641 systemd-logind[1488]: Session 17 logged out. Waiting for processes to exit.
May 13 23:58:11.968622 systemd-logind[1488]: Removed session 17.
May 13 23:58:14.089025 containerd[1508]: time="2025-05-13T23:58:14.087745274Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6f605b59f847f2a5ae09fb8dadcf071366b906c465cb9561ffc2f2504237c503\" id:\"8e1bffbd9ff8e24d997471de879edea7e6afa79c7bdd11c304d65c2e22b94dff\" pid:5017 exited_at:{seconds:1747180694 nanos:87285911}"
May 13 23:58:16.973940 systemd[1]: Started sshd@17-10.0.0.86:22-10.0.0.1:44774.service - OpenSSH per-connection server daemon (10.0.0.1:44774).
May 13 23:58:17.035234 sshd[5032]: Accepted publickey for core from 10.0.0.1 port 44774 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:58:17.036852 sshd-session[5032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:58:17.042035 systemd-logind[1488]: New session 18 of user core.
May 13 23:58:17.056258 systemd[1]: Started session-18.scope - Session 18 of User core.
May 13 23:58:17.196407 sshd[5034]: Connection closed by 10.0.0.1 port 44774
May 13 23:58:17.197389 sshd-session[5032]: pam_unix(sshd:session): session closed for user core
May 13 23:58:17.201548 systemd[1]: sshd@17-10.0.0.86:22-10.0.0.1:44774.service: Deactivated successfully.
May 13 23:58:17.203792 systemd[1]: session-18.scope: Deactivated successfully.
May 13 23:58:17.204485 systemd-logind[1488]: Session 18 logged out. Waiting for processes to exit.
May 13 23:58:17.205364 systemd-logind[1488]: Removed session 18.
May 13 23:58:20.035855 containerd[1508]: time="2025-05-13T23:58:20.035806981Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13bf07828e72a451eb84c8d898c18ef9179d4a0154747ef8a5e2000ddcb4a8a6\" id:\"4601303f6f00e7d7911fdec159c3a456f79bc9d5ff11319b64f899867ba542ee\" pid:5061 exited_at:{seconds:1747180700 nanos:35382857}"
May 13 23:58:22.214072 systemd[1]: Started sshd@18-10.0.0.86:22-10.0.0.1:44786.service - OpenSSH per-connection server daemon (10.0.0.1:44786).
May 13 23:58:22.271010 sshd[5072]: Accepted publickey for core from 10.0.0.1 port 44786 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:58:22.272665 sshd-session[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:58:22.280707 systemd-logind[1488]: New session 19 of user core.
May 13 23:58:22.285258 systemd[1]: Started session-19.scope - Session 19 of User core.
May 13 23:58:22.418571 sshd[5074]: Connection closed by 10.0.0.1 port 44786
May 13 23:58:22.418926 sshd-session[5072]: pam_unix(sshd:session): session closed for user core
May 13 23:58:22.433233 systemd[1]: sshd@18-10.0.0.86:22-10.0.0.1:44786.service: Deactivated successfully.
May 13 23:58:22.435437 systemd[1]: session-19.scope: Deactivated successfully.
May 13 23:58:22.437179 systemd-logind[1488]: Session 19 logged out. Waiting for processes to exit.
May 13 23:58:22.438829 systemd[1]: Started sshd@19-10.0.0.86:22-10.0.0.1:44800.service - OpenSSH per-connection server daemon (10.0.0.1:44800).
May 13 23:58:22.440037 systemd-logind[1488]: Removed session 19.
May 13 23:58:22.487745 sshd[5086]: Accepted publickey for core from 10.0.0.1 port 44800 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:58:22.489427 sshd-session[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:58:22.494475 systemd-logind[1488]: New session 20 of user core.
May 13 23:58:22.506217 systemd[1]: Started session-20.scope - Session 20 of User core.
May 13 23:58:22.780660 sshd[5089]: Connection closed by 10.0.0.1 port 44800
May 13 23:58:22.781034 sshd-session[5086]: pam_unix(sshd:session): session closed for user core
May 13 23:58:22.800187 systemd[1]: sshd@19-10.0.0.86:22-10.0.0.1:44800.service: Deactivated successfully.
May 13 23:58:22.802631 systemd[1]: session-20.scope: Deactivated successfully.
May 13 23:58:22.804255 systemd-logind[1488]: Session 20 logged out. Waiting for processes to exit.
May 13 23:58:22.805670 systemd[1]: Started sshd@20-10.0.0.86:22-10.0.0.1:44816.service - OpenSSH per-connection server daemon (10.0.0.1:44816).
May 13 23:58:22.806704 systemd-logind[1488]: Removed session 20.
May 13 23:58:22.861696 sshd[5102]: Accepted publickey for core from 10.0.0.1 port 44816 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:58:22.863567 sshd-session[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:58:22.868777 systemd-logind[1488]: New session 21 of user core.
May 13 23:58:22.881207 systemd[1]: Started session-21.scope - Session 21 of User core.
May 13 23:58:24.781542 sshd[5105]: Connection closed by 10.0.0.1 port 44816
May 13 23:58:24.783376 sshd-session[5102]: pam_unix(sshd:session): session closed for user core
May 13 23:58:24.799064 systemd[1]: Started sshd@21-10.0.0.86:22-10.0.0.1:44828.service - OpenSSH per-connection server daemon (10.0.0.1:44828).
May 13 23:58:24.799741 systemd[1]: sshd@20-10.0.0.86:22-10.0.0.1:44816.service: Deactivated successfully.
May 13 23:58:24.806694 systemd[1]: session-21.scope: Deactivated successfully.
May 13 23:58:24.806983 systemd[1]: session-21.scope: Consumed 609ms CPU time, 70.3M memory peak.
May 13 23:58:24.809568 systemd-logind[1488]: Session 21 logged out. Waiting for processes to exit.
May 13 23:58:24.814816 systemd-logind[1488]: Removed session 21.
May 13 23:58:24.856273 sshd[5122]: Accepted publickey for core from 10.0.0.1 port 44828 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:58:24.857846 sshd-session[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:58:24.862501 systemd-logind[1488]: New session 22 of user core.
May 13 23:58:24.870237 systemd[1]: Started session-22.scope - Session 22 of User core.
May 13 23:58:25.198462 sshd[5127]: Connection closed by 10.0.0.1 port 44828
May 13 23:58:25.198113 sshd-session[5122]: pam_unix(sshd:session): session closed for user core
May 13 23:58:25.213263 systemd[1]: sshd@21-10.0.0.86:22-10.0.0.1:44828.service: Deactivated successfully.
May 13 23:58:25.215402 systemd[1]: session-22.scope: Deactivated successfully.
May 13 23:58:25.216422 systemd-logind[1488]: Session 22 logged out. Waiting for processes to exit.
May 13 23:58:25.218930 systemd[1]: Started sshd@22-10.0.0.86:22-10.0.0.1:44850.service - OpenSSH per-connection server daemon (10.0.0.1:44850).
May 13 23:58:25.219891 systemd-logind[1488]: Removed session 22.
May 13 23:58:25.260257 sshd[5138]: Accepted publickey for core from 10.0.0.1 port 44850 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:58:25.261972 sshd-session[5138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:58:25.267103 systemd-logind[1488]: New session 23 of user core.
May 13 23:58:25.285278 systemd[1]: Started session-23.scope - Session 23 of User core.
May 13 23:58:25.403987 sshd[5141]: Connection closed by 10.0.0.1 port 44850
May 13 23:58:25.404357 sshd-session[5138]: pam_unix(sshd:session): session closed for user core
May 13 23:58:25.409067 systemd[1]: sshd@22-10.0.0.86:22-10.0.0.1:44850.service: Deactivated successfully.
May 13 23:58:25.411474 systemd[1]: session-23.scope: Deactivated successfully.
May 13 23:58:25.412342 systemd-logind[1488]: Session 23 logged out. Waiting for processes to exit.
May 13 23:58:25.413413 systemd-logind[1488]: Removed session 23.
May 13 23:58:30.418520 systemd[1]: Started sshd@23-10.0.0.86:22-10.0.0.1:53890.service - OpenSSH per-connection server daemon (10.0.0.1:53890).
May 13 23:58:30.467511 sshd[5156]: Accepted publickey for core from 10.0.0.1 port 53890 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:58:30.469293 sshd-session[5156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:58:30.474725 systemd-logind[1488]: New session 24 of user core.
May 13 23:58:30.481223 systemd[1]: Started session-24.scope - Session 24 of User core.
May 13 23:58:30.602893 sshd[5158]: Connection closed by 10.0.0.1 port 53890
May 13 23:58:30.603320 sshd-session[5156]: pam_unix(sshd:session): session closed for user core
May 13 23:58:30.608108 systemd[1]: sshd@23-10.0.0.86:22-10.0.0.1:53890.service: Deactivated successfully.
May 13 23:58:30.610188 systemd[1]: session-24.scope: Deactivated successfully.
May 13 23:58:30.610959 systemd-logind[1488]: Session 24 logged out. Waiting for processes to exit.
May 13 23:58:30.611806 systemd-logind[1488]: Removed session 24.
May 13 23:58:32.923819 containerd[1508]: time="2025-05-13T23:58:32.923772468Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13bf07828e72a451eb84c8d898c18ef9179d4a0154747ef8a5e2000ddcb4a8a6\" id:\"42b2d3a7fdae9ce513fde90bae634ccb6058971160612d2837772eb79176a71f\" pid:5185 exited_at:{seconds:1747180712 nanos:923534086}"
May 13 23:58:35.619963 systemd[1]: Started sshd@24-10.0.0.86:22-10.0.0.1:53906.service - OpenSSH per-connection server daemon (10.0.0.1:53906).
May 13 23:58:35.662623 sshd[5195]: Accepted publickey for core from 10.0.0.1 port 53906 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:58:35.664068 sshd-session[5195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:58:35.668421 systemd-logind[1488]: New session 25 of user core.
May 13 23:58:35.686230 systemd[1]: Started session-25.scope - Session 25 of User core.
May 13 23:58:35.800135 sshd[5197]: Connection closed by 10.0.0.1 port 53906
May 13 23:58:35.800467 sshd-session[5195]: pam_unix(sshd:session): session closed for user core
May 13 23:58:35.804912 systemd[1]: sshd@24-10.0.0.86:22-10.0.0.1:53906.service: Deactivated successfully.
May 13 23:58:35.807243 systemd[1]: session-25.scope: Deactivated successfully.
May 13 23:58:35.808012 systemd-logind[1488]: Session 25 logged out. Waiting for processes to exit.
May 13 23:58:35.808907 systemd-logind[1488]: Removed session 25.
May 13 23:58:40.813729 systemd[1]: Started sshd@25-10.0.0.86:22-10.0.0.1:43116.service - OpenSSH per-connection server daemon (10.0.0.1:43116).
May 13 23:58:40.859610 sshd[5212]: Accepted publickey for core from 10.0.0.1 port 43116 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:58:40.861203 sshd-session[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:58:40.865683 systemd-logind[1488]: New session 26 of user core.
May 13 23:58:40.872213 systemd[1]: Started session-26.scope - Session 26 of User core.
May 13 23:58:40.978759 sshd[5214]: Connection closed by 10.0.0.1 port 43116
May 13 23:58:40.979183 sshd-session[5212]: pam_unix(sshd:session): session closed for user core
May 13 23:58:40.983628 systemd[1]: sshd@25-10.0.0.86:22-10.0.0.1:43116.service: Deactivated successfully.
May 13 23:58:40.985875 systemd[1]: session-26.scope: Deactivated successfully.
May 13 23:58:40.986754 systemd-logind[1488]: Session 26 logged out. Waiting for processes to exit.
May 13 23:58:40.987629 systemd-logind[1488]: Removed session 26.
May 13 23:58:44.081860 containerd[1508]: time="2025-05-13T23:58:44.081803015Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6f605b59f847f2a5ae09fb8dadcf071366b906c465cb9561ffc2f2504237c503\" id:\"6a950cdc07574ca662e2e894faec58694293ed587668c6a2eef387509426400a\" pid:5238 exited_at:{seconds:1747180724 nanos:80925198}"
May 13 23:58:45.996181 systemd[1]: Started sshd@26-10.0.0.86:22-10.0.0.1:43120.service - OpenSSH per-connection server daemon (10.0.0.1:43120).
May 13 23:58:46.065600 sshd[5254]: Accepted publickey for core from 10.0.0.1 port 43120 ssh2: RSA SHA256:SlU06is2ZbkjT7DPP4OtiEpWhaMgwJIZpzShXEJoVJU
May 13 23:58:46.067411 sshd-session[5254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:58:46.071960 systemd-logind[1488]: New session 27 of user core.
May 13 23:58:46.082312 systemd[1]: Started session-27.scope - Session 27 of User core.
May 13 23:58:46.267771 sshd[5256]: Connection closed by 10.0.0.1 port 43120
May 13 23:58:46.267988 sshd-session[5254]: pam_unix(sshd:session): session closed for user core
May 13 23:58:46.271915 systemd[1]: sshd@26-10.0.0.86:22-10.0.0.1:43120.service: Deactivated successfully.
May 13 23:58:46.274294 systemd[1]: session-27.scope: Deactivated successfully.
May 13 23:58:46.275094 systemd-logind[1488]: Session 27 logged out. Waiting for processes to exit.
May 13 23:58:46.276064 systemd-logind[1488]: Removed session 27.