May 10 09:54:09.022487 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sat May 10 08:33:52 -00 2025
May 10 09:54:09.022520 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cdff7a1e66558670c3a31fd90f395811dccc4cb131ce51930f033b8634f7f080
May 10 09:54:09.022532 kernel: BIOS-provided physical RAM map:
May 10 09:54:09.022541 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 10 09:54:09.022549 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 10 09:54:09.022558 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 10 09:54:09.022569 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 10 09:54:09.022581 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 10 09:54:09.022590 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 10 09:54:09.022599 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 10 09:54:09.022608 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 10 09:54:09.022617 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 10 09:54:09.022629 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 10 09:54:09.022638 kernel: NX (Execute Disable) protection: active
May 10 09:54:09.022653 kernel: APIC: Static calls initialized
May 10 09:54:09.022666 kernel: SMBIOS 2.8 present.
May 10 09:54:09.022676 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 10 09:54:09.022685 kernel: Hypervisor detected: KVM
May 10 09:54:09.022695 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 10 09:54:09.022704 kernel: kvm-clock: using sched offset of 4721123222 cycles
May 10 09:54:09.022714 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 10 09:54:09.022724 kernel: tsc: Detected 2794.748 MHz processor
May 10 09:54:09.022735 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 10 09:54:09.022761 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 10 09:54:09.022771 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 10 09:54:09.022781 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 10 09:54:09.022791 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 10 09:54:09.022801 kernel: Using GB pages for direct mapping
May 10 09:54:09.022811 kernel: ACPI: Early table checksum verification disabled
May 10 09:54:09.022821 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 10 09:54:09.022831 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 09:54:09.022845 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 10 09:54:09.022855 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 09:54:09.022865 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 10 09:54:09.022875 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 09:54:09.022885 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 09:54:09.022895 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 09:54:09.022904 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 09:54:09.022914 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 10 09:54:09.022925 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 10 09:54:09.022942 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 10 09:54:09.022953 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 10 09:54:09.022963 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 10 09:54:09.022973 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 10 09:54:09.022983 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 10 09:54:09.022997 kernel: No NUMA configuration found
May 10 09:54:09.023010 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 10 09:54:09.023021 kernel: NODE_DATA(0) allocated [mem 0x9cfd4000-0x9cfdbfff]
May 10 09:54:09.023031 kernel: Zone ranges:
May 10 09:54:09.023041 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 10 09:54:09.023051 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 10 09:54:09.023062 kernel: Normal empty
May 10 09:54:09.023072 kernel: Device empty
May 10 09:54:09.023082 kernel: Movable zone start for each node
May 10 09:54:09.023092 kernel: Early memory node ranges
May 10 09:54:09.023110 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 10 09:54:09.023124 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 10 09:54:09.023157 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 10 09:54:09.023184 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 10 09:54:09.023204 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 10 09:54:09.023214 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 10 09:54:09.023224 kernel: ACPI: PM-Timer IO Port: 0x608
May 10 09:54:09.023235 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 10 09:54:09.023245 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 10 09:54:09.023260 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 10 09:54:09.023270 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 10 09:54:09.023281 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 10 09:54:09.023291 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 10 09:54:09.023301 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 10 09:54:09.023311 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 10 09:54:09.023322 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 10 09:54:09.023331 kernel: TSC deadline timer available
May 10 09:54:09.023341 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 10 09:54:09.023352 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 10 09:54:09.023372 kernel: kvm-guest: KVM setup pv remote TLB flush
May 10 09:54:09.023383 kernel: kvm-guest: setup PV sched yield
May 10 09:54:09.023393 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 10 09:54:09.023404 kernel: Booting paravirtualized kernel on KVM
May 10 09:54:09.023415 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 10 09:54:09.023426 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 10 09:54:09.023447 kernel: percpu: Embedded 58 pages/cpu s197416 r8192 d31960 u524288
May 10 09:54:09.023460 kernel: pcpu-alloc: s197416 r8192 d31960 u524288 alloc=1*2097152
May 10 09:54:09.023480 kernel: pcpu-alloc: [0] 0 1 2 3
May 10 09:54:09.023495 kernel: kvm-guest: PV spinlocks enabled
May 10 09:54:09.023506 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 10 09:54:09.023518 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cdff7a1e66558670c3a31fd90f395811dccc4cb131ce51930f033b8634f7f080
May 10 09:54:09.023529 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 10 09:54:09.023540 kernel: random: crng init done
May 10 09:54:09.023550 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 10 09:54:09.023561 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 10 09:54:09.023571 kernel: Fallback order for Node 0: 0
May 10 09:54:09.023586 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
May 10 09:54:09.023596 kernel: Policy zone: DMA32
May 10 09:54:09.023608 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 10 09:54:09.023621 kernel: Memory: 2436632K/2571752K available (14336K kernel code, 2309K rwdata, 9044K rodata, 53680K init, 1596K bss, 134860K reserved, 0K cma-reserved)
May 10 09:54:09.023634 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 10 09:54:09.023644 kernel: ftrace: allocating 38190 entries in 150 pages
May 10 09:54:09.023654 kernel: ftrace: allocated 150 pages with 4 groups
May 10 09:54:09.023664 kernel: Dynamic Preempt: voluntary
May 10 09:54:09.023674 kernel: rcu: Preemptible hierarchical RCU implementation.
May 10 09:54:09.023693 kernel: rcu: RCU event tracing is enabled.
May 10 09:54:09.023704 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 10 09:54:09.023715 kernel: Trampoline variant of Tasks RCU enabled.
May 10 09:54:09.023725 kernel: Rude variant of Tasks RCU enabled.
May 10 09:54:09.023736 kernel: Tracing variant of Tasks RCU enabled.
May 10 09:54:09.023761 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 10 09:54:09.023772 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 10 09:54:09.023782 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 10 09:54:09.023792 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 10 09:54:09.023807 kernel: Console: colour VGA+ 80x25
May 10 09:54:09.023818 kernel: printk: console [ttyS0] enabled
May 10 09:54:09.023828 kernel: ACPI: Core revision 20230628
May 10 09:54:09.023839 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 10 09:54:09.023850 kernel: APIC: Switch to symmetric I/O mode setup
May 10 09:54:09.023860 kernel: x2apic enabled
May 10 09:54:09.023870 kernel: APIC: Switched APIC routing to: physical x2apic
May 10 09:54:09.023881 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 10 09:54:09.023891 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 10 09:54:09.023915 kernel: kvm-guest: setup PV IPIs
May 10 09:54:09.023925 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 10 09:54:09.023936 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 10 09:54:09.023950 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 10 09:54:09.023961 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 10 09:54:09.023972 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 10 09:54:09.023982 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 10 09:54:09.023993 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 10 09:54:09.024004 kernel: Spectre V2 : Mitigation: Retpolines
May 10 09:54:09.024018 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 10 09:54:09.024029 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 10 09:54:09.024044 kernel: RETBleed: Mitigation: untrained return thunk
May 10 09:54:09.024055 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 10 09:54:09.024066 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 10 09:54:09.024076 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 10 09:54:09.024087 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 10 09:54:09.024101 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 10 09:54:09.024112 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 10 09:54:09.024122 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 10 09:54:09.024147 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 10 09:54:09.024160 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 10 09:54:09.024171 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 10 09:54:09.024182 kernel: Freeing SMP alternatives memory: 32K
May 10 09:54:09.024192 kernel: pid_max: default: 32768 minimum: 301
May 10 09:54:09.024203 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 10 09:54:09.024217 kernel: landlock: Up and running.
May 10 09:54:09.024224 kernel: SELinux: Initializing.
May 10 09:54:09.024232 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 10 09:54:09.024240 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 10 09:54:09.024248 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 10 09:54:09.024256 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 10 09:54:09.024264 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 10 09:54:09.024272 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 10 09:54:09.024280 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 10 09:54:09.024294 kernel: ... version: 0
May 10 09:54:09.024302 kernel: ... bit width: 48
May 10 09:54:09.024310 kernel: ... generic registers: 6
May 10 09:54:09.024318 kernel: ... value mask: 0000ffffffffffff
May 10 09:54:09.024325 kernel: ... max period: 00007fffffffffff
May 10 09:54:09.024333 kernel: ... fixed-purpose events: 0
May 10 09:54:09.024341 kernel: ... event mask: 000000000000003f
May 10 09:54:09.024348 kernel: signal: max sigframe size: 1776
May 10 09:54:09.024356 kernel: rcu: Hierarchical SRCU implementation.
May 10 09:54:09.024367 kernel: rcu: Max phase no-delay instances is 400.
May 10 09:54:09.024375 kernel: smp: Bringing up secondary CPUs ...
May 10 09:54:09.024383 kernel: smpboot: x86: Booting SMP configuration:
May 10 09:54:09.024390 kernel: .... node #0, CPUs: #1 #2 #3
May 10 09:54:09.024398 kernel: smp: Brought up 1 node, 4 CPUs
May 10 09:54:09.024406 kernel: smpboot: Max logical packages: 1
May 10 09:54:09.024414 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 10 09:54:09.024422 kernel: devtmpfs: initialized
May 10 09:54:09.024429 kernel: x86/mm: Memory block size: 128MB
May 10 09:54:09.024440 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 10 09:54:09.024448 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 10 09:54:09.024456 kernel: pinctrl core: initialized pinctrl subsystem
May 10 09:54:09.024464 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 10 09:54:09.024471 kernel: audit: initializing netlink subsys (disabled)
May 10 09:54:09.024479 kernel: audit: type=2000 audit(1746870845.084:1): state=initialized audit_enabled=0 res=1
May 10 09:54:09.024487 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 10 09:54:09.024495 kernel: thermal_sys: Registered thermal governor 'user_space'
May 10 09:54:09.024503 kernel: cpuidle: using governor menu
May 10 09:54:09.024513 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 10 09:54:09.024521 kernel: dca service started, version 1.12.1
May 10 09:54:09.024529 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 10 09:54:09.024536 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 10 09:54:09.024544 kernel: PCI: Using configuration type 1 for base access
May 10 09:54:09.024552 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 10 09:54:09.024560 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 10 09:54:09.024568 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 10 09:54:09.024576 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 10 09:54:09.024586 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 10 09:54:09.024594 kernel: ACPI: Added _OSI(Module Device)
May 10 09:54:09.024602 kernel: ACPI: Added _OSI(Processor Device)
May 10 09:54:09.024625 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 10 09:54:09.024634 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 10 09:54:09.024642 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 10 09:54:09.024649 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 10 09:54:09.024657 kernel: ACPI: Interpreter enabled
May 10 09:54:09.024665 kernel: ACPI: PM: (supports S0 S3 S5)
May 10 09:54:09.024676 kernel: ACPI: Using IOAPIC for interrupt routing
May 10 09:54:09.024684 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 10 09:54:09.024692 kernel: PCI: Using E820 reservations for host bridge windows
May 10 09:54:09.024700 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 10 09:54:09.024708 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 10 09:54:09.024929 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 10 09:54:09.025065 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 10 09:54:09.025686 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 10 09:54:09.025705 kernel: PCI host bridge to bus 0000:00
May 10 09:54:09.025909 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 10 09:54:09.026067 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 10 09:54:09.026250 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 10 09:54:09.026404 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 10 09:54:09.026555 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 10 09:54:09.026708 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 10 09:54:09.026880 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 10 09:54:09.027096 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 10 09:54:09.027309 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 10 09:54:09.027489 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 10 09:54:09.027671 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 10 09:54:09.028605 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 10 09:54:09.028766 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 10 09:54:09.028925 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 10 09:54:09.029058 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
May 10 09:54:09.029208 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 10 09:54:09.029353 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 10 09:54:09.029506 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 10 09:54:09.029640 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
May 10 09:54:09.029786 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 10 09:54:09.029918 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 10 09:54:09.030065 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 10 09:54:09.030264 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
May 10 09:54:09.030400 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
May 10 09:54:09.030530 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
May 10 09:54:09.030660 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 10 09:54:09.030824 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 10 09:54:09.030958 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 10 09:54:09.031122 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 10 09:54:09.031271 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
May 10 09:54:09.031401 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
May 10 09:54:09.031552 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 10 09:54:09.031689 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 10 09:54:09.031700 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 10 09:54:09.031709 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 10 09:54:09.031717 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 10 09:54:09.031725 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 10 09:54:09.031733 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 10 09:54:09.031749 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 10 09:54:09.031757 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 10 09:54:09.031764 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 10 09:54:09.031776 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 10 09:54:09.031784 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 10 09:54:09.031792 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 10 09:54:09.031800 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 10 09:54:09.031808 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 10 09:54:09.031816 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 10 09:54:09.031824 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 10 09:54:09.031832 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 10 09:54:09.031840 kernel: iommu: Default domain type: Translated
May 10 09:54:09.031850 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 10 09:54:09.031858 kernel: PCI: Using ACPI for IRQ routing
May 10 09:54:09.031866 kernel: PCI: pci_cache_line_size set to 64 bytes
May 10 09:54:09.031874 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 10 09:54:09.031882 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 10 09:54:09.032016 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 10 09:54:09.032167 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 10 09:54:09.032298 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 10 09:54:09.032314 kernel: vgaarb: loaded
May 10 09:54:09.032323 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 10 09:54:09.032331 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 10 09:54:09.032339 kernel: clocksource: Switched to clocksource kvm-clock
May 10 09:54:09.032347 kernel: VFS: Disk quotas dquot_6.6.0
May 10 09:54:09.032355 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 10 09:54:09.032363 kernel: pnp: PnP ACPI init
May 10 09:54:09.032517 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 10 09:54:09.032533 kernel: pnp: PnP ACPI: found 6 devices
May 10 09:54:09.032542 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 10 09:54:09.032550 kernel: NET: Registered PF_INET protocol family
May 10 09:54:09.032558 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 10 09:54:09.032566 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 10 09:54:09.032574 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 10 09:54:09.032582 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 10 09:54:09.032590 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 10 09:54:09.032598 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 10 09:54:09.032609 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 10 09:54:09.032617 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 10 09:54:09.032625 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 10 09:54:09.032633 kernel: NET: Registered PF_XDP protocol family
May 10 09:54:09.032763 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 10 09:54:09.032884 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 10 09:54:09.033003 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 10 09:54:09.033121 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 10 09:54:09.033269 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 10 09:54:09.033391 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 10 09:54:09.033402 kernel: PCI: CLS 0 bytes, default 64
May 10 09:54:09.033410 kernel: Initialise system trusted keyrings
May 10 09:54:09.033418 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 10 09:54:09.033426 kernel: Key type asymmetric registered
May 10 09:54:09.033434 kernel: Asymmetric key parser 'x509' registered
May 10 09:54:09.033443 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 10 09:54:09.033451 kernel: io scheduler mq-deadline registered
May 10 09:54:09.033463 kernel: io scheduler kyber registered
May 10 09:54:09.033471 kernel: io scheduler bfq registered
May 10 09:54:09.033479 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 10 09:54:09.033487 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 10 09:54:09.033495 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 10 09:54:09.033504 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 10 09:54:09.033511 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 10 09:54:09.033519 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 10 09:54:09.033527 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 10 09:54:09.033538 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 10 09:54:09.033546 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 10 09:54:09.033554 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 10 09:54:09.033713 kernel: rtc_cmos 00:04: RTC can wake from S4
May 10 09:54:09.033849 kernel: rtc_cmos 00:04: registered as rtc0
May 10 09:54:09.033974 kernel: rtc_cmos 00:04: setting system clock to 2025-05-10T09:54:08 UTC (1746870848)
May 10 09:54:09.034097 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 10 09:54:09.034108 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 10 09:54:09.034121 kernel: NET: Registered PF_INET6 protocol family
May 10 09:54:09.034129 kernel: Segment Routing with IPv6
May 10 09:54:09.034265 kernel: In-situ OAM (IOAM) with IPv6
May 10 09:54:09.034274 kernel: NET: Registered PF_PACKET protocol family
May 10 09:54:09.034282 kernel: Key type dns_resolver registered
May 10 09:54:09.034290 kernel: IPI shorthand broadcast: enabled
May 10 09:54:09.034298 kernel: sched_clock: Marking stable (3198002700, 129082082)->(3353834897, -26750115)
May 10 09:54:09.034306 kernel: registered taskstats version 1
May 10 09:54:09.034314 kernel: Loading compiled-in X.509 certificates
May 10 09:54:09.034326 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: f8080549509982706805ea0b811f8f4bcb4a274e'
May 10 09:54:09.034334 kernel: Key type .fscrypt registered
May 10 09:54:09.034343 kernel: Key type fscrypt-provisioning registered
May 10 09:54:09.034351 kernel: ima: No TPM chip found, activating TPM-bypass!
May 10 09:54:09.034358 kernel: ima: Allocated hash algorithm: sha1
May 10 09:54:09.034366 kernel: ima: No architecture policies found
May 10 09:54:09.034374 kernel: clk: Disabling unused clocks
May 10 09:54:09.034382 kernel: Warning: unable to open an initial console.
May 10 09:54:09.034390 kernel: Freeing unused kernel image (initmem) memory: 53680K
May 10 09:54:09.034401 kernel: Write protecting the kernel read-only data: 24576k
May 10 09:54:09.034409 kernel: Freeing unused kernel image (rodata/data gap) memory: 1196K
May 10 09:54:09.034417 kernel: Run /init as init process
May 10 09:54:09.034425 kernel: with arguments:
May 10 09:54:09.034433 kernel: /init
May 10 09:54:09.034441 kernel: with environment:
May 10 09:54:09.034448 kernel: HOME=/
May 10 09:54:09.034456 kernel: TERM=linux
May 10 09:54:09.034464 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 10 09:54:09.034479 systemd[1]: Successfully made /usr/ read-only.
May 10 09:54:09.034491 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 10 09:54:09.034500 systemd[1]: Detected virtualization kvm.
May 10 09:54:09.034508 systemd[1]: Detected architecture x86-64.
May 10 09:54:09.034517 systemd[1]: Running in initrd.
May 10 09:54:09.034525 systemd[1]: No hostname configured, using default hostname.
May 10 09:54:09.034534 systemd[1]: Hostname set to .
May 10 09:54:09.034546 systemd[1]: Initializing machine ID from VM UUID.
May 10 09:54:09.034554 systemd[1]: Queued start job for default target initrd.target.
May 10 09:54:09.034563 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 10 09:54:09.034584 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 10 09:54:09.034596 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 10 09:54:09.034605 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 10 09:54:09.034617 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 10 09:54:09.034626 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 10 09:54:09.034636 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 10 09:54:09.034645 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 10 09:54:09.034654 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 10 09:54:09.034663 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 10 09:54:09.034671 systemd[1]: Reached target paths.target - Path Units.
May 10 09:54:09.034683 systemd[1]: Reached target slices.target - Slice Units.
May 10 09:54:09.034692 systemd[1]: Reached target swap.target - Swaps.
May 10 09:54:09.034700 systemd[1]: Reached target timers.target - Timer Units.
May 10 09:54:09.034709 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 10 09:54:09.034718 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 10 09:54:09.034727 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 10 09:54:09.034736 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 10 09:54:09.034752 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 10 09:54:09.034761 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 10 09:54:09.034772 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 10 09:54:09.034781 systemd[1]: Reached target sockets.target - Socket Units. May 10 09:54:09.034790 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 10 09:54:09.034799 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 10 09:54:09.034808 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 10 09:54:09.034817 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 10 09:54:09.034826 systemd[1]: Starting systemd-fsck-usr.service... May 10 09:54:09.034834 systemd[1]: Starting systemd-journald.service - Journal Service... May 10 09:54:09.034846 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 10 09:54:09.034855 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 10 09:54:09.034863 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 10 09:54:09.034897 systemd-journald[194]: Collecting audit messages is disabled. May 10 09:54:09.034922 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 10 09:54:09.034936 systemd[1]: Finished systemd-fsck-usr.service. May 10 09:54:09.034945 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 10 09:54:09.034954 systemd-journald[194]: Journal started May 10 09:54:09.034976 systemd-journald[194]: Runtime Journal (/run/log/journal/b8605973a8a04f409c0c8dcc8c1f7ce3) is 6M, max 48.6M, 42.5M free. 
May 10 09:54:09.023497 systemd-modules-load[196]: Inserted module 'overlay' May 10 09:54:09.038170 systemd[1]: Started systemd-journald.service - Journal Service. May 10 09:54:09.040383 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 10 09:54:09.041365 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 10 09:54:09.047901 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 10 09:54:09.083519 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 10 09:54:09.083541 kernel: Bridge firewalling registered May 10 09:54:09.053574 systemd-modules-load[196]: Inserted module 'br_netfilter' May 10 09:54:09.089320 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 10 09:54:09.089830 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 10 09:54:09.094065 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 10 09:54:09.119860 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 10 09:54:09.121197 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 10 09:54:09.124442 systemd-tmpfiles[210]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 10 09:54:09.130872 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 10 09:54:09.149306 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 10 09:54:09.191380 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 10 09:54:09.218377 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 10 09:54:09.270427 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 10 09:54:09.284001 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cdff7a1e66558670c3a31fd90f395811dccc4cb131ce51930f033b8634f7f080 May 10 09:54:09.362490 systemd-resolved[237]: Positive Trust Anchors: May 10 09:54:09.362507 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 09:54:09.362538 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 10 09:54:09.388159 kernel: SCSI subsystem initialized May 10 09:54:09.390037 systemd-resolved[237]: Defaulting to hostname 'linux'. May 10 09:54:09.392674 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 10 09:54:09.393022 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 10 09:54:09.400161 kernel: Loading iSCSI transport class v2.0-870. 
May 10 09:54:09.430160 kernel: iscsi: registered transport (tcp) May 10 09:54:09.451415 kernel: iscsi: registered transport (qla4xxx) May 10 09:54:09.451440 kernel: QLogic iSCSI HBA Driver May 10 09:54:09.472878 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 10 09:54:09.512293 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 10 09:54:09.517466 systemd[1]: Reached target network-pre.target - Preparation for Network. May 10 09:54:09.573764 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 10 09:54:09.593648 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 10 09:54:09.654179 kernel: raid6: avx2x4 gen() 30414 MB/s May 10 09:54:09.671168 kernel: raid6: avx2x2 gen() 31157 MB/s May 10 09:54:09.688264 kernel: raid6: avx2x1 gen() 25602 MB/s May 10 09:54:09.688295 kernel: raid6: using algorithm avx2x2 gen() 31157 MB/s May 10 09:54:09.720359 kernel: raid6: .... xor() 19743 MB/s, rmw enabled May 10 09:54:09.720484 kernel: raid6: using avx2x2 recovery algorithm May 10 09:54:09.744181 kernel: xor: automatically using best checksumming function avx May 10 09:54:09.892179 kernel: Btrfs loaded, zoned=no, fsverity=no May 10 09:54:09.902604 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 10 09:54:09.917300 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 10 09:54:09.949290 systemd-udevd[447]: Using default interface naming scheme 'v255'. May 10 09:54:09.959288 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 10 09:54:09.963050 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 10 09:54:09.989975 dracut-pre-trigger[459]: rd.md=0: removing MD RAID activation May 10 09:54:10.023081 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 10 09:54:10.025826 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 10 09:54:10.108219 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 10 09:54:10.115385 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 10 09:54:10.161298 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 10 09:54:10.161588 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 May 10 09:54:10.170168 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 10 09:54:10.173169 kernel: cryptd: max_cpu_qlen set to 1000 May 10 09:54:10.180683 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 10 09:54:10.180754 kernel: GPT:9289727 != 19775487 May 10 09:54:10.180770 kernel: GPT:Alternate GPT header not at the end of the disk. May 10 09:54:10.180786 kernel: GPT:9289727 != 19775487 May 10 09:54:10.180800 kernel: GPT: Use GNU Parted to correct GPT errors. May 10 09:54:10.180816 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 09:54:10.182349 kernel: libata version 3.00 loaded. May 10 09:54:10.192167 kernel: ahci 0000:00:1f.2: version 3.0 May 10 09:54:10.196564 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 10 09:54:10.196617 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 10 09:54:10.196854 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 10 09:54:10.197861 kernel: AVX2 version of gcm_enc/dec engaged. May 10 09:54:10.199154 kernel: AES CTR mode by8 optimization enabled May 10 09:54:10.200170 kernel: scsi host0: ahci May 10 09:54:10.202167 kernel: scsi host1: ahci May 10 09:54:10.209428 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 10 09:54:10.211449 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
May 10 09:54:10.214980 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 10 09:54:10.219290 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (501) May 10 09:54:10.219892 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 10 09:54:10.223178 kernel: scsi host2: ahci May 10 09:54:10.223718 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 10 09:54:10.227289 kernel: BTRFS: device fsid 447a9416-2d70-470c-8858-df3b82fa5271 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (500) May 10 09:54:10.227305 kernel: scsi host3: ahci May 10 09:54:10.231163 kernel: scsi host4: ahci May 10 09:54:10.236816 kernel: scsi host5: ahci May 10 09:54:10.237113 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 May 10 09:54:10.237158 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 May 10 09:54:10.237174 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 May 10 09:54:10.237188 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 May 10 09:54:10.237202 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 May 10 09:54:10.237217 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 May 10 09:54:10.255661 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 10 09:54:10.292940 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 10 09:54:10.303680 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 10 09:54:10.321463 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 10 09:54:10.331259 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
May 10 09:54:10.332789 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 10 09:54:10.336752 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 10 09:54:10.366360 disk-uuid[586]: Primary Header is updated. May 10 09:54:10.366360 disk-uuid[586]: Secondary Entries is updated. May 10 09:54:10.366360 disk-uuid[586]: Secondary Header is updated. May 10 09:54:10.371193 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 09:54:10.375175 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 09:54:10.547683 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 10 09:54:10.547802 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 10 09:54:10.547818 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 10 09:54:10.549174 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 10 09:54:10.550182 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 10 09:54:10.551172 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 10 09:54:10.552185 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 10 09:54:10.552210 kernel: ata3.00: applying bridge limits May 10 09:54:10.553175 kernel: ata3.00: configured for UDMA/100 May 10 09:54:10.554174 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 10 09:54:10.602196 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 10 09:54:10.602609 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 10 09:54:10.615187 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 10 09:54:10.899038 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 10 09:54:10.901835 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 10 09:54:10.904575 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
May 10 09:54:10.906933 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 10 09:54:10.910224 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 10 09:54:10.941357 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 10 09:54:11.378163 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 09:54:11.378366 disk-uuid[587]: The operation has completed successfully. May 10 09:54:11.411028 systemd[1]: disk-uuid.service: Deactivated successfully. May 10 09:54:11.411203 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 10 09:54:11.452365 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 10 09:54:11.475455 sh[627]: Success May 10 09:54:11.497399 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 10 09:54:11.497431 kernel: device-mapper: uevent: version 1.0.3 May 10 09:54:11.498641 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 10 09:54:11.508166 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 10 09:54:11.541783 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 10 09:54:11.545810 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 10 09:54:11.569605 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 10 09:54:11.577984 kernel: BTRFS info (device dm-0): first mount of filesystem 447a9416-2d70-470c-8858-df3b82fa5271 May 10 09:54:11.578021 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 10 09:54:11.578036 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 10 09:54:11.579066 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 10 09:54:11.579817 kernel: BTRFS info (device dm-0): using free space tree May 10 09:54:11.585474 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 10 09:54:11.586678 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 10 09:54:11.588657 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 10 09:54:11.589614 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 10 09:54:11.591809 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 10 09:54:11.615709 kernel: BTRFS info (device vda6): first mount of filesystem b607f6a7-c99c-4217-b084-4c38060efb12 May 10 09:54:11.615751 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 10 09:54:11.615765 kernel: BTRFS info (device vda6): using free space tree May 10 09:54:11.619186 kernel: BTRFS info (device vda6): auto enabling async discard May 10 09:54:11.623199 kernel: BTRFS info (device vda6): last unmount of filesystem b607f6a7-c99c-4217-b084-4c38060efb12 May 10 09:54:11.629194 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 10 09:54:11.631453 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 10 09:54:11.722016 ignition[715]: Ignition 2.21.0 May 10 09:54:11.722036 ignition[715]: Stage: fetch-offline May 10 09:54:11.722089 ignition[715]: no configs at "/usr/lib/ignition/base.d" May 10 09:54:11.722101 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 10 09:54:11.722268 ignition[715]: parsed url from cmdline: "" May 10 09:54:11.722274 ignition[715]: no config URL provided May 10 09:54:11.722281 ignition[715]: reading system config file "/usr/lib/ignition/user.ign" May 10 09:54:11.722294 ignition[715]: no config at "/usr/lib/ignition/user.ign" May 10 09:54:11.722326 ignition[715]: op(1): [started] loading QEMU firmware config module May 10 09:54:11.722332 ignition[715]: op(1): executing: "modprobe" "qemu_fw_cfg" May 10 09:54:11.733327 ignition[715]: op(1): [finished] loading QEMU firmware config module May 10 09:54:11.739520 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 10 09:54:11.743114 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 10 09:54:11.782662 ignition[715]: parsing config with SHA512: 86f55306cf2270a4dc80d9b3e4b0db9b5ab1ed6151dd73664e8aaaca96c37479e3d2dd0a2203650bfbb4839155755c5771271b8c101961ca24ab995b61a4c37a May 10 09:54:11.789762 unknown[715]: fetched base config from "system" May 10 09:54:11.789778 unknown[715]: fetched user config from "qemu" May 10 09:54:11.790336 ignition[715]: fetch-offline: fetch-offline passed May 10 09:54:11.790439 ignition[715]: Ignition finished successfully May 10 09:54:11.794572 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 10 09:54:11.795302 systemd-networkd[816]: lo: Link UP May 10 09:54:11.795306 systemd-networkd[816]: lo: Gained carrier May 10 09:54:11.796981 systemd-networkd[816]: Enumeration completed May 10 09:54:11.797227 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 10 09:54:11.797386 systemd-networkd[816]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 09:54:11.797390 systemd-networkd[816]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 09:54:11.798572 systemd-networkd[816]: eth0: Link UP May 10 09:54:11.798575 systemd-networkd[816]: eth0: Gained carrier May 10 09:54:11.798583 systemd-networkd[816]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 09:54:11.799897 systemd[1]: Reached target network.target - Network. May 10 09:54:11.801485 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 10 09:54:11.802593 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 10 09:54:11.826178 systemd-networkd[816]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 10 09:54:11.852570 ignition[820]: Ignition 2.21.0 May 10 09:54:11.853175 ignition[820]: Stage: kargs May 10 09:54:11.854374 ignition[820]: no configs at "/usr/lib/ignition/base.d" May 10 09:54:11.854392 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 10 09:54:11.857912 ignition[820]: kargs: kargs passed May 10 09:54:11.857979 ignition[820]: Ignition finished successfully May 10 09:54:11.863725 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 10 09:54:11.865448 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 10 09:54:11.903298 ignition[829]: Ignition 2.21.0 May 10 09:54:11.903311 ignition[829]: Stage: disks May 10 09:54:11.903456 ignition[829]: no configs at "/usr/lib/ignition/base.d" May 10 09:54:11.903468 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 10 09:54:11.904977 ignition[829]: disks: disks passed May 10 09:54:11.905060 ignition[829]: Ignition finished successfully May 10 09:54:11.909337 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 10 09:54:11.910110 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 10 09:54:11.911720 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 10 09:54:11.912034 systemd[1]: Reached target local-fs.target - Local File Systems. May 10 09:54:11.912556 systemd[1]: Reached target sysinit.target - System Initialization. May 10 09:54:11.912887 systemd[1]: Reached target basic.target - Basic System. May 10 09:54:11.914612 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 10 09:54:11.946553 systemd-resolved[237]: Detected conflict on linux IN A 10.0.0.34 May 10 09:54:11.946570 systemd-resolved[237]: Hostname conflict, changing published hostname from 'linux' to 'linux5'. May 10 09:54:11.950222 systemd-fsck[839]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 10 09:54:11.954755 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 10 09:54:11.957561 systemd[1]: Mounting sysroot.mount - /sysroot... May 10 09:54:12.061164 kernel: EXT4-fs (vda9): mounted filesystem f8cce592-76ea-4219-9560-1ef21b28761f r/w with ordered data mode. Quota mode: none. May 10 09:54:12.061763 systemd[1]: Mounted sysroot.mount - /sysroot. May 10 09:54:12.062702 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 10 09:54:12.065988 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 10 09:54:12.067934 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 10 09:54:12.069351 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 10 09:54:12.069392 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 10 09:54:12.069416 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 10 09:54:12.088959 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 10 09:54:12.091937 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 10 09:54:12.094730 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (847) May 10 09:54:12.096752 kernel: BTRFS info (device vda6): first mount of filesystem b607f6a7-c99c-4217-b084-4c38060efb12 May 10 09:54:12.096777 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 10 09:54:12.096788 kernel: BTRFS info (device vda6): using free space tree May 10 09:54:12.100166 kernel: BTRFS info (device vda6): auto enabling async discard May 10 09:54:12.110675 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 10 09:54:12.143042 initrd-setup-root[871]: cut: /sysroot/etc/passwd: No such file or directory May 10 09:54:12.147624 initrd-setup-root[878]: cut: /sysroot/etc/group: No such file or directory May 10 09:54:12.152461 initrd-setup-root[885]: cut: /sysroot/etc/shadow: No such file or directory May 10 09:54:12.157437 initrd-setup-root[892]: cut: /sysroot/etc/gshadow: No such file or directory May 10 09:54:12.254060 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 10 09:54:12.256049 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 10 09:54:12.258415 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
May 10 09:54:12.278166 kernel: BTRFS info (device vda6): last unmount of filesystem b607f6a7-c99c-4217-b084-4c38060efb12 May 10 09:54:12.305993 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 10 09:54:12.418657 ignition[964]: INFO : Ignition 2.21.0 May 10 09:54:12.418657 ignition[964]: INFO : Stage: mount May 10 09:54:12.420806 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d" May 10 09:54:12.420806 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 10 09:54:12.420806 ignition[964]: INFO : mount: mount passed May 10 09:54:12.420806 ignition[964]: INFO : Ignition finished successfully May 10 09:54:12.427555 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 10 09:54:12.454272 systemd[1]: Starting ignition-files.service - Ignition (files)... May 10 09:54:12.577843 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 10 09:54:12.579962 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 10 09:54:12.601202 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (973) May 10 09:54:12.603288 kernel: BTRFS info (device vda6): first mount of filesystem b607f6a7-c99c-4217-b084-4c38060efb12 May 10 09:54:12.603313 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 10 09:54:12.603324 kernel: BTRFS info (device vda6): using free space tree May 10 09:54:12.607265 kernel: BTRFS info (device vda6): auto enabling async discard May 10 09:54:12.608869 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 10 09:54:12.646163 ignition[990]: INFO : Ignition 2.21.0 May 10 09:54:12.646163 ignition[990]: INFO : Stage: files May 10 09:54:12.647950 ignition[990]: INFO : no configs at "/usr/lib/ignition/base.d" May 10 09:54:12.647950 ignition[990]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 10 09:54:12.650757 ignition[990]: DEBUG : files: compiled without relabeling support, skipping May 10 09:54:12.652697 ignition[990]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 10 09:54:12.652697 ignition[990]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 10 09:54:12.657622 ignition[990]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 10 09:54:12.659129 ignition[990]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 10 09:54:12.661172 unknown[990]: wrote ssh authorized keys file for user: core May 10 09:54:12.662489 ignition[990]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 10 09:54:12.665027 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 10 09:54:12.680094 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 10 09:54:12.728658 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 10 09:54:12.833564 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 10 09:54:12.833564 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 10 09:54:12.861062 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 10 
09:54:12.861062 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 10 09:54:12.861062 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 10 09:54:12.861062 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 09:54:12.861062 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 09:54:12.861062 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 10 09:54:12.861062 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 10 09:54:12.861062 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 10 09:54:12.861062 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 10 09:54:12.861062 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 10 09:54:12.861062 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 10 09:54:12.861062 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 10 09:54:12.861062 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 May 10 09:54:13.226298 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 10 09:54:13.774311 systemd-networkd[816]: eth0: Gained IPv6LL May 10 09:54:13.906247 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 10 09:54:13.906247 ignition[990]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 10 09:54:13.923398 ignition[990]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 10 09:54:13.923398 ignition[990]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 10 09:54:13.923398 ignition[990]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 10 09:54:13.923398 ignition[990]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 10 09:54:13.923398 ignition[990]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 10 09:54:13.923398 ignition[990]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 10 09:54:13.923398 ignition[990]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 10 09:54:13.923398 ignition[990]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 10 09:54:13.943530 ignition[990]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 10 09:54:13.947568 ignition[990]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 10 
09:54:13.949128 ignition[990]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 10 09:54:13.949128 ignition[990]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 10 09:54:13.949128 ignition[990]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 10 09:54:13.949128 ignition[990]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 10 09:54:13.949128 ignition[990]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 10 09:54:13.949128 ignition[990]: INFO : files: files passed May 10 09:54:13.949128 ignition[990]: INFO : Ignition finished successfully May 10 09:54:13.951494 systemd[1]: Finished ignition-files.service - Ignition (files). May 10 09:54:13.955518 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 10 09:54:13.957885 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 10 09:54:13.971965 systemd[1]: ignition-quench.service: Deactivated successfully. May 10 09:54:13.972107 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 10 09:54:13.975304 initrd-setup-root-after-ignition[1019]: grep: /sysroot/oem/oem-release: No such file or directory May 10 09:54:13.976705 initrd-setup-root-after-ignition[1021]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 10 09:54:13.976705 initrd-setup-root-after-ignition[1021]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 10 09:54:13.982013 initrd-setup-root-after-ignition[1025]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 10 09:54:13.977957 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
May 10 09:54:13.979988 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 10 09:54:13.982869 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 10 09:54:14.040882 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 10 09:54:14.041008 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 10 09:54:14.043304 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 10 09:54:14.045352 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 10 09:54:14.047390 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 10 09:54:14.048204 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 10 09:54:14.078642 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 10 09:54:14.081344 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 10 09:54:14.106626 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 10 09:54:14.107894 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 10 09:54:14.110118 systemd[1]: Stopped target timers.target - Timer Units.
May 10 09:54:14.112131 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 10 09:54:14.112257 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 10 09:54:14.114409 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 10 09:54:14.116122 systemd[1]: Stopped target basic.target - Basic System.
May 10 09:54:14.118129 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 10 09:54:14.120128 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 10 09:54:14.122128 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 10 09:54:14.124374 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 10 09:54:14.127013 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 10 09:54:14.129345 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 10 09:54:14.131497 systemd[1]: Stopped target sysinit.target - System Initialization.
May 10 09:54:14.133497 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 10 09:54:14.135705 systemd[1]: Stopped target swap.target - Swaps.
May 10 09:54:14.137493 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 10 09:54:14.137670 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 10 09:54:14.139787 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 10 09:54:14.141470 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 10 09:54:14.143560 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 10 09:54:14.143748 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 10 09:54:14.145790 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 10 09:54:14.145915 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 10 09:54:14.148111 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 10 09:54:14.148248 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 10 09:54:14.150252 systemd[1]: Stopped target paths.target - Path Units.
May 10 09:54:14.151956 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 10 09:54:14.157215 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 10 09:54:14.158849 systemd[1]: Stopped target slices.target - Slice Units.
May 10 09:54:14.160751 systemd[1]: Stopped target sockets.target - Socket Units.
May 10 09:54:14.162941 systemd[1]: iscsid.socket: Deactivated successfully.
May 10 09:54:14.163076 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 10 09:54:14.165679 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 10 09:54:14.165807 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 10 09:54:14.167800 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 10 09:54:14.167961 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 10 09:54:14.170263 systemd[1]: ignition-files.service: Deactivated successfully.
May 10 09:54:14.170415 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 10 09:54:14.173419 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 10 09:54:14.174741 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 10 09:54:14.176331 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 10 09:54:14.176495 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 10 09:54:14.178793 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 10 09:54:14.178944 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 10 09:54:14.193392 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 10 09:54:14.193507 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 10 09:54:14.206939 ignition[1045]: INFO : Ignition 2.21.0
May 10 09:54:14.206939 ignition[1045]: INFO : Stage: umount
May 10 09:54:14.208734 ignition[1045]: INFO : no configs at "/usr/lib/ignition/base.d"
May 10 09:54:14.208734 ignition[1045]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 10 09:54:14.208734 ignition[1045]: INFO : umount: umount passed
May 10 09:54:14.208734 ignition[1045]: INFO : Ignition finished successfully
May 10 09:54:14.210514 systemd[1]: ignition-mount.service: Deactivated successfully.
May 10 09:54:14.210685 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 10 09:54:14.213292 systemd[1]: Stopped target network.target - Network.
May 10 09:54:14.213596 systemd[1]: ignition-disks.service: Deactivated successfully.
May 10 09:54:14.213666 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 10 09:54:14.213955 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 10 09:54:14.214001 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 10 09:54:14.217924 systemd[1]: ignition-setup.service: Deactivated successfully.
May 10 09:54:14.217985 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 10 09:54:14.218466 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 10 09:54:14.218517 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 10 09:54:14.221970 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 10 09:54:14.224224 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 10 09:54:14.231774 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 10 09:54:14.231961 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 10 09:54:14.236276 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 10 09:54:14.237288 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 10 09:54:14.237394 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 10 09:54:14.242620 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 10 09:54:14.243016 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 10 09:54:14.243203 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 10 09:54:14.247316 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 10 09:54:14.248041 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 10 09:54:14.249481 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 10 09:54:14.249534 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 10 09:54:14.250804 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 10 09:54:14.253876 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 10 09:54:14.253944 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 10 09:54:14.254745 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 10 09:54:14.254804 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 10 09:54:14.266649 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 10 09:54:14.266712 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 10 09:54:14.269557 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 10 09:54:14.271670 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 10 09:54:14.280290 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 10 09:54:14.295453 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 10 09:54:14.295722 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 10 09:54:14.298504 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 10 09:54:14.298575 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 10 09:54:14.300766 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 10 09:54:14.300821 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 10 09:54:14.303022 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 10 09:54:14.303095 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 10 09:54:14.305657 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 10 09:54:14.305725 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 10 09:54:14.307875 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 10 09:54:14.307941 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 10 09:54:14.311471 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 10 09:54:14.314209 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 10 09:54:14.314283 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 10 09:54:14.320129 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 10 09:54:14.321277 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 10 09:54:14.326912 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 10 09:54:14.328020 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 10 09:54:14.330818 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 10 09:54:14.330874 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 10 09:54:14.334225 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 10 09:54:14.335263 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 10 09:54:14.338287 systemd[1]: network-cleanup.service: Deactivated successfully.
May 10 09:54:14.339399 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 10 09:54:14.341843 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 10 09:54:14.343189 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 10 09:54:14.666206 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 10 09:54:14.666352 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 10 09:54:14.667550 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 10 09:54:14.673452 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 10 09:54:14.673550 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 10 09:54:14.676638 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 10 09:54:14.697628 systemd[1]: Switching root.
May 10 09:54:14.733953 systemd-journald[194]: Journal stopped
May 10 09:54:16.021011 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
May 10 09:54:16.021086 kernel: SELinux: policy capability network_peer_controls=1
May 10 09:54:16.021106 kernel: SELinux: policy capability open_perms=1
May 10 09:54:16.021118 kernel: SELinux: policy capability extended_socket_class=1
May 10 09:54:16.021130 kernel: SELinux: policy capability always_check_network=0
May 10 09:54:16.021153 kernel: SELinux: policy capability cgroup_seclabel=1
May 10 09:54:16.021165 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 10 09:54:16.021183 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 10 09:54:16.021194 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 10 09:54:16.021210 kernel: audit: type=1403 audit(1746870855.159:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 10 09:54:16.021223 systemd[1]: Successfully loaded SELinux policy in 44.158ms.
May 10 09:54:16.021247 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 17.575ms.
May 10 09:54:16.021262 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 10 09:54:16.021275 systemd[1]: Detected virtualization kvm.
May 10 09:54:16.021292 systemd[1]: Detected architecture x86-64.
May 10 09:54:16.021305 systemd[1]: Detected first boot.
May 10 09:54:16.021316 systemd[1]: Initializing machine ID from VM UUID.
May 10 09:54:16.021329 zram_generator::config[1091]: No configuration found.
May 10 09:54:16.021346 kernel: Guest personality initialized and is inactive
May 10 09:54:16.021357 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 10 09:54:16.021369 kernel: Initialized host personality
May 10 09:54:16.021381 kernel: NET: Registered PF_VSOCK protocol family
May 10 09:54:16.021392 systemd[1]: Populated /etc with preset unit settings.
May 10 09:54:16.021406 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 10 09:54:16.021418 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 10 09:54:16.021431 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 10 09:54:16.021443 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 10 09:54:16.021459 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 10 09:54:16.021472 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 10 09:54:16.021484 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 10 09:54:16.021496 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 10 09:54:16.021509 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 10 09:54:16.021527 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 10 09:54:16.021541 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 10 09:54:16.021562 systemd[1]: Created slice user.slice - User and Session Slice.
May 10 09:54:16.021578 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 10 09:54:16.021590 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 10 09:54:16.021604 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 10 09:54:16.021616 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 10 09:54:16.021628 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 10 09:54:16.021641 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 10 09:54:16.021654 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 10 09:54:16.021666 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 10 09:54:16.021681 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 10 09:54:16.021693 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 10 09:54:16.021706 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 10 09:54:16.021718 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 10 09:54:16.021730 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 10 09:54:16.021743 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 10 09:54:16.021756 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 10 09:54:16.021768 systemd[1]: Reached target slices.target - Slice Units.
May 10 09:54:16.021780 systemd[1]: Reached target swap.target - Swaps.
May 10 09:54:16.021795 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 10 09:54:16.021808 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 10 09:54:16.021822 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 10 09:54:16.021834 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 10 09:54:16.021846 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 10 09:54:16.021858 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 10 09:54:16.021870 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 10 09:54:16.021883 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 10 09:54:16.021895 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 10 09:54:16.021910 systemd[1]: Mounting media.mount - External Media Directory...
May 10 09:54:16.021923 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 09:54:16.021936 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 10 09:54:16.021948 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 10 09:54:16.021960 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 10 09:54:16.021979 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 10 09:54:16.021991 systemd[1]: Reached target machines.target - Containers.
May 10 09:54:16.022004 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 10 09:54:16.022019 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 10 09:54:16.022031 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 10 09:54:16.022044 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 10 09:54:16.022056 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 10 09:54:16.022068 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 10 09:54:16.022080 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 10 09:54:16.022093 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 10 09:54:16.022106 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 10 09:54:16.022118 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 10 09:54:16.022134 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 10 09:54:16.022158 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 10 09:54:16.022170 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 10 09:54:16.022183 systemd[1]: Stopped systemd-fsck-usr.service.
May 10 09:54:16.022196 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 10 09:54:16.022208 systemd[1]: Starting systemd-journald.service - Journal Service...
May 10 09:54:16.022220 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 10 09:54:16.022233 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 10 09:54:16.022245 kernel: loop: module loaded
May 10 09:54:16.022260 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 10 09:54:16.022272 kernel: fuse: init (API version 7.39)
May 10 09:54:16.022284 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 10 09:54:16.022296 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 10 09:54:16.022311 systemd[1]: verity-setup.service: Deactivated successfully.
May 10 09:54:16.022323 systemd[1]: Stopped verity-setup.service.
May 10 09:54:16.022336 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 09:54:16.022349 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 10 09:54:16.022361 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 10 09:54:16.022399 systemd-journald[1155]: Collecting audit messages is disabled.
May 10 09:54:16.022421 kernel: ACPI: bus type drm_connector registered
May 10 09:54:16.022433 systemd[1]: Mounted media.mount - External Media Directory.
May 10 09:54:16.022449 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 10 09:54:16.022464 systemd-journald[1155]: Journal started
May 10 09:54:16.022486 systemd-journald[1155]: Runtime Journal (/run/log/journal/b8605973a8a04f409c0c8dcc8c1f7ce3) is 6M, max 48.6M, 42.5M free.
May 10 09:54:15.757847 systemd[1]: Queued start job for default target multi-user.target.
May 10 09:54:15.772345 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 10 09:54:15.772891 systemd[1]: systemd-journald.service: Deactivated successfully.
May 10 09:54:16.025546 systemd[1]: Started systemd-journald.service - Journal Service.
May 10 09:54:16.026424 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 10 09:54:16.027804 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 10 09:54:16.029212 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 10 09:54:16.030927 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 10 09:54:16.032881 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 10 09:54:16.033244 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 10 09:54:16.034799 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 10 09:54:16.035026 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 10 09:54:16.036754 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 10 09:54:16.036982 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 10 09:54:16.038480 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 10 09:54:16.038722 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 10 09:54:16.040441 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 10 09:54:16.040700 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 10 09:54:16.042163 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 10 09:54:16.042405 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 10 09:54:16.044103 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 10 09:54:16.045640 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 10 09:54:16.047403 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 10 09:54:16.049133 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 10 09:54:16.066961 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 10 09:54:16.069901 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 10 09:54:16.072351 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 10 09:54:16.073704 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 10 09:54:16.073790 systemd[1]: Reached target local-fs.target - Local File Systems.
May 10 09:54:16.075954 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 10 09:54:16.091056 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 10 09:54:16.092361 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 10 09:54:16.093901 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 10 09:54:16.096202 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 10 09:54:16.097524 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 10 09:54:16.101215 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 10 09:54:16.102526 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 10 09:54:16.107631 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 10 09:54:16.110289 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 10 09:54:16.114446 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 10 09:54:16.117703 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 10 09:54:16.119519 systemd-journald[1155]: Time spent on flushing to /var/log/journal/b8605973a8a04f409c0c8dcc8c1f7ce3 is 20.068ms for 974 entries.
May 10 09:54:16.119519 systemd-journald[1155]: System Journal (/var/log/journal/b8605973a8a04f409c0c8dcc8c1f7ce3) is 8M, max 195.6M, 187.6M free.
May 10 09:54:16.159360 systemd-journald[1155]: Received client request to flush runtime journal.
May 10 09:54:16.159414 kernel: loop0: detected capacity change from 0 to 205544
May 10 09:54:16.123333 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 10 09:54:16.128232 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 10 09:54:16.130758 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 10 09:54:16.136911 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 10 09:54:16.149877 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 10 09:54:16.161462 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 10 09:54:16.165573 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
May 10 09:54:16.165591 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
May 10 09:54:16.167122 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 10 09:54:16.172267 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 10 09:54:16.177214 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 10 09:54:16.177012 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 10 09:54:16.182491 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 10 09:54:16.204219 kernel: loop1: detected capacity change from 0 to 146240
May 10 09:54:16.220962 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 10 09:54:16.226012 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 10 09:54:16.244184 kernel: loop2: detected capacity change from 0 to 113872
May 10 09:54:16.256230 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
May 10 09:54:16.256252 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
May 10 09:54:16.262518 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 10 09:54:16.287170 kernel: loop3: detected capacity change from 0 to 205544
May 10 09:54:16.302218 kernel: loop4: detected capacity change from 0 to 146240
May 10 09:54:16.317179 kernel: loop5: detected capacity change from 0 to 113872
May 10 09:54:16.329892 (sd-merge)[1236]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 10 09:54:16.330778 (sd-merge)[1236]: Merged extensions into '/usr'.
May 10 09:54:16.336458 systemd[1]: Reload requested from client PID 1210 ('systemd-sysext') (unit systemd-sysext.service)...
May 10 09:54:16.336482 systemd[1]: Reloading...
May 10 09:54:16.426622 zram_generator::config[1264]: No configuration found.
May 10 09:54:16.493214 ldconfig[1205]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 10 09:54:16.554706 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 10 09:54:16.643763 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 10 09:54:16.643896 systemd[1]: Reloading finished in 306 ms.
May 10 09:54:16.665911 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 10 09:54:16.667643 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 10 09:54:16.686000 systemd[1]: Starting ensure-sysext.service...
May 10 09:54:16.688228 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 10 09:54:16.705113 systemd[1]: Reload requested from client PID 1300 ('systemctl') (unit ensure-sysext.service)...
May 10 09:54:16.705134 systemd[1]: Reloading...
May 10 09:54:16.715356 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 10 09:54:16.715406 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 10 09:54:16.715835 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 10 09:54:16.716121 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 10 09:54:16.717250 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 10 09:54:16.717617 systemd-tmpfiles[1301]: ACLs are not supported, ignoring.
May 10 09:54:16.717700 systemd-tmpfiles[1301]: ACLs are not supported, ignoring.
May 10 09:54:16.722857 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot.
May 10 09:54:16.722937 systemd-tmpfiles[1301]: Skipping /boot
May 10 09:54:16.747775 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot.
May 10 09:54:16.747800 systemd-tmpfiles[1301]: Skipping /boot
May 10 09:54:16.771217 zram_generator::config[1328]: No configuration found.
May 10 09:54:16.873875 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 10 09:54:16.956849 systemd[1]: Reloading finished in 251 ms.
May 10 09:54:16.972316 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 10 09:54:16.992179 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 10 09:54:17.002612 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 10 09:54:17.005797 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 10 09:54:17.009101 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 10 09:54:17.019319 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 10 09:54:17.024257 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 10 09:54:17.029386 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 10 09:54:17.038650 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 09:54:17.038848 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 10 09:54:17.041340 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 10 09:54:17.046176 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 10 09:54:17.058970 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 10 09:54:17.060287 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 10 09:54:17.060570 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 10 09:54:17.066435 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 10 09:54:17.067620 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 09:54:17.070266 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 10 09:54:17.071206 systemd-udevd[1372]: Using default interface naming scheme 'v255'.
May 10 09:54:17.072786 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 10 09:54:17.072864 augenrules[1396]: No rules
May 10 09:54:17.079548 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 10 09:54:17.081676 systemd[1]: audit-rules.service: Deactivated successfully.
May 10 09:54:17.081939 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 10 09:54:17.083608 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 10 09:54:17.083837 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 10 09:54:17.085854 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 10 09:54:17.086185 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 10 09:54:17.099312 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 10 09:54:17.102395 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 10 09:54:17.109999 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 09:54:17.110455 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 10 09:54:17.114360 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 10 09:54:17.120227 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 10 09:54:17.125531 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 10 09:54:17.129431 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 10 09:54:17.129601 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 10 09:54:17.137376 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 10 09:54:17.141653 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 10 09:54:17.143337 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 10 09:54:17.143455 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 09:54:17.145718 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 10 09:54:17.155031 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 10 09:54:17.155476 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 10 09:54:17.158959 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 10 09:54:17.159200 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 10 09:54:17.162773 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 10 09:54:17.163022 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 10 09:54:17.173638 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 10 09:54:17.201267 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 10 09:54:17.217953 systemd[1]: Finished ensure-sysext.service.
May 10 09:54:17.235252 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 10 09:54:17.241544 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 09:54:17.244407 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 10 09:54:17.246433 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 10 09:54:17.251704 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1429)
May 10 09:54:17.249910 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 10 09:54:17.258921 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 10 09:54:17.263380 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 10 09:54:17.267383 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 10 09:54:17.271394 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 10 09:54:17.271447 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 10 09:54:17.273948 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 10 09:54:17.275225 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 10 09:54:17.275249 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 09:54:17.275922 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 10 09:54:17.276246 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 10 09:54:17.287014 kernel: mousedev: PS/2 mouse device common for all mice
May 10 09:54:17.288114 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 10 09:54:17.288508 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 10 09:54:17.289796 augenrules[1461]: /sbin/augenrules: No change
May 10 09:54:17.292268 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 10 09:54:17.293434 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 10 09:54:17.299400 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 10 09:54:17.301508 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 10 09:54:17.301797 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 10 09:54:17.303867 augenrules[1486]: No rules
May 10 09:54:17.314751 systemd[1]: audit-rules.service: Deactivated successfully.
May 10 09:54:17.315267 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 10 09:54:17.316211 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 10 09:54:17.333215 kernel: ACPI: button: Power Button [PWRF]
May 10 09:54:17.334445 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 10 09:54:17.335738 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 10 09:54:17.335814 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 10 09:54:17.341928 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 10 09:54:17.342241 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 10 09:54:17.342436 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 10 09:54:17.371209 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 10 09:54:17.388403 systemd-networkd[1438]: lo: Link UP
May 10 09:54:17.388749 systemd-networkd[1438]: lo: Gained carrier
May 10 09:54:17.391685 systemd-networkd[1438]: Enumeration completed
May 10 09:54:17.391843 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 10 09:54:17.393201 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 10 09:54:17.394691 systemd-networkd[1438]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 10 09:54:17.395582 systemd-networkd[1438]: eth0: Link UP
May 10 09:54:17.395641 systemd-networkd[1438]: eth0: Gained carrier
May 10 09:54:17.395697 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 10 09:54:17.398339 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 10 09:54:17.405050 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 10 09:54:17.411556 systemd-networkd[1438]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 10 09:54:17.428280 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 10 09:54:17.440632 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 10 09:54:17.484662 systemd-resolved[1370]: Positive Trust Anchors:
May 10 09:54:17.484682 systemd-resolved[1370]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 10 09:54:17.484715 systemd-resolved[1370]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 10 09:54:17.496896 kernel: kvm_amd: TSC scaling supported
May 10 09:54:17.496987 kernel: kvm_amd: Nested Virtualization enabled
May 10 09:54:17.497017 kernel: kvm_amd: Nested Paging enabled
May 10 09:54:17.497042 kernel: kvm_amd: LBR virtualization supported
May 10 09:54:17.497070 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 10 09:54:17.497098 kernel: kvm_amd: Virtual GIF supported
May 10 09:54:17.493675 systemd-resolved[1370]: Defaulting to hostname 'linux'.
May 10 09:54:17.500970 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 10 09:54:17.502227 systemd[1]: Reached target network.target - Network.
May 10 09:54:17.503787 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 10 09:54:17.569560 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 10 09:54:17.571402 systemd-timesyncd[1473]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 10 09:54:17.571791 systemd-timesyncd[1473]: Initial clock synchronization to Sat 2025-05-10 09:54:17.373078 UTC.
May 10 09:54:17.583185 kernel: EDAC MC: Ver: 3.0.0
May 10 09:54:17.584214 systemd[1]: Reached target time-set.target - System Time Set.
May 10 09:54:17.591697 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 10 09:54:17.593288 systemd[1]: Reached target sysinit.target - System Initialization.
May 10 09:54:17.594615 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 10 09:54:17.595917 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 10 09:54:17.597266 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 10 09:54:17.598617 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 10 09:54:17.599840 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 10 09:54:17.601119 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 10 09:54:17.602419 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 10 09:54:17.602450 systemd[1]: Reached target paths.target - Path Units.
May 10 09:54:17.603420 systemd[1]: Reached target timers.target - Timer Units.
May 10 09:54:17.605727 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 10 09:54:17.608570 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 10 09:54:17.612676 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 10 09:54:17.614128 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 10 09:54:17.615482 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 10 09:54:17.619459 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 10 09:54:17.620948 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 10 09:54:17.622778 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 10 09:54:17.624655 systemd[1]: Reached target sockets.target - Socket Units.
May 10 09:54:17.625644 systemd[1]: Reached target basic.target - Basic System.
May 10 09:54:17.626639 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 10 09:54:17.626667 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 10 09:54:17.627740 systemd[1]: Starting containerd.service - containerd container runtime...
May 10 09:54:17.629880 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 10 09:54:17.631907 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 10 09:54:17.640238 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 10 09:54:17.642498 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 10 09:54:17.643629 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 10 09:54:17.644874 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 10 09:54:17.648907 jq[1529]: false
May 10 09:54:17.649922 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 10 09:54:17.655009 google_oslogin_nss_cache[1531]: oslogin_cache_refresh[1531]: Refreshing passwd entry cache
May 10 09:54:17.655018 oslogin_cache_refresh[1531]: Refreshing passwd entry cache
May 10 09:54:17.656236 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 10 09:54:17.658923 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 10 09:54:17.662164 extend-filesystems[1530]: Found loop3
May 10 09:54:17.662164 extend-filesystems[1530]: Found loop4
May 10 09:54:17.662164 extend-filesystems[1530]: Found loop5
May 10 09:54:17.662164 extend-filesystems[1530]: Found sr0
May 10 09:54:17.662164 extend-filesystems[1530]: Found vda
May 10 09:54:17.662164 extend-filesystems[1530]: Found vda1
May 10 09:54:17.662164 extend-filesystems[1530]: Found vda2
May 10 09:54:17.662164 extend-filesystems[1530]: Found vda3
May 10 09:54:17.662164 extend-filesystems[1530]: Found usr
May 10 09:54:17.662164 extend-filesystems[1530]: Found vda4
May 10 09:54:17.662164 extend-filesystems[1530]: Found vda6
May 10 09:54:17.662164 extend-filesystems[1530]: Found vda7
May 10 09:54:17.662164 extend-filesystems[1530]: Found vda9
May 10 09:54:17.662164 extend-filesystems[1530]: Checking size of /dev/vda9
May 10 09:54:17.688756 extend-filesystems[1530]: Resized partition /dev/vda9
May 10 09:54:17.662303 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 10 09:54:17.675278 oslogin_cache_refresh[1531]: Failure getting users, quitting
May 10 09:54:17.692452 google_oslogin_nss_cache[1531]: oslogin_cache_refresh[1531]: Failure getting users, quitting
May 10 09:54:17.692452 google_oslogin_nss_cache[1531]: oslogin_cache_refresh[1531]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 10 09:54:17.692452 google_oslogin_nss_cache[1531]: oslogin_cache_refresh[1531]: Refreshing group entry cache
May 10 09:54:17.692452 google_oslogin_nss_cache[1531]: oslogin_cache_refresh[1531]: Failure getting groups, quitting
May 10 09:54:17.692452 google_oslogin_nss_cache[1531]: oslogin_cache_refresh[1531]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 10 09:54:17.692607 extend-filesystems[1551]: resize2fs 1.47.2 (1-Jan-2025)
May 10 09:54:17.702711 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 10 09:54:17.702738 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1410)
May 10 09:54:17.670249 systemd[1]: Starting systemd-logind.service - User Login Management...
May 10 09:54:17.675299 oslogin_cache_refresh[1531]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 10 09:54:17.673930 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 10 09:54:17.675361 oslogin_cache_refresh[1531]: Refreshing group entry cache
May 10 09:54:17.674567 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 10 09:54:17.680955 oslogin_cache_refresh[1531]: Failure getting groups, quitting
May 10 09:54:17.676012 systemd[1]: Starting update-engine.service - Update Engine...
May 10 09:54:17.680966 oslogin_cache_refresh[1531]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 10 09:54:17.685040 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 10 09:54:17.705328 jq[1550]: true
May 10 09:54:17.694408 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 10 09:54:17.699474 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 10 09:54:17.699841 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 10 09:54:17.701045 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 10 09:54:17.705725 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 10 09:54:17.708561 systemd[1]: motdgen.service: Deactivated successfully.
May 10 09:54:17.708884 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 10 09:54:17.712118 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 10 09:54:17.712934 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 10 09:54:17.721524 update_engine[1545]: I20250510 09:54:17.715888 1545 main.cc:92] Flatcar Update Engine starting
May 10 09:54:17.726757 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 10 09:54:17.735672 (ntainerd)[1557]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 10 09:54:17.747435 jq[1555]: true
May 10 09:54:17.749220 extend-filesystems[1551]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 10 09:54:17.749220 extend-filesystems[1551]: old_desc_blocks = 1, new_desc_blocks = 1
May 10 09:54:17.749220 extend-filesystems[1551]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 10 09:54:17.755871 extend-filesystems[1530]: Resized filesystem in /dev/vda9
May 10 09:54:17.750774 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 10 09:54:17.751208 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 10 09:54:17.766837 systemd-logind[1541]: Watching system buttons on /dev/input/event2 (Power Button)
May 10 09:54:17.767128 systemd-logind[1541]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 10 09:54:17.767518 systemd-logind[1541]: New seat seat0.
May 10 09:54:17.770362 systemd[1]: Started systemd-logind.service - User Login Management.
May 10 09:54:17.773486 tar[1554]: linux-amd64/helm
May 10 09:54:17.797669 dbus-daemon[1527]: [system] SELinux support is enabled
May 10 09:54:17.799424 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 10 09:54:17.804418 update_engine[1545]: I20250510 09:54:17.804229 1545 update_check_scheduler.cc:74] Next update check in 4m4s
May 10 09:54:17.814288 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 10 09:54:17.814336 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 10 09:54:17.816046 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 10 09:54:17.816078 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 10 09:54:17.819962 systemd[1]: Started update-engine.service - Update Engine.
May 10 09:54:17.824697 dbus-daemon[1527]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 10 09:54:17.828190 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 10 09:54:17.916909 sshd_keygen[1553]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 10 09:54:18.622168 bash[1585]: Updated "/home/core/.ssh/authorized_keys"
May 10 09:54:18.622872 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 10 09:54:18.626409 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 10 09:54:18.634490 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 10 09:54:18.637241 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 10 09:54:18.685252 systemd[1]: issuegen.service: Deactivated successfully.
May 10 09:54:18.685583 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 10 09:54:18.693080 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 10 09:54:18.695943 locksmithd[1586]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 10 09:54:18.731949 systemd-networkd[1438]: eth0: Gained IPv6LL
May 10 09:54:18.735214 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 10 09:54:18.738306 systemd[1]: Reached target network-online.target - Network is Online.
May 10 09:54:18.742669 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 10 09:54:18.770644 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 09:54:18.780163 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 10 09:54:18.877098 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 10 09:54:18.904570 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 10 09:54:18.911553 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 10 09:54:18.913166 systemd[1]: Reached target getty.target - Login Prompts.
May 10 09:54:18.919324 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 10 09:54:18.953335 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 10 09:54:18.953660 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 10 09:54:18.970092 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 10 09:54:19.060730 containerd[1557]: time="2025-05-10T09:54:19Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 10 09:54:19.064189 containerd[1557]: time="2025-05-10T09:54:19.064039133Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 10 09:54:19.078362 containerd[1557]: time="2025-05-10T09:54:19.078281003Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.673µs"
May 10 09:54:19.078362 containerd[1557]: time="2025-05-10T09:54:19.078350447Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 10 09:54:19.078362 containerd[1557]: time="2025-05-10T09:54:19.078370883Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 10 09:54:19.079171 containerd[1557]: time="2025-05-10T09:54:19.078609364Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 10 09:54:19.079171 containerd[1557]: time="2025-05-10T09:54:19.078627742Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 10 09:54:19.079171 containerd[1557]: time="2025-05-10T09:54:19.078661313Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 10 09:54:19.079171 containerd[1557]: time="2025-05-10T09:54:19.078745018Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 10 09:54:19.079171 containerd[1557]: time="2025-05-10T09:54:19.078756996Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 10 09:54:19.079171 containerd[1557]: time="2025-05-10T09:54:19.079090190Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 10 09:54:19.079171 containerd[1557]: time="2025-05-10T09:54:19.079103490Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 10 09:54:19.079171 containerd[1557]: time="2025-05-10T09:54:19.079124887Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 10 09:54:19.079171 containerd[1557]: time="2025-05-10T09:54:19.079168651Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 10 09:54:19.079375 containerd[1557]: time="2025-05-10T09:54:19.079283613Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 10 09:54:19.079606 containerd[1557]: time="2025-05-10T09:54:19.079575033Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 10 09:54:19.079641 containerd[1557]: time="2025-05-10T09:54:19.079611809Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 10 09:54:19.079641 containerd[1557]: time="2025-05-10T09:54:19.079623051Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 10 09:54:19.079679 containerd[1557]: time="2025-05-10T09:54:19.079671618Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 10 09:54:19.079963 containerd[1557]: time="2025-05-10T09:54:19.079942935Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 10 09:54:19.080043 containerd[1557]: time="2025-05-10T09:54:19.080028404Z" level=info msg="metadata content store policy set" policy=shared
May 10 09:54:19.086833 containerd[1557]: time="2025-05-10T09:54:19.086790901Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 10 09:54:19.086882 containerd[1557]: time="2025-05-10T09:54:19.086842223Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 10 09:54:19.086882 containerd[1557]: time="2025-05-10T09:54:19.086860453Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 10 09:54:19.086882 containerd[1557]: time="2025-05-10T09:54:19.086875224Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 10 09:54:19.086961 containerd[1557]: time="2025-05-10T09:54:19.086890162Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 10 09:54:19.086961 containerd[1557]: time="2025-05-10T09:54:19.086903257Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 10 09:54:19.087029 containerd[1557]: time="2025-05-10T09:54:19.087006075Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 10 09:54:19.087051 containerd[1557]: time="2025-05-10T09:54:19.087041360Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 10 09:54:19.087071 containerd[1557]: time="2025-05-10T09:54:19.087056239Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 10 09:54:19.087093 containerd[1557]: time="2025-05-10T09:54:19.087077048Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 10 09:54:19.087093 containerd[1557]: time="2025-05-10T09:54:19.087088996Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 10 09:54:19.087176 containerd[1557]: time="2025-05-10T09:54:19.087123184Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 10 09:54:19.087335 containerd[1557]: time="2025-05-10T09:54:19.087304572Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 10 09:54:19.087363 containerd[1557]: time="2025-05-10T09:54:19.087333329Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 10 09:54:19.087381 containerd[1557]: time="2025-05-10T09:54:19.087358137Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 10 09:54:19.087381 containerd[1557]: time="2025-05-10T09:54:19.087373291Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 10 09:54:19.087446 containerd[1557]: time="2025-05-10T09:54:19.087387944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 10 09:54:19.087446 containerd[1557]: time="2025-05-10T09:54:19.087401568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 10 09:54:19.087446 containerd[1557]: time="2025-05-10T09:54:19.087415741Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 10 09:54:19.087446 containerd[1557]: time="2025-05-10T09:54:19.087428375Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 10 09:54:19.087446 containerd[1557]: time="2025-05-10T09:54:19.087442499Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 10 09:54:19.087572 containerd[1557]: time="2025-05-10T09:54:19.087455878Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 10 09:54:19.087572 containerd[1557]: time="2025-05-10T09:54:19.087468992Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 10 09:54:19.087572 containerd[1557]: time="2025-05-10T09:54:19.087548650Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 10 09:54:19.087642 containerd[1557]: time="2025-05-10T09:54:19.087571811Z" level=info msg="Start snapshots syncer"
May 10 09:54:19.087642 containerd[1557]: time="2025-05-10T09:54:19.087619760Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 10 09:54:19.087999 containerd[1557]: time="2025-05-10T09:54:19.087935821Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 10 09:54:19.088168 containerd[1557]: time="2025-05-10T09:54:19.088003148Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 10 09:54:19.088168 containerd[1557]: time="2025-05-10T09:54:19.088111690Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 10 09:54:19.088307 containerd[1557]: time="2025-05-10T09:54:19.088273593Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 10 09:54:19.088335 containerd[1557]: time="2025-05-10T09:54:19.088306095Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 10 09:54:19.088335 containerd[1557]: time="2025-05-10T09:54:19.088322541Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 10 09:54:19.088398 containerd[1557]: time="2025-05-10T09:54:19.088340949Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 10 09:54:19.088398 containerd[1557]: time="2025-05-10T09:54:19.088356249Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 10 09:54:19.088398 containerd[1557]: time="2025-05-10T09:54:19.088370549Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 10 09:54:19.088398 containerd[1557]: time="2025-05-10T09:54:19.088385027Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 10 09:54:19.088480 containerd[1557]: time="2025-05-10T09:54:19.088417421Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 10 09:54:19.088480 containerd[1557]: time="2025-05-10T09:54:19.088431251Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 10 09:54:19.088480 containerd[1557]: time="2025-05-10T09:54:19.088450854Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 10 09:54:19.089631 containerd[1557]: time="2025-05-10T09:54:19.089595520Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 10 09:54:19.089738 containerd[1557]: time="2025-05-10T09:54:19.089710716Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 10 09:54:19.089738 containerd[1557]: time="2025-05-10T09:54:19.089728673Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 10 09:54:19.089782 containerd[1557]: time="2025-05-10T09:54:19.089742032Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 10 09:54:19.089782 containerd[1557]: time="2025-05-10T09:54:19.089753756Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 10 09:54:19.089845 containerd[1557]: time="2025-05-10T09:54:19.089789600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 10 09:54:19.089845 containerd[1557]: time="2025-05-10T09:54:19.089805361Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 10 09:54:19.089845 containerd[1557]: time="2025-05-10T09:54:19.089831306Z" level=info msg="runtime interface created" May 10 09:54:19.089845 containerd[1557]: time="2025-05-10T09:54:19.089838657Z" level=info msg="created NRI interface" May 10 09:54:19.089915 containerd[1557]: time="2025-05-10T09:54:19.089849635Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 10 09:54:19.089915 containerd[1557]: time="2025-05-10T09:54:19.089862984Z" level=info msg="Connect containerd service" May 10 09:54:19.089915 containerd[1557]: time="2025-05-10T09:54:19.089891761Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 10 09:54:19.091126 
containerd[1557]: time="2025-05-10T09:54:19.091086964Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 09:54:19.199165 containerd[1557]: time="2025-05-10T09:54:19.198942699Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 10 09:54:19.199165 containerd[1557]: time="2025-05-10T09:54:19.199028198Z" level=info msg=serving... address=/run/containerd/containerd.sock May 10 09:54:19.199165 containerd[1557]: time="2025-05-10T09:54:19.199061494Z" level=info msg="Start subscribing containerd event" May 10 09:54:19.199165 containerd[1557]: time="2025-05-10T09:54:19.199086400Z" level=info msg="Start recovering state" May 10 09:54:19.199357 containerd[1557]: time="2025-05-10T09:54:19.199235511Z" level=info msg="Start event monitor" May 10 09:54:19.199357 containerd[1557]: time="2025-05-10T09:54:19.199263984Z" level=info msg="Start cni network conf syncer for default" May 10 09:54:19.199357 containerd[1557]: time="2025-05-10T09:54:19.199274178Z" level=info msg="Start streaming server" May 10 09:54:19.199357 containerd[1557]: time="2025-05-10T09:54:19.199296467Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 10 09:54:19.199357 containerd[1557]: time="2025-05-10T09:54:19.199305749Z" level=info msg="runtime interface starting up..." May 10 09:54:19.199357 containerd[1557]: time="2025-05-10T09:54:19.199311709Z" level=info msg="starting plugins..." May 10 09:54:19.199357 containerd[1557]: time="2025-05-10T09:54:19.199332733Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 10 09:54:19.199540 tar[1554]: linux-amd64/LICENSE May 10 09:54:19.199540 tar[1554]: linux-amd64/README.md May 10 09:54:19.199646 systemd[1]: Started containerd.service - containerd container runtime. 
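The `failed to load cni during init` error above is expected on a first boot: containerd's CRI plugin found no network config in /etc/cni/net.d, and the "cni network conf syncer" started later will pick one up once it exists. As a sketch only, a minimal bridge config of the kind that would satisfy it looks like this (the file name, bridge name, and 10.88.0.0/16 subnet are placeholder choices, not taken from this log; a scratch directory stands in for /etc/cni/net.d):

```shell
# Write a minimal CNI bridge config into a stand-in for /etc/cni/net.d.
# All names and addresses here are hypothetical examples.
conf_dir="$(mktemp -d)"
cat > "$conf_dir/10-bridge.conf" <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
}
EOF
ls "$conf_dir"
```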
May 10 09:54:19.200505 containerd[1557]: time="2025-05-10T09:54:19.200293560Z" level=info msg="containerd successfully booted in 0.140535s" May 10 09:54:19.221153 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 10 09:54:19.974996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 09:54:19.977246 systemd[1]: Reached target multi-user.target - Multi-User System. May 10 09:54:19.980061 systemd[1]: Startup finished in 3.344s (kernel) + 6.443s (initrd) + 4.862s (userspace) = 14.650s. May 10 09:54:20.010977 (kubelet)[1657]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 09:54:20.845634 kubelet[1657]: E0510 09:54:20.845552 1657 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 09:54:20.850096 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 09:54:20.850363 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 09:54:20.850873 systemd[1]: kubelet.service: Consumed 1.822s CPU time, 237.6M memory peak. May 10 09:54:22.600016 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 10 09:54:22.601529 systemd[1]: Started sshd@0-10.0.0.34:22-10.0.0.1:33562.service - OpenSSH per-connection server daemon (10.0.0.1:33562). May 10 09:54:22.670371 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 33562 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs May 10 09:54:22.673188 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 09:54:22.680813 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
May 10 09:54:22.682164 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 10 09:54:22.689693 systemd-logind[1541]: New session 1 of user core. May 10 09:54:22.709623 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 10 09:54:22.713334 systemd[1]: Starting user@500.service - User Manager for UID 500... May 10 09:54:22.734102 (systemd)[1674]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 10 09:54:22.737038 systemd-logind[1541]: New session c1 of user core. May 10 09:54:22.890814 systemd[1674]: Queued start job for default target default.target. May 10 09:54:22.902104 systemd[1674]: Created slice app.slice - User Application Slice. May 10 09:54:22.902170 systemd[1674]: Reached target paths.target - Paths. May 10 09:54:22.902244 systemd[1674]: Reached target timers.target - Timers. May 10 09:54:22.904491 systemd[1674]: Starting dbus.socket - D-Bus User Message Bus Socket... May 10 09:54:22.918253 systemd[1674]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 10 09:54:22.918440 systemd[1674]: Reached target sockets.target - Sockets. May 10 09:54:22.918506 systemd[1674]: Reached target basic.target - Basic System. May 10 09:54:22.918569 systemd[1674]: Reached target default.target - Main User Target. May 10 09:54:22.918624 systemd[1674]: Startup finished in 173ms. May 10 09:54:22.919270 systemd[1]: Started user@500.service - User Manager for UID 500. May 10 09:54:22.921550 systemd[1]: Started session-1.scope - Session 1 of User core. May 10 09:54:22.986396 systemd[1]: Started sshd@1-10.0.0.34:22-10.0.0.1:33566.service - OpenSSH per-connection server daemon (10.0.0.1:33566). 
May 10 09:54:23.038541 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 33566 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs May 10 09:54:23.040839 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 09:54:23.046527 systemd-logind[1541]: New session 2 of user core. May 10 09:54:23.057375 systemd[1]: Started session-2.scope - Session 2 of User core. May 10 09:54:23.112687 sshd[1687]: Connection closed by 10.0.0.1 port 33566 May 10 09:54:23.113078 sshd-session[1685]: pam_unix(sshd:session): session closed for user core May 10 09:54:23.131603 systemd[1]: sshd@1-10.0.0.34:22-10.0.0.1:33566.service: Deactivated successfully. May 10 09:54:23.133785 systemd[1]: session-2.scope: Deactivated successfully. May 10 09:54:23.135688 systemd-logind[1541]: Session 2 logged out. Waiting for processes to exit. May 10 09:54:23.137200 systemd[1]: Started sshd@2-10.0.0.34:22-10.0.0.1:33574.service - OpenSSH per-connection server daemon (10.0.0.1:33574). May 10 09:54:23.138535 systemd-logind[1541]: Removed session 2. May 10 09:54:23.195509 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 33574 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs May 10 09:54:23.197274 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 09:54:23.202830 systemd-logind[1541]: New session 3 of user core. May 10 09:54:23.212281 systemd[1]: Started session-3.scope - Session 3 of User core. May 10 09:54:23.264897 sshd[1695]: Connection closed by 10.0.0.1 port 33574 May 10 09:54:23.265391 sshd-session[1692]: pam_unix(sshd:session): session closed for user core May 10 09:54:23.280158 systemd[1]: sshd@2-10.0.0.34:22-10.0.0.1:33574.service: Deactivated successfully. May 10 09:54:23.282903 systemd[1]: session-3.scope: Deactivated successfully. May 10 09:54:23.285221 systemd-logind[1541]: Session 3 logged out. Waiting for processes to exit. 
May 10 09:54:23.286888 systemd[1]: Started sshd@3-10.0.0.34:22-10.0.0.1:33588.service - OpenSSH per-connection server daemon (10.0.0.1:33588). May 10 09:54:23.287946 systemd-logind[1541]: Removed session 3. May 10 09:54:23.350778 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 33588 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs May 10 09:54:23.352808 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 09:54:23.358072 systemd-logind[1541]: New session 4 of user core. May 10 09:54:23.365362 systemd[1]: Started session-4.scope - Session 4 of User core. May 10 09:54:23.419775 sshd[1703]: Connection closed by 10.0.0.1 port 33588 May 10 09:54:23.420107 sshd-session[1700]: pam_unix(sshd:session): session closed for user core May 10 09:54:23.438418 systemd[1]: sshd@3-10.0.0.34:22-10.0.0.1:33588.service: Deactivated successfully. May 10 09:54:23.441219 systemd[1]: session-4.scope: Deactivated successfully. May 10 09:54:23.443608 systemd-logind[1541]: Session 4 logged out. Waiting for processes to exit. May 10 09:54:23.445178 systemd[1]: Started sshd@4-10.0.0.34:22-10.0.0.1:33592.service - OpenSSH per-connection server daemon (10.0.0.1:33592). May 10 09:54:23.446264 systemd-logind[1541]: Removed session 4. May 10 09:54:23.497101 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 33592 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs May 10 09:54:23.498895 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 09:54:23.504233 systemd-logind[1541]: New session 5 of user core. May 10 09:54:23.515276 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 10 09:54:23.576832 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 10 09:54:23.577208 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 09:54:23.594861 sudo[1712]: pam_unix(sudo:session): session closed for user root May 10 09:54:23.597311 sshd[1711]: Connection closed by 10.0.0.1 port 33592 May 10 09:54:23.597805 sshd-session[1708]: pam_unix(sshd:session): session closed for user core May 10 09:54:23.617605 systemd[1]: sshd@4-10.0.0.34:22-10.0.0.1:33592.service: Deactivated successfully. May 10 09:54:23.619631 systemd[1]: session-5.scope: Deactivated successfully. May 10 09:54:23.621566 systemd-logind[1541]: Session 5 logged out. Waiting for processes to exit. May 10 09:54:23.623631 systemd[1]: Started sshd@5-10.0.0.34:22-10.0.0.1:33602.service - OpenSSH per-connection server daemon (10.0.0.1:33602). May 10 09:54:23.624727 systemd-logind[1541]: Removed session 5. May 10 09:54:23.679592 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 33602 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs May 10 09:54:23.681187 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 09:54:23.685996 systemd-logind[1541]: New session 6 of user core. May 10 09:54:23.696302 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 10 09:54:23.752064 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 10 09:54:23.752474 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 09:54:23.758725 sudo[1722]: pam_unix(sudo:session): session closed for user root May 10 09:54:23.765624 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 10 09:54:23.765968 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 09:54:23.776932 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 10 09:54:23.825713 augenrules[1744]: No rules May 10 09:54:23.828227 systemd[1]: audit-rules.service: Deactivated successfully. May 10 09:54:23.828639 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 10 09:54:23.829909 sudo[1721]: pam_unix(sudo:session): session closed for user root May 10 09:54:23.831560 sshd[1720]: Connection closed by 10.0.0.1 port 33602 May 10 09:54:23.831924 sshd-session[1717]: pam_unix(sshd:session): session closed for user core May 10 09:54:23.841504 systemd[1]: sshd@5-10.0.0.34:22-10.0.0.1:33602.service: Deactivated successfully. May 10 09:54:23.843694 systemd[1]: session-6.scope: Deactivated successfully. May 10 09:54:23.845532 systemd-logind[1541]: Session 6 logged out. Waiting for processes to exit. May 10 09:54:23.847083 systemd[1]: Started sshd@6-10.0.0.34:22-10.0.0.1:33614.service - OpenSSH per-connection server daemon (10.0.0.1:33614). May 10 09:54:23.848363 systemd-logind[1541]: Removed session 6. May 10 09:54:23.900710 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 33614 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs May 10 09:54:23.902303 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 09:54:23.906933 systemd-logind[1541]: New session 7 of user core. 
May 10 09:54:23.914276 systemd[1]: Started session-7.scope - Session 7 of User core. May 10 09:54:23.966917 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 10 09:54:23.967287 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 09:54:24.828902 systemd[1]: Starting docker.service - Docker Application Container Engine... May 10 09:54:24.842526 (dockerd)[1776]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 10 09:54:25.279194 dockerd[1776]: time="2025-05-10T09:54:25.279005208Z" level=info msg="Starting up" May 10 09:54:25.281538 dockerd[1776]: time="2025-05-10T09:54:25.281504383Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 10 09:54:25.600399 dockerd[1776]: time="2025-05-10T09:54:25.600235959Z" level=info msg="Loading containers: start." May 10 09:54:25.612173 kernel: Initializing XFRM netlink socket May 10 09:54:26.073388 systemd-networkd[1438]: docker0: Link UP May 10 09:54:26.263760 dockerd[1776]: time="2025-05-10T09:54:26.263674193Z" level=info msg="Loading containers: done." May 10 09:54:26.289043 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2862859588-merged.mount: Deactivated successfully. 
May 10 09:54:26.348460 dockerd[1776]: time="2025-05-10T09:54:26.348210608Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 10 09:54:26.348460 dockerd[1776]: time="2025-05-10T09:54:26.348442465Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 10 09:54:26.349012 dockerd[1776]: time="2025-05-10T09:54:26.348639704Z" level=info msg="Initializing buildkit" May 10 09:54:26.386205 dockerd[1776]: time="2025-05-10T09:54:26.386152770Z" level=info msg="Completed buildkit initialization" May 10 09:54:26.391233 dockerd[1776]: time="2025-05-10T09:54:26.391188976Z" level=info msg="Daemon has completed initialization" May 10 09:54:26.391380 dockerd[1776]: time="2025-05-10T09:54:26.391279411Z" level=info msg="API listen on /run/docker.sock" May 10 09:54:26.391519 systemd[1]: Started docker.service - Docker Application Container Engine. May 10 09:54:27.504980 containerd[1557]: time="2025-05-10T09:54:27.504926366Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 10 09:54:28.052480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2012768154.mount: Deactivated successfully. 
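The dockerd warning above ("Not using native diff for overlay2") fires when the running kernel was built with CONFIG_OVERLAY_FS_REDIRECT_DIR, forcing the daemon onto the slower naive diff path. A best-effort way to check that option in a kernel config file can be sketched as follows (the helper name and the sample file are hypothetical; on a real host the input would be /boot/config-$(uname -r) or /proc/config.gz):

```shell
# Returns success when the given kernel config file enables redirect_dir,
# the condition behind the overlay2 warning. Helper name is made up here.
redirect_dir_enabled() {
  grep -q '^CONFIG_OVERLAY_FS_REDIRECT_DIR=y' "$1"
}

# Demonstrate against a synthetic config file rather than the live kernel.
sample="$(mktemp)"
echo 'CONFIG_OVERLAY_FS_REDIRECT_DIR=y' > "$sample"
if redirect_dir_enabled "$sample"; then
  echo "redirect_dir enabled: dockerd would skip native overlay2 diff"
fi
```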
May 10 09:54:29.774772 containerd[1557]: time="2025-05-10T09:54:29.774680897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:54:29.781457 containerd[1557]: time="2025-05-10T09:54:29.781391337Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987" May 10 09:54:29.823898 containerd[1557]: time="2025-05-10T09:54:29.823828135Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:54:29.835807 containerd[1557]: time="2025-05-10T09:54:29.835763198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:54:29.837390 containerd[1557]: time="2025-05-10T09:54:29.837270015Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 2.332288974s" May 10 09:54:29.837390 containerd[1557]: time="2025-05-10T09:54:29.837350665Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 10 09:54:29.841440 containerd[1557]: time="2025-05-10T09:54:29.841390638Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 10 09:54:31.146121 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
May 10 09:54:31.148378 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 09:54:31.398911 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 09:54:31.415784 (kubelet)[2052]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 09:54:31.567686 kubelet[2052]: E0510 09:54:31.567544 2052 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 09:54:31.575884 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 09:54:31.576161 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 09:54:31.576593 systemd[1]: kubelet.service: Consumed 339ms CPU time, 96.2M memory peak. 
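The kubelet failure repeating above (exit status 1, restart counter incrementing) is the normal pre-bootstrap state: kubelet.service keeps exiting because /var/lib/kubelet/config.yaml does not exist yet, and on a kubeadm-managed node that file only appears after `kubeadm init` or `kubeadm join` runs. The condition can be sketched as a check against a scratch directory standing in for /var/lib/kubelet (the message text is illustrative, not kubelet's own):

```shell
# Mirror the logged failure mode: kubelet exits with status 1 while its
# config file is absent. A temp dir stands in for /var/lib/kubelet.
kubelet_dir="$(mktemp -d)"
if [ ! -f "$kubelet_dir/config.yaml" ]; then
  status="config missing - kubelet would exit 1"
else
  status="config present"
fi
echo "$status"
```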
May 10 09:54:32.009708 containerd[1557]: time="2025-05-10T09:54:32.009620523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:54:32.010797 containerd[1557]: time="2025-05-10T09:54:32.010709116Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" May 10 09:54:32.012318 containerd[1557]: time="2025-05-10T09:54:32.012270207Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:54:32.015255 containerd[1557]: time="2025-05-10T09:54:32.015205043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:54:32.016282 containerd[1557]: time="2025-05-10T09:54:32.016230161Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 2.17478108s" May 10 09:54:32.016282 containerd[1557]: time="2025-05-10T09:54:32.016279354Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 10 09:54:32.016881 containerd[1557]: time="2025-05-10T09:54:32.016859502Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 10 09:54:33.629695 containerd[1557]: time="2025-05-10T09:54:33.629610281Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:54:33.630890 containerd[1557]: time="2025-05-10T09:54:33.630849515Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" May 10 09:54:33.632493 containerd[1557]: time="2025-05-10T09:54:33.632458035Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:54:33.635312 containerd[1557]: time="2025-05-10T09:54:33.635248914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:54:33.636210 containerd[1557]: time="2025-05-10T09:54:33.636174261Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.619286342s" May 10 09:54:33.636210 containerd[1557]: time="2025-05-10T09:54:33.636206034Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 10 09:54:33.636801 containerd[1557]: time="2025-05-10T09:54:33.636771181Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 10 09:54:34.954848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1175061886.mount: Deactivated successfully. 
May 10 09:54:35.501013 containerd[1557]: time="2025-05-10T09:54:35.500959992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:54:35.501776 containerd[1557]: time="2025-05-10T09:54:35.501740416Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" May 10 09:54:35.502908 containerd[1557]: time="2025-05-10T09:54:35.502858667Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:54:35.504887 containerd[1557]: time="2025-05-10T09:54:35.504855767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:54:35.505326 containerd[1557]: time="2025-05-10T09:54:35.505298912Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 1.868489801s" May 10 09:54:35.505326 containerd[1557]: time="2025-05-10T09:54:35.505325964Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 10 09:54:35.505747 containerd[1557]: time="2025-05-10T09:54:35.505702906Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 10 09:54:36.091693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2168677425.mount: Deactivated successfully. 
May 10 09:54:37.847010 containerd[1557]: time="2025-05-10T09:54:37.846906233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:54:37.878556 containerd[1557]: time="2025-05-10T09:54:37.878476325Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 10 09:54:37.897156 containerd[1557]: time="2025-05-10T09:54:37.897075318Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:54:37.921858 containerd[1557]: time="2025-05-10T09:54:37.921782764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:54:37.923053 containerd[1557]: time="2025-05-10T09:54:37.923012170Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.417283174s" May 10 09:54:37.923125 containerd[1557]: time="2025-05-10T09:54:37.923060865Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 10 09:54:37.923766 containerd[1557]: time="2025-05-10T09:54:37.923596690Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 10 09:54:38.446031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount141946497.mount: Deactivated successfully. 
May 10 09:54:38.451581 containerd[1557]: time="2025-05-10T09:54:38.451526118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 10 09:54:38.452294 containerd[1557]: time="2025-05-10T09:54:38.452265167Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 10 09:54:38.453417 containerd[1557]: time="2025-05-10T09:54:38.453377005Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 10 09:54:38.455646 containerd[1557]: time="2025-05-10T09:54:38.455606964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 10 09:54:38.456198 containerd[1557]: time="2025-05-10T09:54:38.456164163Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 532.532644ms"
May 10 09:54:38.456259 containerd[1557]: time="2025-05-10T09:54:38.456198188Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 10 09:54:38.456740 containerd[1557]: time="2025-05-10T09:54:38.456715191Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 10 09:54:39.824241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2775660860.mount: Deactivated successfully.
May 10 09:54:41.826678 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 10 09:54:41.828631 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 09:54:42.198391 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 09:54:42.218486 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 10 09:54:42.342311 kubelet[2181]: E0510 09:54:42.342227 2181 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 09:54:42.346860 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 09:54:42.347080 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 09:54:42.347485 systemd[1]: kubelet.service: Consumed 279ms CPU time, 94M memory peak.
May 10 09:54:42.570385 containerd[1557]: time="2025-05-10T09:54:42.570201453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:54:42.572576 containerd[1557]: time="2025-05-10T09:54:42.572539710Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
May 10 09:54:42.574491 containerd[1557]: time="2025-05-10T09:54:42.574442438Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:54:42.577568 containerd[1557]: time="2025-05-10T09:54:42.577499580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:54:42.578665 containerd[1557]: time="2025-05-10T09:54:42.578629834Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.121885573s"
May 10 09:54:42.578730 containerd[1557]: time="2025-05-10T09:54:42.578664134Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
May 10 09:54:45.045629 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 09:54:45.045797 systemd[1]: kubelet.service: Consumed 279ms CPU time, 94M memory peak.
May 10 09:54:45.048264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 09:54:45.082225 systemd[1]: Reload requested from client PID 2219 ('systemctl') (unit session-7.scope)...
May 10 09:54:45.082250 systemd[1]: Reloading...
May 10 09:54:45.191179 zram_generator::config[2262]: No configuration found.
May 10 09:54:45.396743 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 10 09:54:45.535616 systemd[1]: Reloading finished in 452 ms.
May 10 09:54:45.593746 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 10 09:54:45.593849 systemd[1]: kubelet.service: Failed with result 'signal'.
May 10 09:54:45.594169 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 09:54:45.594214 systemd[1]: kubelet.service: Consumed 146ms CPU time, 83.6M memory peak.
May 10 09:54:45.596975 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 09:54:45.790356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 09:54:45.807712 (kubelet)[2310]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 10 09:54:45.875167 kubelet[2310]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 10 09:54:45.875167 kubelet[2310]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 10 09:54:45.875167 kubelet[2310]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 10 09:54:45.875637 kubelet[2310]: I0510 09:54:45.875214 2310 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 10 09:54:46.195779 kubelet[2310]: I0510 09:54:46.195624 2310 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 10 09:54:46.195779 kubelet[2310]: I0510 09:54:46.195664 2310 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 10 09:54:46.196008 kubelet[2310]: I0510 09:54:46.195940 2310 server.go:929] "Client rotation is on, will bootstrap in background"
May 10 09:54:46.222030 kubelet[2310]: I0510 09:54:46.221977 2310 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 10 09:54:46.224009 kubelet[2310]: E0510 09:54:46.223973 2310 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError"
May 10 09:54:46.232054 kubelet[2310]: I0510 09:54:46.232026 2310 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 10 09:54:46.238895 kubelet[2310]: I0510 09:54:46.238843 2310 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 10 09:54:46.240552 kubelet[2310]: I0510 09:54:46.240509 2310 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 10 09:54:46.240733 kubelet[2310]: I0510 09:54:46.240686 2310 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 10 09:54:46.240912 kubelet[2310]: I0510 09:54:46.240713 2310 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 10 09:54:46.241119 kubelet[2310]: I0510 09:54:46.240920 2310 topology_manager.go:138] "Creating topology manager with none policy"
May 10 09:54:46.241119 kubelet[2310]: I0510 09:54:46.240929 2310 container_manager_linux.go:300] "Creating device plugin manager"
May 10 09:54:46.241119 kubelet[2310]: I0510 09:54:46.241056 2310 state_mem.go:36] "Initialized new in-memory state store"
May 10 09:54:46.242641 kubelet[2310]: I0510 09:54:46.242603 2310 kubelet.go:408] "Attempting to sync node with API server"
May 10 09:54:46.242641 kubelet[2310]: I0510 09:54:46.242627 2310 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 10 09:54:46.242729 kubelet[2310]: I0510 09:54:46.242680 2310 kubelet.go:314] "Adding apiserver pod source"
May 10 09:54:46.242729 kubelet[2310]: I0510 09:54:46.242710 2310 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 10 09:54:46.246256 kubelet[2310]: W0510 09:54:46.245946 2310 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused
May 10 09:54:46.246256 kubelet[2310]: E0510 09:54:46.246041 2310 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError"
May 10 09:54:46.246711 kubelet[2310]: W0510 09:54:46.246650 2310 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused
May 10 09:54:46.246759 kubelet[2310]: E0510 09:54:46.246729 2310 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError"
May 10 09:54:46.249667 kubelet[2310]: I0510 09:54:46.249578 2310 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 10 09:54:46.251777 kubelet[2310]: I0510 09:54:46.251755 2310 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 10 09:54:46.252428 kubelet[2310]: W0510 09:54:46.252391 2310 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 10 09:54:46.253113 kubelet[2310]: I0510 09:54:46.253078 2310 server.go:1269] "Started kubelet"
May 10 09:54:46.253214 kubelet[2310]: I0510 09:54:46.253170 2310 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 10 09:54:46.254181 kubelet[2310]: I0510 09:54:46.254133 2310 server.go:460] "Adding debug handlers to kubelet server"
May 10 09:54:46.259370 kubelet[2310]: I0510 09:54:46.259300 2310 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 10 09:54:46.259501 kubelet[2310]: I0510 09:54:46.259459 2310 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 10 09:54:46.262201 kubelet[2310]: I0510 09:54:46.259576 2310 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 10 09:54:46.262201 kubelet[2310]: I0510 09:54:46.260230 2310 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 10 09:54:46.262358 kubelet[2310]: I0510 09:54:46.262326 2310 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 10 09:54:46.262940 kubelet[2310]: E0510 09:54:46.262553 2310 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 09:54:46.263015 kubelet[2310]: I0510 09:54:46.262951 2310 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 10 09:54:46.263072 kubelet[2310]: I0510 09:54:46.263048 2310 reconciler.go:26] "Reconciler: start to sync state"
May 10 09:54:46.263376 kubelet[2310]: E0510 09:54:46.263297 2310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="200ms"
May 10 09:54:46.263538 kubelet[2310]: W0510 09:54:46.263483 2310 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused
May 10 09:54:46.263600 kubelet[2310]: E0510 09:54:46.263563 2310 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError"
May 10 09:54:46.264008 kubelet[2310]: I0510 09:54:46.263986 2310 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 10 09:54:46.266272 kubelet[2310]: E0510 09:54:46.266241 2310 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 10 09:54:46.266272 kubelet[2310]: I0510 09:54:46.266249 2310 factory.go:221] Registration of the containerd container factory successfully
May 10 09:54:46.266364 kubelet[2310]: I0510 09:54:46.266291 2310 factory.go:221] Registration of the systemd container factory successfully
May 10 09:54:46.266860 kubelet[2310]: E0510 09:54:46.264107 2310 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183e21d0d3957f7c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-10 09:54:46.253035388 +0000 UTC m=+0.440712580,LastTimestamp:2025-05-10 09:54:46.253035388 +0000 UTC m=+0.440712580,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 10 09:54:46.279710 kubelet[2310]: I0510 09:54:46.279680 2310 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 10 09:54:46.279710 kubelet[2310]: I0510 09:54:46.279699 2310 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 10 09:54:46.279858 kubelet[2310]: I0510 09:54:46.279735 2310 state_mem.go:36] "Initialized new in-memory state store"
May 10 09:54:46.363060 kubelet[2310]: E0510 09:54:46.363011 2310 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 09:54:46.463987 kubelet[2310]: E0510 09:54:46.463693 2310 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 09:54:46.465111 kubelet[2310]: E0510 09:54:46.464995 2310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="400ms"
May 10 09:54:46.554883 kubelet[2310]: I0510 09:54:46.554782 2310 policy_none.go:49] "None policy: Start"
May 10 09:54:46.557319 kubelet[2310]: I0510 09:54:46.556709 2310 memory_manager.go:170] "Starting memorymanager" policy="None"
May 10 09:54:46.557319 kubelet[2310]: I0510 09:54:46.556748 2310 state_mem.go:35] "Initializing new in-memory state store"
May 10 09:54:46.558395 kubelet[2310]: I0510 09:54:46.558313 2310 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 10 09:54:46.559899 kubelet[2310]: I0510 09:54:46.559870 2310 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 10 09:54:46.559991 kubelet[2310]: I0510 09:54:46.559934 2310 status_manager.go:217] "Starting to sync pod status with apiserver"
May 10 09:54:46.559991 kubelet[2310]: I0510 09:54:46.559985 2310 kubelet.go:2321] "Starting kubelet main sync loop"
May 10 09:54:46.560081 kubelet[2310]: E0510 09:54:46.560050 2310 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 10 09:54:46.560716 kubelet[2310]: W0510 09:54:46.560638 2310 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused
May 10 09:54:46.560716 kubelet[2310]: E0510 09:54:46.560689 2310 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError"
May 10 09:54:46.564205 kubelet[2310]: E0510 09:54:46.564171 2310 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 09:54:46.573210 kubelet[2310]: E0510 09:54:46.573014 2310 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183e21d0d3957f7c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-10 09:54:46.253035388 +0000 UTC m=+0.440712580,LastTimestamp:2025-05-10 09:54:46.253035388 +0000 UTC m=+0.440712580,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 10 09:54:46.574163 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 10 09:54:46.586021 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 10 09:54:46.589453 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 10 09:54:46.601363 kubelet[2310]: I0510 09:54:46.601314 2310 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 10 09:54:46.601671 kubelet[2310]: I0510 09:54:46.601644 2310 eviction_manager.go:189] "Eviction manager: starting control loop"
May 10 09:54:46.602149 kubelet[2310]: I0510 09:54:46.601671 2310 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 10 09:54:46.602149 kubelet[2310]: I0510 09:54:46.601975 2310 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 10 09:54:46.603713 kubelet[2310]: E0510 09:54:46.603682 2310 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 10 09:54:46.666808 kubelet[2310]: I0510 09:54:46.666767 2310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d3c27eb593e99a63ea252374aa151f81-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d3c27eb593e99a63ea252374aa151f81\") " pod="kube-system/kube-apiserver-localhost"
May 10 09:54:46.678351 systemd[1]: Created slice kubepods-burstable-podd3c27eb593e99a63ea252374aa151f81.slice - libcontainer container kubepods-burstable-podd3c27eb593e99a63ea252374aa151f81.slice.
May 10 09:54:46.694426 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice.
May 10 09:54:46.697982 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice.
May 10 09:54:46.703901 kubelet[2310]: I0510 09:54:46.703871 2310 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 10 09:54:46.704313 kubelet[2310]: E0510 09:54:46.704275 2310 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost"
May 10 09:54:46.767768 kubelet[2310]: I0510 09:54:46.767111 2310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d3c27eb593e99a63ea252374aa151f81-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d3c27eb593e99a63ea252374aa151f81\") " pod="kube-system/kube-apiserver-localhost"
May 10 09:54:46.767768 kubelet[2310]: I0510 09:54:46.767160 2310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 10 09:54:46.767768 kubelet[2310]: I0510 09:54:46.767185 2310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 10 09:54:46.767768 kubelet[2310]: I0510 09:54:46.767207 2310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost"
May 10 09:54:46.767768 kubelet[2310]: I0510 09:54:46.767224 2310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d3c27eb593e99a63ea252374aa151f81-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d3c27eb593e99a63ea252374aa151f81\") " pod="kube-system/kube-apiserver-localhost"
May 10 09:54:46.767955 kubelet[2310]: I0510 09:54:46.767239 2310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 10 09:54:46.767955 kubelet[2310]: I0510 09:54:46.767252 2310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 10 09:54:46.767955 kubelet[2310]: I0510 09:54:46.767267 2310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 10 09:54:46.865921 kubelet[2310]: E0510 09:54:46.865853 2310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="800ms"
May 10 09:54:46.906514 kubelet[2310]: I0510 09:54:46.906470 2310 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 10 09:54:46.906921 kubelet[2310]: E0510 09:54:46.906871 2310 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost"
May 10 09:54:46.992274 containerd[1557]: time="2025-05-10T09:54:46.992234222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d3c27eb593e99a63ea252374aa151f81,Namespace:kube-system,Attempt:0,}"
May 10 09:54:46.997589 containerd[1557]: time="2025-05-10T09:54:46.997554479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}"
May 10 09:54:47.001397 containerd[1557]: time="2025-05-10T09:54:47.001354178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}"
May 10 09:54:47.308454 kubelet[2310]: I0510 09:54:47.308405 2310 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 10 09:54:47.308940 kubelet[2310]: E0510 09:54:47.308885 2310 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost"
May 10 09:54:47.317443 kubelet[2310]: W0510 09:54:47.317380 2310 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused
May 10 09:54:47.317539 kubelet[2310]: E0510 09:54:47.317440 2310 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError"
May 10 09:54:47.386400 containerd[1557]: time="2025-05-10T09:54:47.386347697Z" level=info msg="connecting to shim ba75a14006a037acc1e45cc6af5e4923f7e0f45190417ae65b5abf54c609d2ed" address="unix:///run/containerd/s/bbe0326de7298826e22cfab3afda62929b20a3cb022411ea7c7f2f359e66cbca" namespace=k8s.io protocol=ttrpc version=3
May 10 09:54:47.387356 containerd[1557]: time="2025-05-10T09:54:47.387331956Z" level=info msg="connecting to shim 3b203cadb9331e67475a2051e7dfe814735725367a71ffdff4d148e1bf6fef60" address="unix:///run/containerd/s/86854a2118378dcf19c61726c52a75c1d7502d8266cd0b711e7d1552e0594416" namespace=k8s.io protocol=ttrpc version=3
May 10 09:54:47.388833 containerd[1557]: time="2025-05-10T09:54:47.388794355Z" level=info msg="connecting to shim a501574d3255a867a10fb01d7ad616d5f0fa06a3e46855ffc1f8957f0c4c4b0c" address="unix:///run/containerd/s/25d72baa871e8b161954fdb8b8bd3ae9bada928d0d1f482e4b9aa33d6626d519" namespace=k8s.io protocol=ttrpc version=3
May 10 09:54:47.417339 systemd[1]: Started cri-containerd-3b203cadb9331e67475a2051e7dfe814735725367a71ffdff4d148e1bf6fef60.scope - libcontainer container 3b203cadb9331e67475a2051e7dfe814735725367a71ffdff4d148e1bf6fef60.
May 10 09:54:47.419292 systemd[1]: Started cri-containerd-a501574d3255a867a10fb01d7ad616d5f0fa06a3e46855ffc1f8957f0c4c4b0c.scope - libcontainer container a501574d3255a867a10fb01d7ad616d5f0fa06a3e46855ffc1f8957f0c4c4b0c.
May 10 09:54:47.421979 systemd[1]: Started cri-containerd-ba75a14006a037acc1e45cc6af5e4923f7e0f45190417ae65b5abf54c609d2ed.scope - libcontainer container ba75a14006a037acc1e45cc6af5e4923f7e0f45190417ae65b5abf54c609d2ed.
May 10 09:54:47.448916 kubelet[2310]: W0510 09:54:47.448875 2310 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused
May 10 09:54:47.449069 kubelet[2310]: E0510 09:54:47.449048 2310 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError"
May 10 09:54:47.470829 containerd[1557]: time="2025-05-10T09:54:47.470669021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d3c27eb593e99a63ea252374aa151f81,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba75a14006a037acc1e45cc6af5e4923f7e0f45190417ae65b5abf54c609d2ed\""
May 10 09:54:47.475392 kubelet[2310]: W0510 09:54:47.475103 2310 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused
May 10 09:54:47.475511 kubelet[2310]: E0510 09:54:47.475405 2310 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError"
May 10 09:54:47.476099 containerd[1557]: time="2025-05-10T09:54:47.476060776Z" level=info msg="CreateContainer within sandbox \"ba75a14006a037acc1e45cc6af5e4923f7e0f45190417ae65b5abf54c609d2ed\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 10 09:54:47.483956 containerd[1557]: time="2025-05-10T09:54:47.483921520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a501574d3255a867a10fb01d7ad616d5f0fa06a3e46855ffc1f8957f0c4c4b0c\""
May 10 09:54:47.486097 containerd[1557]: time="2025-05-10T09:54:47.486058377Z" level=info msg="CreateContainer within sandbox \"a501574d3255a867a10fb01d7ad616d5f0fa06a3e46855ffc1f8957f0c4c4b0c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 10 09:54:47.487080 containerd[1557]: time="2025-05-10T09:54:47.487053721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b203cadb9331e67475a2051e7dfe814735725367a71ffdff4d148e1bf6fef60\""
May 10 09:54:47.491224 containerd[1557]: time="2025-05-10T09:54:47.489063255Z" level=info msg="CreateContainer within sandbox \"3b203cadb9331e67475a2051e7dfe814735725367a71ffdff4d148e1bf6fef60\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 10 09:54:47.491224 containerd[1557]: time="2025-05-10T09:54:47.489233887Z" level=info msg="Container 3e579d7fb4a6b2accc8759a245e6cdcfb7fdeac3ce7f34e3d2d6925c231a5171: CDI devices from CRI Config.CDIDevices: []"
May 10 09:54:47.503768 containerd[1557]: time="2025-05-10T09:54:47.503716229Z" level=info msg="CreateContainer within sandbox \"ba75a14006a037acc1e45cc6af5e4923f7e0f45190417ae65b5abf54c609d2ed\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3e579d7fb4a6b2accc8759a245e6cdcfb7fdeac3ce7f34e3d2d6925c231a5171\""
May 10 09:54:47.504418 containerd[1557]: time="2025-05-10T09:54:47.504382015Z" level=info msg="StartContainer for \"3e579d7fb4a6b2accc8759a245e6cdcfb7fdeac3ce7f34e3d2d6925c231a5171\""
May 10 09:54:47.504877 containerd[1557]: time="2025-05-10T09:54:47.504845786Z" level=info msg="Container d5305e7cb6c581938cfff3477cc613373814daadd6f124f45cda82264e63a274: CDI devices from CRI Config.CDIDevices: []"
May 10 09:54:47.505682 containerd[1557]: time="2025-05-10T09:54:47.505568429Z" level=info msg="connecting to shim 3e579d7fb4a6b2accc8759a245e6cdcfb7fdeac3ce7f34e3d2d6925c231a5171" address="unix:///run/containerd/s/bbe0326de7298826e22cfab3afda62929b20a3cb022411ea7c7f2f359e66cbca" protocol=ttrpc version=3
May 10 09:54:47.507676 containerd[1557]: time="2025-05-10T09:54:47.507637044Z" level=info msg="Container 59282128d05378256ead4ccf1a01815b8cdbc2c41e4ff825d9623ef32157e887: CDI devices from CRI Config.CDIDevices: []"
May 10 09:54:47.512239 containerd[1557]: time="2025-05-10T09:54:47.512194974Z" level=info msg="CreateContainer within sandbox \"a501574d3255a867a10fb01d7ad616d5f0fa06a3e46855ffc1f8957f0c4c4b0c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d5305e7cb6c581938cfff3477cc613373814daadd6f124f45cda82264e63a274\""
May 10 09:54:47.512581 containerd[1557]: time="2025-05-10T09:54:47.512543128Z" level=info msg="StartContainer for \"d5305e7cb6c581938cfff3477cc613373814daadd6f124f45cda82264e63a274\""
May 10 09:54:47.513719 containerd[1557]: time="2025-05-10T09:54:47.513686984Z" level=info msg="connecting to shim d5305e7cb6c581938cfff3477cc613373814daadd6f124f45cda82264e63a274" address="unix:///run/containerd/s/25d72baa871e8b161954fdb8b8bd3ae9bada928d0d1f482e4b9aa33d6626d519" protocol=ttrpc version=3
May 10 09:54:47.515018 containerd[1557]: time="2025-05-10T09:54:47.514894327Z" level=info msg="CreateContainer within sandbox \"3b203cadb9331e67475a2051e7dfe814735725367a71ffdff4d148e1bf6fef60\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"59282128d05378256ead4ccf1a01815b8cdbc2c41e4ff825d9623ef32157e887\""
May 10 09:54:47.515480 containerd[1557]: time="2025-05-10T09:54:47.515453567Z" level=info msg="StartContainer for \"59282128d05378256ead4ccf1a01815b8cdbc2c41e4ff825d9623ef32157e887\""
May 10 09:54:47.517397 containerd[1557]: time="2025-05-10T09:54:47.516897031Z" level=info msg="connecting to shim 59282128d05378256ead4ccf1a01815b8cdbc2c41e4ff825d9623ef32157e887" address="unix:///run/containerd/s/86854a2118378dcf19c61726c52a75c1d7502d8266cd0b711e7d1552e0594416" protocol=ttrpc version=3
May 10 09:54:47.527293 systemd[1]: Started cri-containerd-3e579d7fb4a6b2accc8759a245e6cdcfb7fdeac3ce7f34e3d2d6925c231a5171.scope - libcontainer container 3e579d7fb4a6b2accc8759a245e6cdcfb7fdeac3ce7f34e3d2d6925c231a5171.
May 10 09:54:47.530787 systemd[1]: Started cri-containerd-d5305e7cb6c581938cfff3477cc613373814daadd6f124f45cda82264e63a274.scope - libcontainer container d5305e7cb6c581938cfff3477cc613373814daadd6f124f45cda82264e63a274.
May 10 09:54:47.539115 systemd[1]: Started cri-containerd-59282128d05378256ead4ccf1a01815b8cdbc2c41e4ff825d9623ef32157e887.scope - libcontainer container 59282128d05378256ead4ccf1a01815b8cdbc2c41e4ff825d9623ef32157e887.
May 10 09:54:47.592318 containerd[1557]: time="2025-05-10T09:54:47.591797048Z" level=info msg="StartContainer for \"3e579d7fb4a6b2accc8759a245e6cdcfb7fdeac3ce7f34e3d2d6925c231a5171\" returns successfully"
May 10 09:54:47.593361 containerd[1557]: time="2025-05-10T09:54:47.593311097Z" level=info msg="StartContainer for \"d5305e7cb6c581938cfff3477cc613373814daadd6f124f45cda82264e63a274\" returns successfully"
May 10 09:54:47.600416 containerd[1557]: time="2025-05-10T09:54:47.600369530Z" level=info msg="StartContainer for \"59282128d05378256ead4ccf1a01815b8cdbc2c41e4ff825d9623ef32157e887\" returns successfully"
May 10 09:54:47.630876 kubelet[2310]: W0510 09:54:47.630780 2310 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused
May 10 09:54:47.630876 kubelet[2310]: E0510 09:54:47.630860 2310 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError"
May 10 09:54:48.111386 kubelet[2310]: I0510 09:54:48.111345 2310 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 10 09:54:48.657350 kubelet[2310]: E0510 09:54:48.657291 2310 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 10 09:54:48.743835 kubelet[2310]: I0510 09:54:48.743789 2310 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 10 09:54:48.743835 kubelet[2310]: E0510 09:54:48.743831 2310 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
May 10 09:54:48.750485 kubelet[2310]: E0510 09:54:48.750455 2310 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 09:54:48.851200 kubelet[2310]: E0510 09:54:48.851115 2310 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 09:54:48.951931 kubelet[2310]: E0510 09:54:48.951753 2310 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 09:54:49.052454 kubelet[2310]: E0510 09:54:49.052371 2310 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 09:54:49.153176 kubelet[2310]: E0510 09:54:49.153092 2310 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 09:54:49.253792 kubelet[2310]: E0510 09:54:49.253627 2310 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 09:54:49.354362 kubelet[2310]: E0510 09:54:49.354289 2310 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 09:54:49.454960 kubelet[2310]: E0510 09:54:49.454905 2310 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 09:54:49.555552 kubelet[2310]: E0510 09:54:49.555433 2310 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 09:54:50.244913 kubelet[2310]: I0510 09:54:50.244840 2310 apiserver.go:52] "Watching apiserver"
May 10 09:54:50.263570 kubelet[2310]: I0510 09:54:50.263512 2310 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
May 10 09:54:50.774930 systemd[1]: Reload requested from client PID 2583 ('systemctl') (unit session-7.scope)...
May 10 09:54:50.774949 systemd[1]: Reloading...
May 10 09:54:50.873168 zram_generator::config[2629]: No configuration found.
May 10 09:54:50.969073 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 10 09:54:51.107208 systemd[1]: Reloading finished in 331 ms.
May 10 09:54:51.132351 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 09:54:51.152580 systemd[1]: kubelet.service: Deactivated successfully.
May 10 09:54:51.152918 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 09:54:51.152972 systemd[1]: kubelet.service: Consumed 974ms CPU time, 118.7M memory peak.
May 10 09:54:51.155178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 09:54:51.359435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 09:54:51.371606 (kubelet)[2671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 10 09:54:51.412958 kubelet[2671]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 10 09:54:51.412958 kubelet[2671]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 10 09:54:51.412958 kubelet[2671]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 10 09:54:51.413424 kubelet[2671]: I0510 09:54:51.413018 2671 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 10 09:54:51.422962 kubelet[2671]: I0510 09:54:51.422914 2671 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 10 09:54:51.422962 kubelet[2671]: I0510 09:54:51.422948 2671 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 10 09:54:51.423267 kubelet[2671]: I0510 09:54:51.423247 2671 server.go:929] "Client rotation is on, will bootstrap in background"
May 10 09:54:51.424600 kubelet[2671]: I0510 09:54:51.424569 2671 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 10 09:54:51.426583 kubelet[2671]: I0510 09:54:51.426504 2671 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 10 09:54:51.430064 kubelet[2671]: I0510 09:54:51.430031 2671 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 10 09:54:51.435175 kubelet[2671]: I0510 09:54:51.435133 2671 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 10 09:54:51.435347 kubelet[2671]: I0510 09:54:51.435331 2671 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 10 09:54:51.435547 kubelet[2671]: I0510 09:54:51.435492 2671 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 10 09:54:51.436027 kubelet[2671]: I0510 09:54:51.435541 2671 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 10 09:54:51.436129 kubelet[2671]: I0510 09:54:51.436031 2671 topology_manager.go:138] "Creating topology manager with none policy"
May 10 09:54:51.436129 kubelet[2671]: I0510 09:54:51.436039 2671 container_manager_linux.go:300] "Creating device plugin manager"
May 10 09:54:51.436129 kubelet[2671]: I0510 09:54:51.436082 2671 state_mem.go:36] "Initialized new in-memory state store"
May 10 09:54:51.436246 kubelet[2671]: I0510 09:54:51.436227 2671 kubelet.go:408] "Attempting to sync node with API server"
May 10 09:54:51.436246 kubelet[2671]: I0510 09:54:51.436241 2671 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 10 09:54:51.436301 kubelet[2671]: I0510 09:54:51.436272 2671 kubelet.go:314] "Adding apiserver pod source"
May 10 09:54:51.436301 kubelet[2671]: I0510 09:54:51.436287 2671 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 10 09:54:51.439171 kubelet[2671]: I0510 09:54:51.437657 2671 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 10 09:54:51.439171 kubelet[2671]: I0510 09:54:51.438041 2671 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 10 09:54:51.439171 kubelet[2671]: I0510 09:54:51.438489 2671 server.go:1269] "Started kubelet"
May 10 09:54:51.440770 kubelet[2671]: I0510 09:54:51.440743 2671 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 10 09:54:51.442328 kubelet[2671]: I0510 09:54:51.442298 2671 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 10 09:54:51.442608 kubelet[2671]: I0510 09:54:51.442592 2671 server.go:460] "Adding debug handlers to kubelet server"
May 10 09:54:51.444094 kubelet[2671]: I0510 09:54:51.444014 2671 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 10 09:54:51.444596 kubelet[2671]: I0510 09:54:51.444541 2671 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 10 09:54:51.445922 kubelet[2671]: I0510 09:54:51.445888 2671 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 10 09:54:51.447650 kubelet[2671]: I0510 09:54:51.447081 2671 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 10 09:54:51.447650 kubelet[2671]: E0510 09:54:51.447215 2671 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 09:54:51.450933 kubelet[2671]: I0510 09:54:51.450389 2671 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 10 09:54:51.450933 kubelet[2671]: I0510 09:54:51.450580 2671 factory.go:221] Registration of the systemd container factory successfully
May 10 09:54:51.450933 kubelet[2671]: I0510 09:54:51.450598 2671 reconciler.go:26] "Reconciler: start to sync state"
May 10 09:54:51.450933 kubelet[2671]: I0510 09:54:51.450680 2671 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 10 09:54:51.452596 kubelet[2671]: I0510 09:54:51.452574 2671 factory.go:221] Registration of the containerd container factory successfully
May 10 09:54:51.457945 kubelet[2671]: I0510 09:54:51.457912 2671 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 10 09:54:51.459183 kubelet[2671]: I0510 09:54:51.459153 2671 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 10 09:54:51.459242 kubelet[2671]: I0510 09:54:51.459186 2671 status_manager.go:217] "Starting to sync pod status with apiserver"
May 10 09:54:51.459242 kubelet[2671]: I0510 09:54:51.459206 2671 kubelet.go:2321] "Starting kubelet main sync loop"
May 10 09:54:51.459287 kubelet[2671]: E0510 09:54:51.459241 2671 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 10 09:54:51.496287 kubelet[2671]: I0510 09:54:51.496166 2671 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 10 09:54:51.496287 kubelet[2671]: I0510 09:54:51.496193 2671 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 10 09:54:51.496287 kubelet[2671]: I0510 09:54:51.496213 2671 state_mem.go:36] "Initialized new in-memory state store"
May 10 09:54:51.496500 kubelet[2671]: I0510 09:54:51.496371 2671 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 10 09:54:51.496500 kubelet[2671]: I0510 09:54:51.496382 2671 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 10 09:54:51.496500 kubelet[2671]: I0510 09:54:51.496406 2671 policy_none.go:49] "None policy: Start"
May 10 09:54:51.496932 kubelet[2671]: I0510 09:54:51.496916 2671 memory_manager.go:170] "Starting memorymanager" policy="None"
May 10 09:54:51.496977 kubelet[2671]: I0510 09:54:51.496941 2671 state_mem.go:35] "Initializing new in-memory state store"
May 10 09:54:51.497148 kubelet[2671]: I0510 09:54:51.497099 2671 state_mem.go:75] "Updated machine memory state"
May 10 09:54:51.502250 kubelet[2671]: I0510 09:54:51.502203 2671 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 10 09:54:51.502467 kubelet[2671]: I0510 09:54:51.502443 2671 eviction_manager.go:189] "Eviction manager: starting control loop"
May 10 09:54:51.502523 kubelet[2671]: I0510 09:54:51.502461 2671 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 10 09:54:51.502759 kubelet[2671]: I0510 09:54:51.502736 2671 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 10 09:54:51.604613 kubelet[2671]: E0510 09:54:51.604554 2671 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 10 09:54:51.605318 kubelet[2671]: E0510 09:54:51.605287 2671 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 10 09:54:51.605967 kubelet[2671]: I0510 09:54:51.605924 2671 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 10 09:54:51.624097 kubelet[2671]: I0510 09:54:51.623962 2671 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
May 10 09:54:51.624097 kubelet[2671]: I0510 09:54:51.624073 2671 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 10 09:54:51.652791 kubelet[2671]: I0510 09:54:51.652742 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 10 09:54:51.652791 kubelet[2671]: I0510 09:54:51.652780 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 10 09:54:51.652960 kubelet[2671]: I0510 09:54:51.652880 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d3c27eb593e99a63ea252374aa151f81-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d3c27eb593e99a63ea252374aa151f81\") " pod="kube-system/kube-apiserver-localhost"
May 10 09:54:51.652960 kubelet[2671]: I0510 09:54:51.652944 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d3c27eb593e99a63ea252374aa151f81-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d3c27eb593e99a63ea252374aa151f81\") " pod="kube-system/kube-apiserver-localhost"
May 10 09:54:51.653047 kubelet[2671]: I0510 09:54:51.652977 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d3c27eb593e99a63ea252374aa151f81-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d3c27eb593e99a63ea252374aa151f81\") " pod="kube-system/kube-apiserver-localhost"
May 10 09:54:51.653047 kubelet[2671]: I0510 09:54:51.653000 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 10 09:54:51.653047 kubelet[2671]: I0510 09:54:51.653032 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 10 09:54:51.653153 kubelet[2671]: I0510 09:54:51.653060 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 10 09:54:51.653153 kubelet[2671]: I0510 09:54:51.653083 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost"
May 10 09:54:52.438049 kubelet[2671]: I0510 09:54:52.437996 2671 apiserver.go:52] "Watching apiserver"
May 10 09:54:52.451484 kubelet[2671]: I0510 09:54:52.451421 2671 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
May 10 09:54:52.843123 kubelet[2671]: I0510 09:54:52.842907 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.842872727 podStartE2EDuration="2.842872727s" podCreationTimestamp="2025-05-10 09:54:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 09:54:52.677041168 +0000 UTC m=+1.300644954" watchObservedRunningTime="2025-05-10 09:54:52.842872727 +0000 UTC m=+1.466476513"
May 10 09:54:52.922749 kubelet[2671]: I0510 09:54:52.922659 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.92263863 podStartE2EDuration="1.92263863s" podCreationTimestamp="2025-05-10 09:54:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 09:54:52.843085766 +0000 UTC m=+1.466689552" watchObservedRunningTime="2025-05-10 09:54:52.92263863 +0000 UTC m=+1.546242416"
May 10 09:54:53.026389 kubelet[2671]: I0510 09:54:53.025933 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.025905481 podStartE2EDuration="3.025905481s" podCreationTimestamp="2025-05-10 09:54:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 09:54:52.922796433 +0000 UTC m=+1.546400219" watchObservedRunningTime="2025-05-10 09:54:53.025905481 +0000 UTC m=+1.649509267"
May 10 09:54:56.465333 sudo[1756]: pam_unix(sudo:session): session closed for user root
May 10 09:54:56.466900 sshd[1755]: Connection closed by 10.0.0.1 port 33614
May 10 09:54:56.467580 sshd-session[1752]: pam_unix(sshd:session): session closed for user core
May 10 09:54:56.472840 systemd[1]: sshd@6-10.0.0.34:22-10.0.0.1:33614.service: Deactivated successfully.
May 10 09:54:56.475538 systemd[1]: session-7.scope: Deactivated successfully.
May 10 09:54:56.475775 systemd[1]: session-7.scope: Consumed 5.335s CPU time, 226.2M memory peak.
May 10 09:54:56.477303 systemd-logind[1541]: Session 7 logged out. Waiting for processes to exit.
May 10 09:54:56.478682 systemd-logind[1541]: Removed session 7.
May 10 09:54:57.884847 kubelet[2671]: I0510 09:54:57.884806 2671 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 10 09:54:57.885312 containerd[1557]: time="2025-05-10T09:54:57.885176754Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 10 09:54:57.885574 kubelet[2671]: I0510 09:54:57.885370 2671 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 10 09:54:58.733849 systemd[1]: Created slice kubepods-besteffort-pod72b40d55_1ca0_480f_9bea_07f3e4a999d8.slice - libcontainer container kubepods-besteffort-pod72b40d55_1ca0_480f_9bea_07f3e4a999d8.slice.
May 10 09:54:58.817692 kubelet[2671]: I0510 09:54:58.817624 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v49np\" (UniqueName: \"kubernetes.io/projected/72b40d55-1ca0-480f-9bea-07f3e4a999d8-kube-api-access-v49np\") pod \"kube-proxy-hs2zn\" (UID: \"72b40d55-1ca0-480f-9bea-07f3e4a999d8\") " pod="kube-system/kube-proxy-hs2zn"
May 10 09:54:58.817692 kubelet[2671]: I0510 09:54:58.817687 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/72b40d55-1ca0-480f-9bea-07f3e4a999d8-kube-proxy\") pod \"kube-proxy-hs2zn\" (UID: \"72b40d55-1ca0-480f-9bea-07f3e4a999d8\") " pod="kube-system/kube-proxy-hs2zn"
May 10 09:54:58.817929 kubelet[2671]: I0510 09:54:58.817714 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72b40d55-1ca0-480f-9bea-07f3e4a999d8-xtables-lock\") pod \"kube-proxy-hs2zn\" (UID: \"72b40d55-1ca0-480f-9bea-07f3e4a999d8\") " pod="kube-system/kube-proxy-hs2zn"
May 10 09:54:58.817929 kubelet[2671]: I0510 09:54:58.817737 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72b40d55-1ca0-480f-9bea-07f3e4a999d8-lib-modules\") pod \"kube-proxy-hs2zn\" (UID: \"72b40d55-1ca0-480f-9bea-07f3e4a999d8\") " pod="kube-system/kube-proxy-hs2zn"
May 10 09:54:58.911251 systemd[1]: Created slice kubepods-besteffort-pod2902b933_f7bf_455b_a7e8_5d0e4948cf92.slice - libcontainer container kubepods-besteffort-pod2902b933_f7bf_455b_a7e8_5d0e4948cf92.slice.
May 10 09:54:59.019108 kubelet[2671]: I0510 09:54:59.018959 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2902b933-f7bf-455b-a7e8-5d0e4948cf92-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-zkvj6\" (UID: \"2902b933-f7bf-455b-a7e8-5d0e4948cf92\") " pod="tigera-operator/tigera-operator-6f6897fdc5-zkvj6"
May 10 09:54:59.019108 kubelet[2671]: I0510 09:54:59.019002 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kplrk\" (UniqueName: \"kubernetes.io/projected/2902b933-f7bf-455b-a7e8-5d0e4948cf92-kube-api-access-kplrk\") pod \"tigera-operator-6f6897fdc5-zkvj6\" (UID: \"2902b933-f7bf-455b-a7e8-5d0e4948cf92\") " pod="tigera-operator/tigera-operator-6f6897fdc5-zkvj6"
May 10 09:54:59.046244 containerd[1557]: time="2025-05-10T09:54:59.046180449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hs2zn,Uid:72b40d55-1ca0-480f-9bea-07f3e4a999d8,Namespace:kube-system,Attempt:0,}"
May 10 09:54:59.211858 containerd[1557]: time="2025-05-10T09:54:59.211797292Z" level=info msg="connecting to shim 001b8f7750953d56ae196a97fbd1985c1e8ac83d48c092789c915b255f6fb1bd" address="unix:///run/containerd/s/e8fbd42c153b711e22198d0878ea2dabfd89dd68ed0f4bf517052a249c4baeb7" namespace=k8s.io protocol=ttrpc version=3
May 10 09:54:59.216428 containerd[1557]: time="2025-05-10T09:54:59.216122017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-zkvj6,Uid:2902b933-f7bf-455b-a7e8-5d0e4948cf92,Namespace:tigera-operator,Attempt:0,}"
May 10 09:54:59.238018 containerd[1557]: time="2025-05-10T09:54:59.237903284Z" level=info msg="connecting to shim 6256df257926e3526e3bd38bf1647693694763a051601db29c450634ffb547b8" address="unix:///run/containerd/s/47b057f3dd3610a1a349af486dd011d19287de650e4f49dfde67b3a3a92fa8b9" namespace=k8s.io protocol=ttrpc version=3
May 10 09:54:59.282344 systemd[1]: Started cri-containerd-001b8f7750953d56ae196a97fbd1985c1e8ac83d48c092789c915b255f6fb1bd.scope - libcontainer container 001b8f7750953d56ae196a97fbd1985c1e8ac83d48c092789c915b255f6fb1bd.
May 10 09:54:59.284354 systemd[1]: Started cri-containerd-6256df257926e3526e3bd38bf1647693694763a051601db29c450634ffb547b8.scope - libcontainer container 6256df257926e3526e3bd38bf1647693694763a051601db29c450634ffb547b8.
May 10 09:54:59.316378 containerd[1557]: time="2025-05-10T09:54:59.315701263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hs2zn,Uid:72b40d55-1ca0-480f-9bea-07f3e4a999d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"001b8f7750953d56ae196a97fbd1985c1e8ac83d48c092789c915b255f6fb1bd\""
May 10 09:54:59.318764 containerd[1557]: time="2025-05-10T09:54:59.318720029Z" level=info msg="CreateContainer within sandbox \"001b8f7750953d56ae196a97fbd1985c1e8ac83d48c092789c915b255f6fb1bd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 10 09:54:59.334567 containerd[1557]: time="2025-05-10T09:54:59.334510023Z" level=info msg="Container ebd8d88f2a0921fffd0507872ca1b6cc12516ca176b6cf51ca71e19c14a1d382: CDI devices from CRI Config.CDIDevices: []"
May 10 09:54:59.335382 containerd[1557]: time="2025-05-10T09:54:59.335338523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-zkvj6,Uid:2902b933-f7bf-455b-a7e8-5d0e4948cf92,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6256df257926e3526e3bd38bf1647693694763a051601db29c450634ffb547b8\""
May 10 09:54:59.337154 containerd[1557]: time="2025-05-10T09:54:59.337109740Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 10 09:54:59.343569 containerd[1557]: time="2025-05-10T09:54:59.343537065Z" level=info msg="CreateContainer within sandbox \"001b8f7750953d56ae196a97fbd1985c1e8ac83d48c092789c915b255f6fb1bd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ebd8d88f2a0921fffd0507872ca1b6cc12516ca176b6cf51ca71e19c14a1d382\""
May 10 09:54:59.344124 containerd[1557]: time="2025-05-10T09:54:59.343959300Z" level=info msg="StartContainer for \"ebd8d88f2a0921fffd0507872ca1b6cc12516ca176b6cf51ca71e19c14a1d382\""
May 10 09:54:59.345450 containerd[1557]: time="2025-05-10T09:54:59.345419333Z" level=info msg="connecting to shim ebd8d88f2a0921fffd0507872ca1b6cc12516ca176b6cf51ca71e19c14a1d382" address="unix:///run/containerd/s/e8fbd42c153b711e22198d0878ea2dabfd89dd68ed0f4bf517052a249c4baeb7" protocol=ttrpc version=3
May 10 09:54:59.365285 systemd[1]: Started cri-containerd-ebd8d88f2a0921fffd0507872ca1b6cc12516ca176b6cf51ca71e19c14a1d382.scope - libcontainer container ebd8d88f2a0921fffd0507872ca1b6cc12516ca176b6cf51ca71e19c14a1d382.
May 10 09:54:59.413969 containerd[1557]: time="2025-05-10T09:54:59.413911088Z" level=info msg="StartContainer for \"ebd8d88f2a0921fffd0507872ca1b6cc12516ca176b6cf51ca71e19c14a1d382\" returns successfully"
May 10 09:54:59.504234 kubelet[2671]: I0510 09:54:59.504158 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hs2zn" podStartSLOduration=1.504120381 podStartE2EDuration="1.504120381s" podCreationTimestamp="2025-05-10 09:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 09:54:59.503926992 +0000 UTC m=+8.127530798" watchObservedRunningTime="2025-05-10 09:54:59.504120381 +0000 UTC m=+8.127724177"
May 10 09:55:00.482890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount227908107.mount: Deactivated successfully.
May 10 09:55:03.332256 update_engine[1545]: I20250510 09:55:03.332180 1545 update_attempter.cc:509] Updating boot flags...
May 10 09:55:03.386379 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3028) May 10 09:55:03.453175 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3031) May 10 09:55:03.652943 containerd[1557]: time="2025-05-10T09:55:03.652804715Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:55:03.653849 containerd[1557]: time="2025-05-10T09:55:03.653821818Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 10 09:55:03.655209 containerd[1557]: time="2025-05-10T09:55:03.655172185Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:55:03.657545 containerd[1557]: time="2025-05-10T09:55:03.657486414Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:55:03.658082 containerd[1557]: time="2025-05-10T09:55:03.658034516Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 4.320892695s" May 10 09:55:03.658082 containerd[1557]: time="2025-05-10T09:55:03.658069422Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 10 09:55:03.660485 containerd[1557]: time="2025-05-10T09:55:03.660433927Z" level=info msg="CreateContainer within sandbox 
\"6256df257926e3526e3bd38bf1647693694763a051601db29c450634ffb547b8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 10 09:55:03.670238 containerd[1557]: time="2025-05-10T09:55:03.670171135Z" level=info msg="Container 0199ed3a95affe347f64138085921cd14ec934c69c7aa0248be36891290492c8: CDI devices from CRI Config.CDIDevices: []" May 10 09:55:03.676880 containerd[1557]: time="2025-05-10T09:55:03.676837436Z" level=info msg="CreateContainer within sandbox \"6256df257926e3526e3bd38bf1647693694763a051601db29c450634ffb547b8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0199ed3a95affe347f64138085921cd14ec934c69c7aa0248be36891290492c8\"" May 10 09:55:03.677561 containerd[1557]: time="2025-05-10T09:55:03.677429261Z" level=info msg="StartContainer for \"0199ed3a95affe347f64138085921cd14ec934c69c7aa0248be36891290492c8\"" May 10 09:55:03.678232 containerd[1557]: time="2025-05-10T09:55:03.678202620Z" level=info msg="connecting to shim 0199ed3a95affe347f64138085921cd14ec934c69c7aa0248be36891290492c8" address="unix:///run/containerd/s/47b057f3dd3610a1a349af486dd011d19287de650e4f49dfde67b3a3a92fa8b9" protocol=ttrpc version=3 May 10 09:55:03.702541 systemd[1]: Started cri-containerd-0199ed3a95affe347f64138085921cd14ec934c69c7aa0248be36891290492c8.scope - libcontainer container 0199ed3a95affe347f64138085921cd14ec934c69c7aa0248be36891290492c8. 
May 10 09:55:03.787077 containerd[1557]: time="2025-05-10T09:55:03.787023635Z" level=info msg="StartContainer for \"0199ed3a95affe347f64138085921cd14ec934c69c7aa0248be36891290492c8\" returns successfully" May 10 09:55:05.121531 kubelet[2671]: I0510 09:55:05.121451 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-zkvj6" podStartSLOduration=2.799015489 podStartE2EDuration="7.121430791s" podCreationTimestamp="2025-05-10 09:54:58 +0000 UTC" firstStartedPulling="2025-05-10 09:54:59.336534804 +0000 UTC m=+7.960138590" lastFinishedPulling="2025-05-10 09:55:03.658950106 +0000 UTC m=+12.282553892" observedRunningTime="2025-05-10 09:55:04.513473084 +0000 UTC m=+13.137076880" watchObservedRunningTime="2025-05-10 09:55:05.121430791 +0000 UTC m=+13.745034577" May 10 09:55:06.603795 systemd[1]: Created slice kubepods-besteffort-pod9b11f50d_6f7d_429f_93c4_16a484faf094.slice - libcontainer container kubepods-besteffort-pod9b11f50d_6f7d_429f_93c4_16a484faf094.slice. May 10 09:55:06.653455 systemd[1]: Created slice kubepods-besteffort-poda9d17786_9f3c_49a7_8f15_46c09211ae10.slice - libcontainer container kubepods-besteffort-poda9d17786_9f3c_49a7_8f15_46c09211ae10.slice. 
May 10 09:55:06.667830 kubelet[2671]: I0510 09:55:06.667775 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b11f50d-6f7d-429f-93c4-16a484faf094-tigera-ca-bundle\") pod \"calico-typha-7d557f47b6-lbjkm\" (UID: \"9b11f50d-6f7d-429f-93c4-16a484faf094\") " pod="calico-system/calico-typha-7d557f47b6-lbjkm" May 10 09:55:06.667830 kubelet[2671]: I0510 09:55:06.667826 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9b11f50d-6f7d-429f-93c4-16a484faf094-typha-certs\") pod \"calico-typha-7d557f47b6-lbjkm\" (UID: \"9b11f50d-6f7d-429f-93c4-16a484faf094\") " pod="calico-system/calico-typha-7d557f47b6-lbjkm" May 10 09:55:06.668376 kubelet[2671]: I0510 09:55:06.667885 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjd6t\" (UniqueName: \"kubernetes.io/projected/9b11f50d-6f7d-429f-93c4-16a484faf094-kube-api-access-qjd6t\") pod \"calico-typha-7d557f47b6-lbjkm\" (UID: \"9b11f50d-6f7d-429f-93c4-16a484faf094\") " pod="calico-system/calico-typha-7d557f47b6-lbjkm" May 10 09:55:06.760417 kubelet[2671]: E0510 09:55:06.760080 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-82ncp" podUID="d08ba91f-1a46-45d6-a939-9e2d9f0457e9" May 10 09:55:06.770184 kubelet[2671]: I0510 09:55:06.769318 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9d17786-9f3c-49a7-8f15-46c09211ae10-xtables-lock\") pod \"calico-node-8rrjp\" (UID: \"a9d17786-9f3c-49a7-8f15-46c09211ae10\") " 
pod="calico-system/calico-node-8rrjp" May 10 09:55:06.770184 kubelet[2671]: I0510 09:55:06.769371 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a9d17786-9f3c-49a7-8f15-46c09211ae10-var-run-calico\") pod \"calico-node-8rrjp\" (UID: \"a9d17786-9f3c-49a7-8f15-46c09211ae10\") " pod="calico-system/calico-node-8rrjp" May 10 09:55:06.770184 kubelet[2671]: I0510 09:55:06.769394 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a9d17786-9f3c-49a7-8f15-46c09211ae10-var-lib-calico\") pod \"calico-node-8rrjp\" (UID: \"a9d17786-9f3c-49a7-8f15-46c09211ae10\") " pod="calico-system/calico-node-8rrjp" May 10 09:55:06.770184 kubelet[2671]: I0510 09:55:06.769416 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a9d17786-9f3c-49a7-8f15-46c09211ae10-cni-log-dir\") pod \"calico-node-8rrjp\" (UID: \"a9d17786-9f3c-49a7-8f15-46c09211ae10\") " pod="calico-system/calico-node-8rrjp" May 10 09:55:06.770184 kubelet[2671]: I0510 09:55:06.769438 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml9j9\" (UniqueName: \"kubernetes.io/projected/a9d17786-9f3c-49a7-8f15-46c09211ae10-kube-api-access-ml9j9\") pod \"calico-node-8rrjp\" (UID: \"a9d17786-9f3c-49a7-8f15-46c09211ae10\") " pod="calico-system/calico-node-8rrjp" May 10 09:55:06.770443 kubelet[2671]: I0510 09:55:06.769462 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a9d17786-9f3c-49a7-8f15-46c09211ae10-cni-bin-dir\") pod \"calico-node-8rrjp\" (UID: \"a9d17786-9f3c-49a7-8f15-46c09211ae10\") " pod="calico-system/calico-node-8rrjp" May 10 09:55:06.770443 
kubelet[2671]: I0510 09:55:06.769483 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9d17786-9f3c-49a7-8f15-46c09211ae10-tigera-ca-bundle\") pod \"calico-node-8rrjp\" (UID: \"a9d17786-9f3c-49a7-8f15-46c09211ae10\") " pod="calico-system/calico-node-8rrjp" May 10 09:55:06.770443 kubelet[2671]: I0510 09:55:06.769518 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9d17786-9f3c-49a7-8f15-46c09211ae10-lib-modules\") pod \"calico-node-8rrjp\" (UID: \"a9d17786-9f3c-49a7-8f15-46c09211ae10\") " pod="calico-system/calico-node-8rrjp" May 10 09:55:06.770443 kubelet[2671]: I0510 09:55:06.769767 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a9d17786-9f3c-49a7-8f15-46c09211ae10-policysync\") pod \"calico-node-8rrjp\" (UID: \"a9d17786-9f3c-49a7-8f15-46c09211ae10\") " pod="calico-system/calico-node-8rrjp" May 10 09:55:06.770443 kubelet[2671]: I0510 09:55:06.769839 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a9d17786-9f3c-49a7-8f15-46c09211ae10-node-certs\") pod \"calico-node-8rrjp\" (UID: \"a9d17786-9f3c-49a7-8f15-46c09211ae10\") " pod="calico-system/calico-node-8rrjp" May 10 09:55:06.770569 kubelet[2671]: I0510 09:55:06.769860 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a9d17786-9f3c-49a7-8f15-46c09211ae10-cni-net-dir\") pod \"calico-node-8rrjp\" (UID: \"a9d17786-9f3c-49a7-8f15-46c09211ae10\") " pod="calico-system/calico-node-8rrjp" May 10 09:55:06.770569 kubelet[2671]: I0510 09:55:06.769876 2671 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a9d17786-9f3c-49a7-8f15-46c09211ae10-flexvol-driver-host\") pod \"calico-node-8rrjp\" (UID: \"a9d17786-9f3c-49a7-8f15-46c09211ae10\") " pod="calico-system/calico-node-8rrjp" May 10 09:55:06.870548 kubelet[2671]: I0510 09:55:06.870384 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d08ba91f-1a46-45d6-a939-9e2d9f0457e9-socket-dir\") pod \"csi-node-driver-82ncp\" (UID: \"d08ba91f-1a46-45d6-a939-9e2d9f0457e9\") " pod="calico-system/csi-node-driver-82ncp" May 10 09:55:06.870548 kubelet[2671]: I0510 09:55:06.870435 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d08ba91f-1a46-45d6-a939-9e2d9f0457e9-registration-dir\") pod \"csi-node-driver-82ncp\" (UID: \"d08ba91f-1a46-45d6-a939-9e2d9f0457e9\") " pod="calico-system/csi-node-driver-82ncp" May 10 09:55:06.870548 kubelet[2671]: I0510 09:55:06.870455 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4kdl\" (UniqueName: \"kubernetes.io/projected/d08ba91f-1a46-45d6-a939-9e2d9f0457e9-kube-api-access-k4kdl\") pod \"csi-node-driver-82ncp\" (UID: \"d08ba91f-1a46-45d6-a939-9e2d9f0457e9\") " pod="calico-system/csi-node-driver-82ncp" May 10 09:55:06.870548 kubelet[2671]: I0510 09:55:06.870549 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d08ba91f-1a46-45d6-a939-9e2d9f0457e9-kubelet-dir\") pod \"csi-node-driver-82ncp\" (UID: \"d08ba91f-1a46-45d6-a939-9e2d9f0457e9\") " pod="calico-system/csi-node-driver-82ncp" May 10 09:55:06.870798 kubelet[2671]: I0510 09:55:06.870575 2671 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d08ba91f-1a46-45d6-a939-9e2d9f0457e9-varrun\") pod \"csi-node-driver-82ncp\" (UID: \"d08ba91f-1a46-45d6-a939-9e2d9f0457e9\") " pod="calico-system/csi-node-driver-82ncp" May 10 09:55:06.872046 kubelet[2671]: E0510 09:55:06.872006 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.872046 kubelet[2671]: W0510 09:55:06.872036 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.872120 kubelet[2671]: E0510 09:55:06.872062 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:06.872428 kubelet[2671]: E0510 09:55:06.872405 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.872548 kubelet[2671]: W0510 09:55:06.872483 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.872548 kubelet[2671]: E0510 09:55:06.872536 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:06.872839 kubelet[2671]: E0510 09:55:06.872786 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.872839 kubelet[2671]: W0510 09:55:06.872794 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.872839 kubelet[2671]: E0510 09:55:06.872817 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:06.873679 kubelet[2671]: E0510 09:55:06.873083 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.873679 kubelet[2671]: W0510 09:55:06.873098 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.873679 kubelet[2671]: E0510 09:55:06.873107 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:06.873679 kubelet[2671]: E0510 09:55:06.873667 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.873679 kubelet[2671]: W0510 09:55:06.873679 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.873836 kubelet[2671]: E0510 09:55:06.873698 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:06.874007 kubelet[2671]: E0510 09:55:06.873977 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.874007 kubelet[2671]: W0510 09:55:06.874001 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.874084 kubelet[2671]: E0510 09:55:06.874020 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:06.874968 kubelet[2671]: E0510 09:55:06.874278 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.874968 kubelet[2671]: W0510 09:55:06.874294 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.874968 kubelet[2671]: E0510 09:55:06.874398 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:06.876235 kubelet[2671]: E0510 09:55:06.876164 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.876235 kubelet[2671]: W0510 09:55:06.876184 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.876429 kubelet[2671]: E0510 09:55:06.876264 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:06.879148 kubelet[2671]: E0510 09:55:06.878329 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.879148 kubelet[2671]: W0510 09:55:06.878344 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.879148 kubelet[2671]: E0510 09:55:06.878435 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:06.879148 kubelet[2671]: E0510 09:55:06.878611 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.879148 kubelet[2671]: W0510 09:55:06.878622 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.879148 kubelet[2671]: E0510 09:55:06.878694 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:06.879148 kubelet[2671]: E0510 09:55:06.878850 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.879148 kubelet[2671]: W0510 09:55:06.878860 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.879148 kubelet[2671]: E0510 09:55:06.878948 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:06.881710 kubelet[2671]: E0510 09:55:06.881353 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.881710 kubelet[2671]: W0510 09:55:06.881435 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.881911 kubelet[2671]: E0510 09:55:06.881816 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:06.882186 kubelet[2671]: E0510 09:55:06.882020 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.882186 kubelet[2671]: W0510 09:55:06.882032 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.882339 kubelet[2671]: E0510 09:55:06.882296 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:06.882460 kubelet[2671]: E0510 09:55:06.882322 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.882672 kubelet[2671]: W0510 09:55:06.882586 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.882881 kubelet[2671]: E0510 09:55:06.882781 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:06.883128 kubelet[2671]: E0510 09:55:06.882990 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.883128 kubelet[2671]: W0510 09:55:06.883002 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.883128 kubelet[2671]: E0510 09:55:06.883037 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:06.883329 kubelet[2671]: E0510 09:55:06.883304 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.883329 kubelet[2671]: W0510 09:55:06.883318 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.883468 kubelet[2671]: E0510 09:55:06.883384 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:06.885655 kubelet[2671]: E0510 09:55:06.885624 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.885655 kubelet[2671]: W0510 09:55:06.885639 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.885655 kubelet[2671]: E0510 09:55:06.885656 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:06.886242 kubelet[2671]: E0510 09:55:06.885919 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.886242 kubelet[2671]: W0510 09:55:06.885934 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.886242 kubelet[2671]: E0510 09:55:06.885981 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:06.886354 kubelet[2671]: E0510 09:55:06.886251 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.886354 kubelet[2671]: W0510 09:55:06.886262 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.886354 kubelet[2671]: E0510 09:55:06.886283 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:06.886565 kubelet[2671]: E0510 09:55:06.886545 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.886565 kubelet[2671]: W0510 09:55:06.886559 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.886649 kubelet[2671]: E0510 09:55:06.886575 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:06.886844 kubelet[2671]: E0510 09:55:06.886817 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.886844 kubelet[2671]: W0510 09:55:06.886832 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.886844 kubelet[2671]: E0510 09:55:06.886843 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:06.911903 containerd[1557]: time="2025-05-10T09:55:06.911862664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d557f47b6-lbjkm,Uid:9b11f50d-6f7d-429f-93c4-16a484faf094,Namespace:calico-system,Attempt:0,}" May 10 09:55:06.934225 containerd[1557]: time="2025-05-10T09:55:06.934095106Z" level=info msg="connecting to shim 355915dbc20f948ccba86aff05ee7366e143f7133aaec384312b0861fd1896b9" address="unix:///run/containerd/s/27b762f9f505800dfdf87d560dd47caf41a0aef12da00609c34f3126cf35df64" namespace=k8s.io protocol=ttrpc version=3 May 10 09:55:06.957282 containerd[1557]: time="2025-05-10T09:55:06.957246001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8rrjp,Uid:a9d17786-9f3c-49a7-8f15-46c09211ae10,Namespace:calico-system,Attempt:0,}" May 10 09:55:06.967323 systemd[1]: Started cri-containerd-355915dbc20f948ccba86aff05ee7366e143f7133aaec384312b0861fd1896b9.scope - libcontainer container 355915dbc20f948ccba86aff05ee7366e143f7133aaec384312b0861fd1896b9. 
May 10 09:55:06.971546 kubelet[2671]: E0510 09:55:06.971523 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.972575 kubelet[2671]: W0510 09:55:06.971636 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.972575 kubelet[2671]: E0510 09:55:06.972276 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:06.973040 kubelet[2671]: E0510 09:55:06.972883 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.973040 kubelet[2671]: W0510 09:55:06.972899 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.973040 kubelet[2671]: E0510 09:55:06.972931 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:06.973385 kubelet[2671]: E0510 09:55:06.973371 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.973528 kubelet[2671]: W0510 09:55:06.973441 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.973528 kubelet[2671]: E0510 09:55:06.973462 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:06.973804 kubelet[2671]: E0510 09:55:06.973784 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.973804 kubelet[2671]: W0510 09:55:06.973800 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.973876 kubelet[2671]: E0510 09:55:06.973826 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:06.974234 kubelet[2671]: E0510 09:55:06.974165 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.974234 kubelet[2671]: W0510 09:55:06.974179 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.974234 kubelet[2671]: E0510 09:55:06.974198 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:06.974750 kubelet[2671]: E0510 09:55:06.974622 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.974750 kubelet[2671]: W0510 09:55:06.974646 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.974750 kubelet[2671]: E0510 09:55:06.974710 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:06.974995 kubelet[2671]: E0510 09:55:06.974948 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.974995 kubelet[2671]: W0510 09:55:06.974959 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.975277 kubelet[2671]: E0510 09:55:06.975156 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:06.975423 kubelet[2671]: E0510 09:55:06.975410 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.975517 kubelet[2671]: W0510 09:55:06.975498 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.975675 kubelet[2671]: E0510 09:55:06.975648 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:06.976054 kubelet[2671]: E0510 09:55:06.976005 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.976164 kubelet[2671]: W0510 09:55:06.976109 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.976432 kubelet[2671]: E0510 09:55:06.976391 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:06.976638 kubelet[2671]: E0510 09:55:06.976622 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.976770 kubelet[2671]: W0510 09:55:06.976724 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.976903 kubelet[2671]: E0510 09:55:06.976867 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:06.977459 kubelet[2671]: E0510 09:55:06.977398 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.977459 kubelet[2671]: W0510 09:55:06.977411 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.977459 kubelet[2671]: E0510 09:55:06.977446 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:06.978174 kubelet[2671]: E0510 09:55:06.978023 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.978174 kubelet[2671]: W0510 09:55:06.978036 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.978174 kubelet[2671]: E0510 09:55:06.978094 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:06.978756 kubelet[2671]: E0510 09:55:06.978734 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.978825 kubelet[2671]: W0510 09:55:06.978813 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.979099 kubelet[2671]: E0510 09:55:06.979047 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:06.979411 kubelet[2671]: E0510 09:55:06.979210 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.979411 kubelet[2671]: W0510 09:55:06.979220 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.979411 kubelet[2671]: E0510 09:55:06.979275 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:06.979726 kubelet[2671]: E0510 09:55:06.979698 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.979726 kubelet[2671]: W0510 09:55:06.979709 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.980000 kubelet[2671]: E0510 09:55:06.979900 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:06.980397 kubelet[2671]: E0510 09:55:06.980234 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.980397 kubelet[2671]: W0510 09:55:06.980250 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.980807 kubelet[2671]: E0510 09:55:06.980538 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:06.980807 kubelet[2671]: E0510 09:55:06.980766 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.980807 kubelet[2671]: W0510 09:55:06.980776 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.981051 kubelet[2671]: E0510 09:55:06.980985 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:06.982345 kubelet[2671]: E0510 09:55:06.981638 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.982345 kubelet[2671]: W0510 09:55:06.981650 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.982407 containerd[1557]: time="2025-05-10T09:55:06.982155602Z" level=info msg="connecting to shim 6d08e02b60ae1b7f49cb21660105748b457bb6fc3d3dca7957d0097db56a1422" address="unix:///run/containerd/s/e551fdcce16118e60663994a0db3d2b5524e9d0b3fcf73123ccb56ae3bf40ec8" namespace=k8s.io protocol=ttrpc version=3 May 10 09:55:06.982835 kubelet[2671]: E0510 09:55:06.982648 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:06.982835 kubelet[2671]: E0510 09:55:06.982782 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.982835 kubelet[2671]: W0510 09:55:06.982790 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.983056 kubelet[2671]: E0510 09:55:06.983041 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:06.983583 kubelet[2671]: E0510 09:55:06.983467 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.983583 kubelet[2671]: W0510 09:55:06.983480 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.983682 kubelet[2671]: E0510 09:55:06.983666 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:06.984992 kubelet[2671]: E0510 09:55:06.984960 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.984992 kubelet[2671]: W0510 09:55:06.984987 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.985169 kubelet[2671]: E0510 09:55:06.985106 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:06.985323 kubelet[2671]: E0510 09:55:06.985299 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.985323 kubelet[2671]: W0510 09:55:06.985315 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.985433 kubelet[2671]: E0510 09:55:06.985408 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:06.985635 kubelet[2671]: E0510 09:55:06.985592 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.985733 kubelet[2671]: W0510 09:55:06.985632 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.986184 kubelet[2671]: E0510 09:55:06.986001 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.986184 kubelet[2671]: W0510 09:55:06.986021 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.986184 kubelet[2671]: E0510 09:55:06.986042 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:06.986184 kubelet[2671]: E0510 09:55:06.986101 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:06.986601 kubelet[2671]: E0510 09:55:06.986538 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.986601 kubelet[2671]: W0510 09:55:06.986560 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.986601 kubelet[2671]: E0510 09:55:06.986573 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:06.996407 kubelet[2671]: E0510 09:55:06.996377 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:06.996407 kubelet[2671]: W0510 09:55:06.996402 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:06.996571 kubelet[2671]: E0510 09:55:06.996421 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:07.008322 systemd[1]: Started cri-containerd-6d08e02b60ae1b7f49cb21660105748b457bb6fc3d3dca7957d0097db56a1422.scope - libcontainer container 6d08e02b60ae1b7f49cb21660105748b457bb6fc3d3dca7957d0097db56a1422. 
May 10 09:55:07.047920 containerd[1557]: time="2025-05-10T09:55:07.047860648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d557f47b6-lbjkm,Uid:9b11f50d-6f7d-429f-93c4-16a484faf094,Namespace:calico-system,Attempt:0,} returns sandbox id \"355915dbc20f948ccba86aff05ee7366e143f7133aaec384312b0861fd1896b9\"" May 10 09:55:07.049559 containerd[1557]: time="2025-05-10T09:55:07.049514874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 10 09:55:07.050014 containerd[1557]: time="2025-05-10T09:55:07.049979776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8rrjp,Uid:a9d17786-9f3c-49a7-8f15-46c09211ae10,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d08e02b60ae1b7f49cb21660105748b457bb6fc3d3dca7957d0097db56a1422\"" May 10 09:55:08.459793 kubelet[2671]: E0510 09:55:08.459729 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-82ncp" podUID="d08ba91f-1a46-45d6-a939-9e2d9f0457e9" May 10 09:55:09.693914 containerd[1557]: time="2025-05-10T09:55:09.693862094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:55:09.694609 containerd[1557]: time="2025-05-10T09:55:09.694584823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 10 09:55:09.695660 containerd[1557]: time="2025-05-10T09:55:09.695621728Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:55:09.697593 containerd[1557]: time="2025-05-10T09:55:09.697556183Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:55:09.698088 containerd[1557]: time="2025-05-10T09:55:09.698060699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.648501059s" May 10 09:55:09.698163 containerd[1557]: time="2025-05-10T09:55:09.698087198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 10 09:55:09.699168 containerd[1557]: time="2025-05-10T09:55:09.698932149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 10 09:55:09.707201 containerd[1557]: time="2025-05-10T09:55:09.707156080Z" level=info msg="CreateContainer within sandbox \"355915dbc20f948ccba86aff05ee7366e143f7133aaec384312b0861fd1896b9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 10 09:55:09.716742 containerd[1557]: time="2025-05-10T09:55:09.716700060Z" level=info msg="Container df1800e31097fa228b7cac02ec330278541e606d9f77391387cd6e123bf293c2: CDI devices from CRI Config.CDIDevices: []" May 10 09:55:09.728547 containerd[1557]: time="2025-05-10T09:55:09.728471901Z" level=info msg="CreateContainer within sandbox \"355915dbc20f948ccba86aff05ee7366e143f7133aaec384312b0861fd1896b9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"df1800e31097fa228b7cac02ec330278541e606d9f77391387cd6e123bf293c2\"" May 10 09:55:09.729151 containerd[1557]: time="2025-05-10T09:55:09.729102154Z" level=info msg="StartContainer for 
\"df1800e31097fa228b7cac02ec330278541e606d9f77391387cd6e123bf293c2\"" May 10 09:55:09.730442 containerd[1557]: time="2025-05-10T09:55:09.730396928Z" level=info msg="connecting to shim df1800e31097fa228b7cac02ec330278541e606d9f77391387cd6e123bf293c2" address="unix:///run/containerd/s/27b762f9f505800dfdf87d560dd47caf41a0aef12da00609c34f3126cf35df64" protocol=ttrpc version=3 May 10 09:55:09.754289 systemd[1]: Started cri-containerd-df1800e31097fa228b7cac02ec330278541e606d9f77391387cd6e123bf293c2.scope - libcontainer container df1800e31097fa228b7cac02ec330278541e606d9f77391387cd6e123bf293c2. May 10 09:55:09.823050 containerd[1557]: time="2025-05-10T09:55:09.822989429Z" level=info msg="StartContainer for \"df1800e31097fa228b7cac02ec330278541e606d9f77391387cd6e123bf293c2\" returns successfully" May 10 09:55:10.459718 kubelet[2671]: E0510 09:55:10.459647 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-82ncp" podUID="d08ba91f-1a46-45d6-a939-9e2d9f0457e9" May 10 09:55:10.562489 kubelet[2671]: E0510 09:55:10.562424 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:10.562489 kubelet[2671]: W0510 09:55:10.562458 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:10.562489 kubelet[2671]: E0510 09:55:10.562498 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:10.562767 kubelet[2671]: E0510 09:55:10.562741 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:10.562767 kubelet[2671]: W0510 09:55:10.562754 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:10.562767 kubelet[2671]: E0510 09:55:10.562765 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:10.562992 kubelet[2671]: E0510 09:55:10.562974 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:10.562992 kubelet[2671]: W0510 09:55:10.562984 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:10.562992 kubelet[2671]: E0510 09:55:10.562993 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:10.563401 kubelet[2671]: E0510 09:55:10.563373 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:10.563401 kubelet[2671]: W0510 09:55:10.563389 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:10.563401 kubelet[2671]: E0510 09:55:10.563401 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:10.563637 kubelet[2671]: E0510 09:55:10.563614 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:10.563637 kubelet[2671]: W0510 09:55:10.563624 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:10.563637 kubelet[2671]: E0510 09:55:10.563635 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:10.563873 kubelet[2671]: E0510 09:55:10.563848 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:10.563873 kubelet[2671]: W0510 09:55:10.563858 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:10.563873 kubelet[2671]: E0510 09:55:10.563866 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:10.564125 kubelet[2671]: E0510 09:55:10.564087 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:10.564125 kubelet[2671]: W0510 09:55:10.564098 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:10.564125 kubelet[2671]: E0510 09:55:10.564106 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:10.564579 kubelet[2671]: E0510 09:55:10.564560 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:10.564579 kubelet[2671]: W0510 09:55:10.564575 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:10.564639 kubelet[2671]: E0510 09:55:10.564586 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:10.564787 kubelet[2671]: E0510 09:55:10.564771 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:10.564787 kubelet[2671]: W0510 09:55:10.564783 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:10.564839 kubelet[2671]: E0510 09:55:10.564791 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:10.564970 kubelet[2671]: E0510 09:55:10.564950 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:10.564970 kubelet[2671]: W0510 09:55:10.564961 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:10.564970 kubelet[2671]: E0510 09:55:10.564969 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:10.565206 kubelet[2671]: E0510 09:55:10.565191 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:10.565206 kubelet[2671]: W0510 09:55:10.565202 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:10.565264 kubelet[2671]: E0510 09:55:10.565212 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:10.565399 kubelet[2671]: E0510 09:55:10.565383 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:10.565399 kubelet[2671]: W0510 09:55:10.565394 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:10.565399 kubelet[2671]: E0510 09:55:10.565401 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:10.565607 kubelet[2671]: E0510 09:55:10.565592 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:10.565607 kubelet[2671]: W0510 09:55:10.565603 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:10.565675 kubelet[2671]: E0510 09:55:10.565612 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 09:55:10.565806 kubelet[2671]: E0510 09:55:10.565787 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:10.565806 kubelet[2671]: W0510 09:55:10.565798 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:10.565806 kubelet[2671]: E0510 09:55:10.565806 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 09:55:10.566014 kubelet[2671]: E0510 09:55:10.565995 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 09:55:10.566014 kubelet[2671]: W0510 09:55:10.566007 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 09:55:10.566014 kubelet[2671]: E0510 09:55:10.566015 2671 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [... identical FlexVolume "nodeagent~uds" init failure (driver-call.go:262 / driver-call.go:149 / plugins.go:691) repeated through May 10 09:55:10.607 ...] May 10 09:55:11.519387 kubelet[2671]: I0510 09:55:11.519334 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" [... same FlexVolume "nodeagent~uds" init failure repeated through May 10 09:55:11.615 ...] May 10 09:55:11.708228 containerd[1557]: time="2025-05-10T09:55:11.708127635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:55:11.709325 containerd[1557]: time="2025-05-10T09:55:11.709252003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 10 09:55:11.711107 containerd[1557]: time="2025-05-10T09:55:11.711070826Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:55:11.713899 containerd[1557]: time="2025-05-10T09:55:11.713845689Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:55:11.714440 containerd[1557]: time="2025-05-10T09:55:11.714398945Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 2.015435156s" May 10 09:55:11.714500 containerd[1557]: time="2025-05-10T09:55:11.714436326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 10 09:55:11.716874 containerd[1557]: time="2025-05-10T09:55:11.716840827Z" level=info msg="CreateContainer within sandbox \"6d08e02b60ae1b7f49cb21660105748b457bb6fc3d3dca7957d0097db56a1422\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 10 09:55:11.727792 containerd[1557]: time="2025-05-10T09:55:11.727735860Z" level=info msg="Container 581ebe680e10612a51157d1842befbddabf3fbdea76dbe2ac5520f347428fe9c: CDI devices from CRI Config.CDIDevices: []" May 10 09:55:11.737789 containerd[1557]: time="2025-05-10T09:55:11.737726050Z" level=info msg="CreateContainer within sandbox \"6d08e02b60ae1b7f49cb21660105748b457bb6fc3d3dca7957d0097db56a1422\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"581ebe680e10612a51157d1842befbddabf3fbdea76dbe2ac5520f347428fe9c\"" May 10 09:55:11.738291 containerd[1557]: time="2025-05-10T09:55:11.738245573Z" level=info msg="StartContainer for \"581ebe680e10612a51157d1842befbddabf3fbdea76dbe2ac5520f347428fe9c\"" May 10 09:55:11.739675 containerd[1557]: time="2025-05-10T09:55:11.739644781Z" level=info msg="connecting to shim 581ebe680e10612a51157d1842befbddabf3fbdea76dbe2ac5520f347428fe9c" address="unix:///run/containerd/s/e551fdcce16118e60663994a0db3d2b5524e9d0b3fcf73123ccb56ae3bf40ec8" protocol=ttrpc version=3 May 10 09:55:11.764332 systemd[1]: Started cri-containerd-581ebe680e10612a51157d1842befbddabf3fbdea76dbe2ac5520f347428fe9c.scope - libcontainer container 
581ebe680e10612a51157d1842befbddabf3fbdea76dbe2ac5520f347428fe9c. May 10 09:55:11.817395 containerd[1557]: time="2025-05-10T09:55:11.817173880Z" level=info msg="StartContainer for \"581ebe680e10612a51157d1842befbddabf3fbdea76dbe2ac5520f347428fe9c\" returns successfully" May 10 09:55:11.829629 systemd[1]: cri-containerd-581ebe680e10612a51157d1842befbddabf3fbdea76dbe2ac5520f347428fe9c.scope: Deactivated successfully. May 10 09:55:11.832173 containerd[1557]: time="2025-05-10T09:55:11.831568227Z" level=info msg="received exit event container_id:\"581ebe680e10612a51157d1842befbddabf3fbdea76dbe2ac5520f347428fe9c\" id:\"581ebe680e10612a51157d1842befbddabf3fbdea76dbe2ac5520f347428fe9c\" pid:3347 exited_at:{seconds:1746870911 nanos:830742935}" May 10 09:55:11.832173 containerd[1557]: time="2025-05-10T09:55:11.831701590Z" level=info msg="TaskExit event in podsandbox handler container_id:\"581ebe680e10612a51157d1842befbddabf3fbdea76dbe2ac5520f347428fe9c\" id:\"581ebe680e10612a51157d1842befbddabf3fbdea76dbe2ac5520f347428fe9c\" pid:3347 exited_at:{seconds:1746870911 nanos:830742935}" May 10 09:55:11.861074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-581ebe680e10612a51157d1842befbddabf3fbdea76dbe2ac5520f347428fe9c-rootfs.mount: Deactivated successfully. 
May 10 09:55:12.460399 kubelet[2671]: E0510 09:55:12.460338 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-82ncp" podUID="d08ba91f-1a46-45d6-a939-9e2d9f0457e9" May 10 09:55:12.525055 containerd[1557]: time="2025-05-10T09:55:12.524986604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 10 09:55:12.543086 kubelet[2671]: I0510 09:55:12.542494 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7d557f47b6-lbjkm" podStartSLOduration=3.892908066 podStartE2EDuration="6.542474394s" podCreationTimestamp="2025-05-10 09:55:06 +0000 UTC" firstStartedPulling="2025-05-10 09:55:07.049214795 +0000 UTC m=+15.672818581" lastFinishedPulling="2025-05-10 09:55:09.698781123 +0000 UTC m=+18.322384909" observedRunningTime="2025-05-10 09:55:10.540825412 +0000 UTC m=+19.164429198" watchObservedRunningTime="2025-05-10 09:55:12.542474394 +0000 UTC m=+21.166078180" May 10 09:55:14.460132 kubelet[2671]: E0510 09:55:14.460054 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-82ncp" podUID="d08ba91f-1a46-45d6-a939-9e2d9f0457e9" May 10 09:55:16.460353 kubelet[2671]: E0510 09:55:16.460239 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-82ncp" podUID="d08ba91f-1a46-45d6-a939-9e2d9f0457e9" May 10 09:55:17.990866 containerd[1557]: time="2025-05-10T09:55:17.990787150Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:55:17.997182 containerd[1557]: time="2025-05-10T09:55:17.997097450Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 10 09:55:17.998813 containerd[1557]: time="2025-05-10T09:55:17.998760121Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:55:18.001101 containerd[1557]: time="2025-05-10T09:55:18.001052131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:55:18.001818 containerd[1557]: time="2025-05-10T09:55:18.001789323Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 5.476756171s" May 10 09:55:18.001818 containerd[1557]: time="2025-05-10T09:55:18.001819179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 10 09:55:18.018103 containerd[1557]: time="2025-05-10T09:55:18.018048630Z" level=info msg="CreateContainer within sandbox \"6d08e02b60ae1b7f49cb21660105748b457bb6fc3d3dca7957d0097db56a1422\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 10 09:55:18.029733 containerd[1557]: time="2025-05-10T09:55:18.029670257Z" level=info msg="Container 2c37d7bfaa776d7a8a1e680df7890e7732a294936fb463ce31aab7b25af46ec4: CDI devices from CRI 
Config.CDIDevices: []" May 10 09:55:18.044098 containerd[1557]: time="2025-05-10T09:55:18.044038011Z" level=info msg="CreateContainer within sandbox \"6d08e02b60ae1b7f49cb21660105748b457bb6fc3d3dca7957d0097db56a1422\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2c37d7bfaa776d7a8a1e680df7890e7732a294936fb463ce31aab7b25af46ec4\"" May 10 09:55:18.047330 containerd[1557]: time="2025-05-10T09:55:18.047251351Z" level=info msg="StartContainer for \"2c37d7bfaa776d7a8a1e680df7890e7732a294936fb463ce31aab7b25af46ec4\"" May 10 09:55:18.051218 containerd[1557]: time="2025-05-10T09:55:18.051083998Z" level=info msg="connecting to shim 2c37d7bfaa776d7a8a1e680df7890e7732a294936fb463ce31aab7b25af46ec4" address="unix:///run/containerd/s/e551fdcce16118e60663994a0db3d2b5524e9d0b3fcf73123ccb56ae3bf40ec8" protocol=ttrpc version=3 May 10 09:55:18.082352 systemd[1]: Started cri-containerd-2c37d7bfaa776d7a8a1e680df7890e7732a294936fb463ce31aab7b25af46ec4.scope - libcontainer container 2c37d7bfaa776d7a8a1e680df7890e7732a294936fb463ce31aab7b25af46ec4. 
May 10 09:55:18.168959 containerd[1557]: time="2025-05-10T09:55:18.168902234Z" level=info msg="StartContainer for \"2c37d7bfaa776d7a8a1e680df7890e7732a294936fb463ce31aab7b25af46ec4\" returns successfully" May 10 09:55:18.622791 kubelet[2671]: E0510 09:55:18.622656 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-82ncp" podUID="d08ba91f-1a46-45d6-a939-9e2d9f0457e9" May 10 09:55:19.631302 containerd[1557]: time="2025-05-10T09:55:19.631235720Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 09:55:19.635307 systemd[1]: cri-containerd-2c37d7bfaa776d7a8a1e680df7890e7732a294936fb463ce31aab7b25af46ec4.scope: Deactivated successfully. May 10 09:55:19.635696 systemd[1]: cri-containerd-2c37d7bfaa776d7a8a1e680df7890e7732a294936fb463ce31aab7b25af46ec4.scope: Consumed 684ms CPU time, 160.7M memory peak, 56K read from disk, 154M written to disk. 
May 10 09:55:19.635946 containerd[1557]: time="2025-05-10T09:55:19.635715829Z" level=info msg="received exit event container_id:\"2c37d7bfaa776d7a8a1e680df7890e7732a294936fb463ce31aab7b25af46ec4\" id:\"2c37d7bfaa776d7a8a1e680df7890e7732a294936fb463ce31aab7b25af46ec4\" pid:3406 exited_at:{seconds:1746870919 nanos:635509799}" May 10 09:55:19.635946 containerd[1557]: time="2025-05-10T09:55:19.635803534Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c37d7bfaa776d7a8a1e680df7890e7732a294936fb463ce31aab7b25af46ec4\" id:\"2c37d7bfaa776d7a8a1e680df7890e7732a294936fb463ce31aab7b25af46ec4\" pid:3406 exited_at:{seconds:1746870919 nanos:635509799}" May 10 09:55:19.658544 kubelet[2671]: I0510 09:55:19.658502 2671 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 10 09:55:19.665176 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c37d7bfaa776d7a8a1e680df7890e7732a294936fb463ce31aab7b25af46ec4-rootfs.mount: Deactivated successfully. May 10 09:55:19.705650 systemd[1]: Created slice kubepods-burstable-pod88cdf3c5_5b30_4a73_9732_439df695416e.slice - libcontainer container kubepods-burstable-pod88cdf3c5_5b30_4a73_9732_439df695416e.slice. May 10 09:55:19.713475 systemd[1]: Created slice kubepods-besteffort-pod1c02929c_f139_4177_9944_e4b3eea9b0b0.slice - libcontainer container kubepods-besteffort-pod1c02929c_f139_4177_9944_e4b3eea9b0b0.slice. May 10 09:55:19.720267 systemd[1]: Created slice kubepods-burstable-pod19d8f44f_2085_4ef7_b080_05c9543f3869.slice - libcontainer container kubepods-burstable-pod19d8f44f_2085_4ef7_b080_05c9543f3869.slice. May 10 09:55:19.725236 systemd[1]: Created slice kubepods-besteffort-pod21335bd2_c0e3_4861_ab59_7e861cd41c21.slice - libcontainer container kubepods-besteffort-pod21335bd2_c0e3_4861_ab59_7e861cd41c21.slice. 
May 10 09:55:19.730638 systemd[1]: Created slice kubepods-besteffort-pod15572e0d_4f88_4d8f_84b3_bdf66f3f1702.slice - libcontainer container kubepods-besteffort-pod15572e0d_4f88_4d8f_84b3_bdf66f3f1702.slice. May 10 09:55:19.763869 kubelet[2671]: I0510 09:55:19.763799 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88cdf3c5-5b30-4a73-9732-439df695416e-config-volume\") pod \"coredns-6f6b679f8f-rcnv8\" (UID: \"88cdf3c5-5b30-4a73-9732-439df695416e\") " pod="kube-system/coredns-6f6b679f8f-rcnv8" May 10 09:55:19.763869 kubelet[2671]: I0510 09:55:19.763859 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ckzd\" (UniqueName: \"kubernetes.io/projected/19d8f44f-2085-4ef7-b080-05c9543f3869-kube-api-access-5ckzd\") pod \"coredns-6f6b679f8f-9w4bk\" (UID: \"19d8f44f-2085-4ef7-b080-05c9543f3869\") " pod="kube-system/coredns-6f6b679f8f-9w4bk" May 10 09:55:19.764122 kubelet[2671]: I0510 09:55:19.763889 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4w2c\" (UniqueName: \"kubernetes.io/projected/21335bd2-c0e3-4861-ab59-7e861cd41c21-kube-api-access-z4w2c\") pod \"calico-apiserver-949b84484-t574n\" (UID: \"21335bd2-c0e3-4861-ab59-7e861cd41c21\") " pod="calico-apiserver/calico-apiserver-949b84484-t574n" May 10 09:55:19.764122 kubelet[2671]: I0510 09:55:19.763953 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdjj6\" (UniqueName: \"kubernetes.io/projected/1c02929c-f139-4177-9944-e4b3eea9b0b0-kube-api-access-vdjj6\") pod \"calico-kube-controllers-6fbf9d7d57-x7rpn\" (UID: \"1c02929c-f139-4177-9944-e4b3eea9b0b0\") " pod="calico-system/calico-kube-controllers-6fbf9d7d57-x7rpn" May 10 09:55:19.764122 kubelet[2671]: I0510 09:55:19.764006 2671 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c02929c-f139-4177-9944-e4b3eea9b0b0-tigera-ca-bundle\") pod \"calico-kube-controllers-6fbf9d7d57-x7rpn\" (UID: \"1c02929c-f139-4177-9944-e4b3eea9b0b0\") " pod="calico-system/calico-kube-controllers-6fbf9d7d57-x7rpn" May 10 09:55:19.764122 kubelet[2671]: I0510 09:55:19.764022 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19d8f44f-2085-4ef7-b080-05c9543f3869-config-volume\") pod \"coredns-6f6b679f8f-9w4bk\" (UID: \"19d8f44f-2085-4ef7-b080-05c9543f3869\") " pod="kube-system/coredns-6f6b679f8f-9w4bk" May 10 09:55:19.764122 kubelet[2671]: I0510 09:55:19.764038 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pxmn\" (UniqueName: \"kubernetes.io/projected/88cdf3c5-5b30-4a73-9732-439df695416e-kube-api-access-2pxmn\") pod \"coredns-6f6b679f8f-rcnv8\" (UID: \"88cdf3c5-5b30-4a73-9732-439df695416e\") " pod="kube-system/coredns-6f6b679f8f-rcnv8" May 10 09:55:19.764282 kubelet[2671]: I0510 09:55:19.764055 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/15572e0d-4f88-4d8f-84b3-bdf66f3f1702-calico-apiserver-certs\") pod \"calico-apiserver-949b84484-dd82b\" (UID: \"15572e0d-4f88-4d8f-84b3-bdf66f3f1702\") " pod="calico-apiserver/calico-apiserver-949b84484-dd82b" May 10 09:55:19.764282 kubelet[2671]: I0510 09:55:19.764071 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/21335bd2-c0e3-4861-ab59-7e861cd41c21-calico-apiserver-certs\") pod \"calico-apiserver-949b84484-t574n\" (UID: \"21335bd2-c0e3-4861-ab59-7e861cd41c21\") " 
pod="calico-apiserver/calico-apiserver-949b84484-t574n" May 10 09:55:19.764282 kubelet[2671]: I0510 09:55:19.764090 2671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj5jm\" (UniqueName: \"kubernetes.io/projected/15572e0d-4f88-4d8f-84b3-bdf66f3f1702-kube-api-access-nj5jm\") pod \"calico-apiserver-949b84484-dd82b\" (UID: \"15572e0d-4f88-4d8f-84b3-bdf66f3f1702\") " pod="calico-apiserver/calico-apiserver-949b84484-dd82b" May 10 09:55:19.878601 kubelet[2671]: I0510 09:55:19.878500 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 09:55:20.010568 containerd[1557]: time="2025-05-10T09:55:20.010390482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rcnv8,Uid:88cdf3c5-5b30-4a73-9732-439df695416e,Namespace:kube-system,Attempt:0,}" May 10 09:55:20.017438 containerd[1557]: time="2025-05-10T09:55:20.017384605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fbf9d7d57-x7rpn,Uid:1c02929c-f139-4177-9944-e4b3eea9b0b0,Namespace:calico-system,Attempt:0,}" May 10 09:55:20.024606 containerd[1557]: time="2025-05-10T09:55:20.024005072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9w4bk,Uid:19d8f44f-2085-4ef7-b080-05c9543f3869,Namespace:kube-system,Attempt:0,}" May 10 09:55:20.075215 containerd[1557]: time="2025-05-10T09:55:20.075158381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-949b84484-dd82b,Uid:15572e0d-4f88-4d8f-84b3-bdf66f3f1702,Namespace:calico-apiserver,Attempt:0,}" May 10 09:55:20.076296 containerd[1557]: time="2025-05-10T09:55:20.076252567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-949b84484-t574n,Uid:21335bd2-c0e3-4861-ab59-7e861cd41c21,Namespace:calico-apiserver,Attempt:0,}" May 10 09:55:20.092741 containerd[1557]: time="2025-05-10T09:55:20.092673684Z" level=error msg="Failed to destroy network for 
sandbox \"9922e5e05c53129e10939ede4c3e6682ab4605bbbbc966679401392ed778c415\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 09:55:20.094489 containerd[1557]: time="2025-05-10T09:55:20.094450999Z" level=error msg="Failed to destroy network for sandbox \"6f098bf250653929b347d9a38d7f2769af0fd0527485d587ff35599f46550613\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 09:55:20.096039 containerd[1557]: time="2025-05-10T09:55:20.095984374Z" level=error msg="Failed to destroy network for sandbox \"dd65cc59f1e5449cd4ffe35b9552fbfb7727a5113f7752998de0308d122b7faa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 09:55:20.263668 containerd[1557]: time="2025-05-10T09:55:20.263488171Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9w4bk,Uid:19d8f44f-2085-4ef7-b080-05c9543f3869,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9922e5e05c53129e10939ede4c3e6682ab4605bbbbc966679401392ed778c415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 09:55:20.264617 kubelet[2671]: E0510 09:55:20.264553 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9922e5e05c53129e10939ede4c3e6682ab4605bbbbc966679401392ed778c415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" May 10 09:55:20.264742 kubelet[2671]: E0510 09:55:20.264659 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9922e5e05c53129e10939ede4c3e6682ab4605bbbbc966679401392ed778c415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-9w4bk" May 10 09:55:20.264742 kubelet[2671]: E0510 09:55:20.264683 2671 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9922e5e05c53129e10939ede4c3e6682ab4605bbbbc966679401392ed778c415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-9w4bk" May 10 09:55:20.264796 kubelet[2671]: E0510 09:55:20.264733 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-9w4bk_kube-system(19d8f44f-2085-4ef7-b080-05c9543f3869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-9w4bk_kube-system(19d8f44f-2085-4ef7-b080-05c9543f3869)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9922e5e05c53129e10939ede4c3e6682ab4605bbbbc966679401392ed778c415\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-9w4bk" podUID="19d8f44f-2085-4ef7-b080-05c9543f3869" May 10 09:55:20.298635 containerd[1557]: time="2025-05-10T09:55:20.298526160Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-rcnv8,Uid:88cdf3c5-5b30-4a73-9732-439df695416e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f098bf250653929b347d9a38d7f2769af0fd0527485d587ff35599f46550613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 09:55:20.298897 kubelet[2671]: E0510 09:55:20.298840 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f098bf250653929b347d9a38d7f2769af0fd0527485d587ff35599f46550613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 09:55:20.298982 kubelet[2671]: E0510 09:55:20.298927 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f098bf250653929b347d9a38d7f2769af0fd0527485d587ff35599f46550613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-rcnv8" May 10 09:55:20.298982 kubelet[2671]: E0510 09:55:20.298950 2671 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f098bf250653929b347d9a38d7f2769af0fd0527485d587ff35599f46550613\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-rcnv8" May 10 09:55:20.299228 kubelet[2671]: E0510 09:55:20.299004 2671 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-rcnv8_kube-system(88cdf3c5-5b30-4a73-9732-439df695416e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-rcnv8_kube-system(88cdf3c5-5b30-4a73-9732-439df695416e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f098bf250653929b347d9a38d7f2769af0fd0527485d587ff35599f46550613\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-rcnv8" podUID="88cdf3c5-5b30-4a73-9732-439df695416e" May 10 09:55:20.320326 containerd[1557]: time="2025-05-10T09:55:20.320226372Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fbf9d7d57-x7rpn,Uid:1c02929c-f139-4177-9944-e4b3eea9b0b0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd65cc59f1e5449cd4ffe35b9552fbfb7727a5113f7752998de0308d122b7faa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 09:55:20.320543 kubelet[2671]: E0510 09:55:20.320518 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd65cc59f1e5449cd4ffe35b9552fbfb7727a5113f7752998de0308d122b7faa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 09:55:20.320602 kubelet[2671]: E0510 09:55:20.320569 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd65cc59f1e5449cd4ffe35b9552fbfb7727a5113f7752998de0308d122b7faa\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fbf9d7d57-x7rpn" May 10 09:55:20.320602 kubelet[2671]: E0510 09:55:20.320590 2671 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd65cc59f1e5449cd4ffe35b9552fbfb7727a5113f7752998de0308d122b7faa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fbf9d7d57-x7rpn" May 10 09:55:20.320682 kubelet[2671]: E0510 09:55:20.320631 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fbf9d7d57-x7rpn_calico-system(1c02929c-f139-4177-9944-e4b3eea9b0b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6fbf9d7d57-x7rpn_calico-system(1c02929c-f139-4177-9944-e4b3eea9b0b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd65cc59f1e5449cd4ffe35b9552fbfb7727a5113f7752998de0308d122b7faa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fbf9d7d57-x7rpn" podUID="1c02929c-f139-4177-9944-e4b3eea9b0b0" May 10 09:55:20.395161 containerd[1557]: time="2025-05-10T09:55:20.395071654Z" level=error msg="Failed to destroy network for sandbox \"2c9c194dcc0886bc56203bfef5635359cdb81634da24b16181753e051f1bde48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 
09:55:20.417517 containerd[1557]: time="2025-05-10T09:55:20.417446229Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-949b84484-dd82b,Uid:15572e0d-4f88-4d8f-84b3-bdf66f3f1702,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c9c194dcc0886bc56203bfef5635359cdb81634da24b16181753e051f1bde48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 09:55:20.417776 kubelet[2671]: E0510 09:55:20.417728 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c9c194dcc0886bc56203bfef5635359cdb81634da24b16181753e051f1bde48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 09:55:20.417840 kubelet[2671]: E0510 09:55:20.417798 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c9c194dcc0886bc56203bfef5635359cdb81634da24b16181753e051f1bde48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-949b84484-dd82b" May 10 09:55:20.417840 kubelet[2671]: E0510 09:55:20.417819 2671 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c9c194dcc0886bc56203bfef5635359cdb81634da24b16181753e051f1bde48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-949b84484-dd82b" May 10 09:55:20.418027 kubelet[2671]: E0510 09:55:20.417875 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-949b84484-dd82b_calico-apiserver(15572e0d-4f88-4d8f-84b3-bdf66f3f1702)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-949b84484-dd82b_calico-apiserver(15572e0d-4f88-4d8f-84b3-bdf66f3f1702)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c9c194dcc0886bc56203bfef5635359cdb81634da24b16181753e051f1bde48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-949b84484-dd82b" podUID="15572e0d-4f88-4d8f-84b3-bdf66f3f1702" May 10 09:55:20.427040 containerd[1557]: time="2025-05-10T09:55:20.426968984Z" level=error msg="Failed to destroy network for sandbox \"5e418cb7ca941d9bbae245ca5c1b28cae2c6e5c264af740a583c941b4a9cf030\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 09:55:20.467374 systemd[1]: Created slice kubepods-besteffort-podd08ba91f_1a46_45d6_a939_9e2d9f0457e9.slice - libcontainer container kubepods-besteffort-podd08ba91f_1a46_45d6_a939_9e2d9f0457e9.slice. 
May 10 09:55:20.470313 containerd[1557]: time="2025-05-10T09:55:20.470256446Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-949b84484-t574n,Uid:21335bd2-c0e3-4861-ab59-7e861cd41c21,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e418cb7ca941d9bbae245ca5c1b28cae2c6e5c264af740a583c941b4a9cf030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 09:55:20.470313 containerd[1557]: time="2025-05-10T09:55:20.470308985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-82ncp,Uid:d08ba91f-1a46-45d6-a939-9e2d9f0457e9,Namespace:calico-system,Attempt:0,}" May 10 09:55:20.470553 kubelet[2671]: E0510 09:55:20.470478 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e418cb7ca941d9bbae245ca5c1b28cae2c6e5c264af740a583c941b4a9cf030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 09:55:20.470553 kubelet[2671]: E0510 09:55:20.470563 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e418cb7ca941d9bbae245ca5c1b28cae2c6e5c264af740a583c941b4a9cf030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-949b84484-t574n" May 10 09:55:20.470677 kubelet[2671]: E0510 09:55:20.470582 2671 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5e418cb7ca941d9bbae245ca5c1b28cae2c6e5c264af740a583c941b4a9cf030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-949b84484-t574n" May 10 09:55:20.470677 kubelet[2671]: E0510 09:55:20.470619 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-949b84484-t574n_calico-apiserver(21335bd2-c0e3-4861-ab59-7e861cd41c21)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-949b84484-t574n_calico-apiserver(21335bd2-c0e3-4861-ab59-7e861cd41c21)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e418cb7ca941d9bbae245ca5c1b28cae2c6e5c264af740a583c941b4a9cf030\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-949b84484-t574n" podUID="21335bd2-c0e3-4861-ab59-7e861cd41c21" May 10 09:55:20.582409 containerd[1557]: time="2025-05-10T09:55:20.582313689Z" level=error msg="Failed to destroy network for sandbox \"3a8a068cedef017392bcf61614dda422cdf1903ef1a33035f9ba6f405503b88b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 09:55:20.584423 containerd[1557]: time="2025-05-10T09:55:20.584369150Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-82ncp,Uid:d08ba91f-1a46-45d6-a939-9e2d9f0457e9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a8a068cedef017392bcf61614dda422cdf1903ef1a33035f9ba6f405503b88b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 09:55:20.584766 kubelet[2671]: E0510 09:55:20.584685 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a8a068cedef017392bcf61614dda422cdf1903ef1a33035f9ba6f405503b88b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 09:55:20.584869 kubelet[2671]: E0510 09:55:20.584770 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a8a068cedef017392bcf61614dda422cdf1903ef1a33035f9ba6f405503b88b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-82ncp" May 10 09:55:20.584869 kubelet[2671]: E0510 09:55:20.584797 2671 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a8a068cedef017392bcf61614dda422cdf1903ef1a33035f9ba6f405503b88b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-82ncp" May 10 09:55:20.584950 kubelet[2671]: E0510 09:55:20.584857 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-82ncp_calico-system(d08ba91f-1a46-45d6-a939-9e2d9f0457e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-82ncp_calico-system(d08ba91f-1a46-45d6-a939-9e2d9f0457e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"3a8a068cedef017392bcf61614dda422cdf1903ef1a33035f9ba6f405503b88b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-82ncp" podUID="d08ba91f-1a46-45d6-a939-9e2d9f0457e9" May 10 09:55:20.636828 containerd[1557]: time="2025-05-10T09:55:20.636784511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 10 09:55:21.546300 systemd[1]: Started sshd@7-10.0.0.34:22-10.0.0.1:35308.service - OpenSSH per-connection server daemon (10.0.0.1:35308). May 10 09:55:21.603023 sshd[3663]: Accepted publickey for core from 10.0.0.1 port 35308 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs May 10 09:55:21.604644 sshd-session[3663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 09:55:21.609236 systemd-logind[1541]: New session 8 of user core. May 10 09:55:21.617260 systemd[1]: Started session-8.scope - Session 8 of User core. May 10 09:55:21.738950 sshd[3665]: Connection closed by 10.0.0.1 port 35308 May 10 09:55:21.739398 sshd-session[3663]: pam_unix(sshd:session): session closed for user core May 10 09:55:21.744841 systemd[1]: sshd@7-10.0.0.34:22-10.0.0.1:35308.service: Deactivated successfully. May 10 09:55:21.747126 systemd[1]: session-8.scope: Deactivated successfully. May 10 09:55:21.748088 systemd-logind[1541]: Session 8 logged out. Waiting for processes to exit. May 10 09:55:21.749591 systemd-logind[1541]: Removed session 8. May 10 09:55:26.755500 systemd[1]: Started sshd@8-10.0.0.34:22-10.0.0.1:35580.service - OpenSSH per-connection server daemon (10.0.0.1:35580). 
May 10 09:55:26.943169 sshd[3684]: Accepted publickey for core from 10.0.0.1 port 35580 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs May 10 09:55:26.944865 sshd-session[3684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 09:55:26.952829 systemd-logind[1541]: New session 9 of user core. May 10 09:55:26.957348 systemd[1]: Started session-9.scope - Session 9 of User core. May 10 09:55:27.099519 sshd[3686]: Connection closed by 10.0.0.1 port 35580 May 10 09:55:27.101348 sshd-session[3684]: pam_unix(sshd:session): session closed for user core May 10 09:55:27.107075 systemd-logind[1541]: Session 9 logged out. Waiting for processes to exit. May 10 09:55:27.107565 systemd[1]: sshd@8-10.0.0.34:22-10.0.0.1:35580.service: Deactivated successfully. May 10 09:55:27.110085 systemd[1]: session-9.scope: Deactivated successfully. May 10 09:55:27.112724 systemd-logind[1541]: Removed session 9. May 10 09:55:27.364274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3161353472.mount: Deactivated successfully. 
May 10 09:55:27.845572 containerd[1557]: time="2025-05-10T09:55:27.845089369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 10 09:55:27.848696 containerd[1557]: time="2025-05-10T09:55:27.848644721Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 7.211818131s" May 10 09:55:27.848696 containerd[1557]: time="2025-05-10T09:55:27.848684427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 10 09:55:27.864417 containerd[1557]: time="2025-05-10T09:55:27.864365099Z" level=info msg="CreateContainer within sandbox \"6d08e02b60ae1b7f49cb21660105748b457bb6fc3d3dca7957d0097db56a1422\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 10 09:55:27.872404 containerd[1557]: time="2025-05-10T09:55:27.834294668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:55:27.873122 containerd[1557]: time="2025-05-10T09:55:27.873082703Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:55:27.873662 containerd[1557]: time="2025-05-10T09:55:27.873630536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:55:27.877704 containerd[1557]: time="2025-05-10T09:55:27.877653841Z" level=info msg="Container 
ff0812f962a1b28101760b2f1d04c567aa9b3d0fef034d3d8e15f95c7425e796: CDI devices from CRI Config.CDIDevices: []" May 10 09:55:27.910754 containerd[1557]: time="2025-05-10T09:55:27.910687367Z" level=info msg="CreateContainer within sandbox \"6d08e02b60ae1b7f49cb21660105748b457bb6fc3d3dca7957d0097db56a1422\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ff0812f962a1b28101760b2f1d04c567aa9b3d0fef034d3d8e15f95c7425e796\"" May 10 09:55:27.911377 containerd[1557]: time="2025-05-10T09:55:27.911320362Z" level=info msg="StartContainer for \"ff0812f962a1b28101760b2f1d04c567aa9b3d0fef034d3d8e15f95c7425e796\"" May 10 09:55:27.912836 containerd[1557]: time="2025-05-10T09:55:27.912810811Z" level=info msg="connecting to shim ff0812f962a1b28101760b2f1d04c567aa9b3d0fef034d3d8e15f95c7425e796" address="unix:///run/containerd/s/e551fdcce16118e60663994a0db3d2b5524e9d0b3fcf73123ccb56ae3bf40ec8" protocol=ttrpc version=3 May 10 09:55:28.000349 systemd[1]: Started cri-containerd-ff0812f962a1b28101760b2f1d04c567aa9b3d0fef034d3d8e15f95c7425e796.scope - libcontainer container ff0812f962a1b28101760b2f1d04c567aa9b3d0fef034d3d8e15f95c7425e796. May 10 09:55:28.310863 containerd[1557]: time="2025-05-10T09:55:28.310809136Z" level=info msg="StartContainer for \"ff0812f962a1b28101760b2f1d04c567aa9b3d0fef034d3d8e15f95c7425e796\" returns successfully" May 10 09:55:28.326951 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 10 09:55:28.327684 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 10 09:55:28.670804 kubelet[2671]: I0510 09:55:28.670614 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8rrjp" podStartSLOduration=1.872615182 podStartE2EDuration="22.670588877s" podCreationTimestamp="2025-05-10 09:55:06 +0000 UTC" firstStartedPulling="2025-05-10 09:55:07.051534775 +0000 UTC m=+15.675138561" lastFinishedPulling="2025-05-10 09:55:27.84950847 +0000 UTC m=+36.473112256" observedRunningTime="2025-05-10 09:55:28.669299697 +0000 UTC m=+37.292903483" watchObservedRunningTime="2025-05-10 09:55:28.670588877 +0000 UTC m=+37.294192663" May 10 09:55:28.774399 containerd[1557]: time="2025-05-10T09:55:28.774349194Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff0812f962a1b28101760b2f1d04c567aa9b3d0fef034d3d8e15f95c7425e796\" id:\"f24ca5bb65490888af844548093e978e1860ba986fe754c878ff83f71ba80661\" pid:3775 exit_status:1 exited_at:{seconds:1746870928 nanos:774030614}" May 10 09:55:29.732172 kernel: bpftool[3923]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 10 09:55:29.778725 containerd[1557]: time="2025-05-10T09:55:29.778515866Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff0812f962a1b28101760b2f1d04c567aa9b3d0fef034d3d8e15f95c7425e796\" id:\"3acb5615e727ac8871c6cd46a334cfb0d51d6e1f3bcb11d2a8269ff4369d9c88\" pid:3892 exit_status:1 exited_at:{seconds:1746870929 nanos:778021315}" May 10 09:55:29.986277 systemd-networkd[1438]: vxlan.calico: Link UP May 10 09:55:29.986292 systemd-networkd[1438]: vxlan.calico: Gained carrier May 10 09:55:30.460727 containerd[1557]: time="2025-05-10T09:55:30.460641422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fbf9d7d57-x7rpn,Uid:1c02929c-f139-4177-9944-e4b3eea9b0b0,Namespace:calico-system,Attempt:0,}" May 10 09:55:30.603787 systemd-networkd[1438]: cali938968be0c3: Link UP May 10 09:55:30.604351 systemd-networkd[1438]: cali938968be0c3: Gained carrier May 10 
09:55:30.617101 containerd[1557]: 2025-05-10 09:55:30.503 [INFO][4009] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6fbf9d7d57--x7rpn-eth0 calico-kube-controllers-6fbf9d7d57- calico-system 1c02929c-f139-4177-9944-e4b3eea9b0b0 674 0 2025-05-10 09:55:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6fbf9d7d57 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6fbf9d7d57-x7rpn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali938968be0c3 [] []}} ContainerID="b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778" Namespace="calico-system" Pod="calico-kube-controllers-6fbf9d7d57-x7rpn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fbf9d7d57--x7rpn-" May 10 09:55:30.617101 containerd[1557]: 2025-05-10 09:55:30.503 [INFO][4009] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778" Namespace="calico-system" Pod="calico-kube-controllers-6fbf9d7d57-x7rpn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fbf9d7d57--x7rpn-eth0" May 10 09:55:30.617101 containerd[1557]: 2025-05-10 09:55:30.566 [INFO][4023] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778" HandleID="k8s-pod-network.b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778" Workload="localhost-k8s-calico--kube--controllers--6fbf9d7d57--x7rpn-eth0" May 10 09:55:30.617326 containerd[1557]: 2025-05-10 09:55:30.575 [INFO][4023] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778" 
HandleID="k8s-pod-network.b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778" Workload="localhost-k8s-calico--kube--controllers--6fbf9d7d57--x7rpn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000542af0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6fbf9d7d57-x7rpn", "timestamp":"2025-05-10 09:55:30.566225537 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 09:55:30.617326 containerd[1557]: 2025-05-10 09:55:30.575 [INFO][4023] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 09:55:30.617326 containerd[1557]: 2025-05-10 09:55:30.575 [INFO][4023] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 09:55:30.617326 containerd[1557]: 2025-05-10 09:55:30.575 [INFO][4023] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 10 09:55:30.617326 containerd[1557]: 2025-05-10 09:55:30.577 [INFO][4023] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778" host="localhost" May 10 09:55:30.617326 containerd[1557]: 2025-05-10 09:55:30.582 [INFO][4023] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 10 09:55:30.617326 containerd[1557]: 2025-05-10 09:55:30.585 [INFO][4023] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 10 09:55:30.617326 containerd[1557]: 2025-05-10 09:55:30.587 [INFO][4023] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 10 09:55:30.617326 containerd[1557]: 2025-05-10 09:55:30.588 [INFO][4023] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 10 09:55:30.617326 
containerd[1557]: 2025-05-10 09:55:30.588 [INFO][4023] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778" host="localhost" May 10 09:55:30.617574 containerd[1557]: 2025-05-10 09:55:30.590 [INFO][4023] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778 May 10 09:55:30.617574 containerd[1557]: 2025-05-10 09:55:30.593 [INFO][4023] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778" host="localhost" May 10 09:55:30.617574 containerd[1557]: 2025-05-10 09:55:30.597 [INFO][4023] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778" host="localhost" May 10 09:55:30.617574 containerd[1557]: 2025-05-10 09:55:30.597 [INFO][4023] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778" host="localhost" May 10 09:55:30.617574 containerd[1557]: 2025-05-10 09:55:30.597 [INFO][4023] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 10 09:55:30.617574 containerd[1557]: 2025-05-10 09:55:30.597 [INFO][4023] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778" HandleID="k8s-pod-network.b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778" Workload="localhost-k8s-calico--kube--controllers--6fbf9d7d57--x7rpn-eth0" May 10 09:55:30.617708 containerd[1557]: 2025-05-10 09:55:30.601 [INFO][4009] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778" Namespace="calico-system" Pod="calico-kube-controllers-6fbf9d7d57-x7rpn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fbf9d7d57--x7rpn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6fbf9d7d57--x7rpn-eth0", GenerateName:"calico-kube-controllers-6fbf9d7d57-", Namespace:"calico-system", SelfLink:"", UID:"1c02929c-f139-4177-9944-e4b3eea9b0b0", ResourceVersion:"674", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 9, 55, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fbf9d7d57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6fbf9d7d57-x7rpn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali938968be0c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 09:55:30.617761 containerd[1557]: 2025-05-10 09:55:30.601 [INFO][4009] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778" Namespace="calico-system" Pod="calico-kube-controllers-6fbf9d7d57-x7rpn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fbf9d7d57--x7rpn-eth0" May 10 09:55:30.617761 containerd[1557]: 2025-05-10 09:55:30.601 [INFO][4009] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali938968be0c3 ContainerID="b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778" Namespace="calico-system" Pod="calico-kube-controllers-6fbf9d7d57-x7rpn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fbf9d7d57--x7rpn-eth0" May 10 09:55:30.617761 containerd[1557]: 2025-05-10 09:55:30.603 [INFO][4009] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778" Namespace="calico-system" Pod="calico-kube-controllers-6fbf9d7d57-x7rpn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fbf9d7d57--x7rpn-eth0" May 10 09:55:30.617829 containerd[1557]: 2025-05-10 09:55:30.604 [INFO][4009] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778" Namespace="calico-system" Pod="calico-kube-controllers-6fbf9d7d57-x7rpn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fbf9d7d57--x7rpn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6fbf9d7d57--x7rpn-eth0", GenerateName:"calico-kube-controllers-6fbf9d7d57-", Namespace:"calico-system", SelfLink:"", UID:"1c02929c-f139-4177-9944-e4b3eea9b0b0", ResourceVersion:"674", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 9, 55, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fbf9d7d57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778", Pod:"calico-kube-controllers-6fbf9d7d57-x7rpn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali938968be0c3", MAC:"3a:bd:7b:37:9b:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 09:55:30.617879 containerd[1557]: 2025-05-10 09:55:30.611 [INFO][4009] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778" Namespace="calico-system" Pod="calico-kube-controllers-6fbf9d7d57-x7rpn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fbf9d7d57--x7rpn-eth0" May 10 09:55:30.760410 containerd[1557]: time="2025-05-10T09:55:30.760254437Z" level=info msg="connecting to shim b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778" 
address="unix:///run/containerd/s/259d6254422fd7b8f0fdda95c58b2ee8a35e26f986185e012b0b95310cfcc1fa" namespace=k8s.io protocol=ttrpc version=3 May 10 09:55:30.822303 systemd[1]: Started cri-containerd-b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778.scope - libcontainer container b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778. May 10 09:55:30.835334 systemd-resolved[1370]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 10 09:55:30.893209 containerd[1557]: time="2025-05-10T09:55:30.893127548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fbf9d7d57-x7rpn,Uid:1c02929c-f139-4177-9944-e4b3eea9b0b0,Namespace:calico-system,Attempt:0,} returns sandbox id \"b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778\"" May 10 09:55:30.894952 containerd[1557]: time="2025-05-10T09:55:30.894930706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 10 09:55:31.461014 containerd[1557]: time="2025-05-10T09:55:31.460956229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-949b84484-t574n,Uid:21335bd2-c0e3-4861-ab59-7e861cd41c21,Namespace:calico-apiserver,Attempt:0,}" May 10 09:55:31.562625 systemd-networkd[1438]: cali3393077c4fe: Link UP May 10 09:55:31.562990 systemd-networkd[1438]: cali3393077c4fe: Gained carrier May 10 09:55:31.574562 containerd[1557]: 2025-05-10 09:55:31.495 [INFO][4093] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--949b84484--t574n-eth0 calico-apiserver-949b84484- calico-apiserver 21335bd2-c0e3-4861-ab59-7e861cd41c21 677 0 2025-05-10 09:55:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:949b84484 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-949b84484-t574n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3393077c4fe [] []}} ContainerID="9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba" Namespace="calico-apiserver" Pod="calico-apiserver-949b84484-t574n" WorkloadEndpoint="localhost-k8s-calico--apiserver--949b84484--t574n-" May 10 09:55:31.574562 containerd[1557]: 2025-05-10 09:55:31.496 [INFO][4093] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba" Namespace="calico-apiserver" Pod="calico-apiserver-949b84484-t574n" WorkloadEndpoint="localhost-k8s-calico--apiserver--949b84484--t574n-eth0" May 10 09:55:31.574562 containerd[1557]: 2025-05-10 09:55:31.525 [INFO][4107] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba" HandleID="k8s-pod-network.9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba" Workload="localhost-k8s-calico--apiserver--949b84484--t574n-eth0" May 10 09:55:31.574872 containerd[1557]: 2025-05-10 09:55:31.532 [INFO][4107] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba" HandleID="k8s-pod-network.9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba" Workload="localhost-k8s-calico--apiserver--949b84484--t574n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ebd60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-949b84484-t574n", "timestamp":"2025-05-10 09:55:31.525359715 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} May 10 09:55:31.574872 containerd[1557]: 2025-05-10 09:55:31.532 [INFO][4107] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 09:55:31.574872 containerd[1557]: 2025-05-10 09:55:31.533 [INFO][4107] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 09:55:31.574872 containerd[1557]: 2025-05-10 09:55:31.533 [INFO][4107] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 10 09:55:31.574872 containerd[1557]: 2025-05-10 09:55:31.534 [INFO][4107] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba" host="localhost" May 10 09:55:31.574872 containerd[1557]: 2025-05-10 09:55:31.538 [INFO][4107] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 10 09:55:31.574872 containerd[1557]: 2025-05-10 09:55:31.541 [INFO][4107] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 10 09:55:31.574872 containerd[1557]: 2025-05-10 09:55:31.543 [INFO][4107] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 10 09:55:31.574872 containerd[1557]: 2025-05-10 09:55:31.545 [INFO][4107] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 10 09:55:31.574872 containerd[1557]: 2025-05-10 09:55:31.545 [INFO][4107] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba" host="localhost" May 10 09:55:31.577288 containerd[1557]: 2025-05-10 09:55:31.546 [INFO][4107] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba May 10 09:55:31.577288 containerd[1557]: 2025-05-10 09:55:31.551 [INFO][4107] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba" host="localhost" May 10 09:55:31.577288 containerd[1557]: 2025-05-10 09:55:31.556 [INFO][4107] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba" host="localhost" May 10 09:55:31.577288 containerd[1557]: 2025-05-10 09:55:31.556 [INFO][4107] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba" host="localhost" May 10 09:55:31.577288 containerd[1557]: 2025-05-10 09:55:31.556 [INFO][4107] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 09:55:31.577288 containerd[1557]: 2025-05-10 09:55:31.556 [INFO][4107] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba" HandleID="k8s-pod-network.9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba" Workload="localhost-k8s-calico--apiserver--949b84484--t574n-eth0" May 10 09:55:31.577476 containerd[1557]: 2025-05-10 09:55:31.559 [INFO][4093] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba" Namespace="calico-apiserver" Pod="calico-apiserver-949b84484-t574n" WorkloadEndpoint="localhost-k8s-calico--apiserver--949b84484--t574n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--949b84484--t574n-eth0", GenerateName:"calico-apiserver-949b84484-", Namespace:"calico-apiserver", SelfLink:"", UID:"21335bd2-c0e3-4861-ab59-7e861cd41c21", ResourceVersion:"677", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 9, 55, 6, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"949b84484", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-949b84484-t574n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3393077c4fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 09:55:31.577563 containerd[1557]: 2025-05-10 09:55:31.559 [INFO][4093] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba" Namespace="calico-apiserver" Pod="calico-apiserver-949b84484-t574n" WorkloadEndpoint="localhost-k8s-calico--apiserver--949b84484--t574n-eth0" May 10 09:55:31.577563 containerd[1557]: 2025-05-10 09:55:31.559 [INFO][4093] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3393077c4fe ContainerID="9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba" Namespace="calico-apiserver" Pod="calico-apiserver-949b84484-t574n" WorkloadEndpoint="localhost-k8s-calico--apiserver--949b84484--t574n-eth0" May 10 09:55:31.577563 containerd[1557]: 2025-05-10 09:55:31.561 [INFO][4093] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba" Namespace="calico-apiserver" 
Pod="calico-apiserver-949b84484-t574n" WorkloadEndpoint="localhost-k8s-calico--apiserver--949b84484--t574n-eth0" May 10 09:55:31.577661 containerd[1557]: 2025-05-10 09:55:31.562 [INFO][4093] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba" Namespace="calico-apiserver" Pod="calico-apiserver-949b84484-t574n" WorkloadEndpoint="localhost-k8s-calico--apiserver--949b84484--t574n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--949b84484--t574n-eth0", GenerateName:"calico-apiserver-949b84484-", Namespace:"calico-apiserver", SelfLink:"", UID:"21335bd2-c0e3-4861-ab59-7e861cd41c21", ResourceVersion:"677", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 9, 55, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"949b84484", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba", Pod:"calico-apiserver-949b84484-t574n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3393077c4fe", MAC:"a6:0b:79:99:b5:7a", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 09:55:31.577748 containerd[1557]: 2025-05-10 09:55:31.570 [INFO][4093] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba" Namespace="calico-apiserver" Pod="calico-apiserver-949b84484-t574n" WorkloadEndpoint="localhost-k8s-calico--apiserver--949b84484--t574n-eth0" May 10 09:55:31.626208 containerd[1557]: time="2025-05-10T09:55:31.626125969Z" level=info msg="connecting to shim 9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba" address="unix:///run/containerd/s/d893c67727b46940395f5e18f5dd9004d452aff8e81d1f25f5fa4499493ef70f" namespace=k8s.io protocol=ttrpc version=3 May 10 09:55:31.656268 systemd[1]: Started cri-containerd-9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba.scope - libcontainer container 9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba. May 10 09:55:31.660332 systemd-networkd[1438]: vxlan.calico: Gained IPv6LL May 10 09:55:31.672685 systemd-resolved[1370]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 10 09:55:31.704096 containerd[1557]: time="2025-05-10T09:55:31.704053618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-949b84484-t574n,Uid:21335bd2-c0e3-4861-ab59-7e861cd41c21,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba\"" May 10 09:55:31.916322 systemd-networkd[1438]: cali938968be0c3: Gained IPv6LL May 10 09:55:32.116658 systemd[1]: Started sshd@9-10.0.0.34:22-10.0.0.1:35586.service - OpenSSH per-connection server daemon (10.0.0.1:35586). 
May 10 09:55:32.173789 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 35586 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs May 10 09:55:32.175576 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 09:55:32.180654 systemd-logind[1541]: New session 10 of user core. May 10 09:55:32.190294 systemd[1]: Started session-10.scope - Session 10 of User core. May 10 09:55:32.383497 sshd[4179]: Connection closed by 10.0.0.1 port 35586 May 10 09:55:32.383799 sshd-session[4177]: pam_unix(sshd:session): session closed for user core May 10 09:55:32.387952 systemd[1]: sshd@9-10.0.0.34:22-10.0.0.1:35586.service: Deactivated successfully. May 10 09:55:32.390341 systemd[1]: session-10.scope: Deactivated successfully. May 10 09:55:32.391176 systemd-logind[1541]: Session 10 logged out. Waiting for processes to exit. May 10 09:55:32.392252 systemd-logind[1541]: Removed session 10. May 10 09:55:32.461341 containerd[1557]: time="2025-05-10T09:55:32.461187270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-949b84484-dd82b,Uid:15572e0d-4f88-4d8f-84b3-bdf66f3f1702,Namespace:calico-apiserver,Attempt:0,}" May 10 09:55:32.571121 systemd-networkd[1438]: calid4706d845ad: Link UP May 10 09:55:32.572264 systemd-networkd[1438]: calid4706d845ad: Gained carrier May 10 09:55:32.584186 containerd[1557]: 2025-05-10 09:55:32.498 [INFO][4192] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--949b84484--dd82b-eth0 calico-apiserver-949b84484- calico-apiserver 15572e0d-4f88-4d8f-84b3-bdf66f3f1702 678 0 2025-05-10 09:55:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:949b84484 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost 
calico-apiserver-949b84484-dd82b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid4706d845ad [] []}} ContainerID="38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9" Namespace="calico-apiserver" Pod="calico-apiserver-949b84484-dd82b" WorkloadEndpoint="localhost-k8s-calico--apiserver--949b84484--dd82b-" May 10 09:55:32.584186 containerd[1557]: 2025-05-10 09:55:32.498 [INFO][4192] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9" Namespace="calico-apiserver" Pod="calico-apiserver-949b84484-dd82b" WorkloadEndpoint="localhost-k8s-calico--apiserver--949b84484--dd82b-eth0" May 10 09:55:32.584186 containerd[1557]: 2025-05-10 09:55:32.526 [INFO][4206] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9" HandleID="k8s-pod-network.38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9" Workload="localhost-k8s-calico--apiserver--949b84484--dd82b-eth0" May 10 09:55:32.584496 containerd[1557]: 2025-05-10 09:55:32.534 [INFO][4206] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9" HandleID="k8s-pod-network.38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9" Workload="localhost-k8s-calico--apiserver--949b84484--dd82b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df4f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-949b84484-dd82b", "timestamp":"2025-05-10 09:55:32.526349421 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 09:55:32.584496 containerd[1557]: 2025-05-10 
09:55:32.534 [INFO][4206] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 09:55:32.584496 containerd[1557]: 2025-05-10 09:55:32.534 [INFO][4206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 09:55:32.584496 containerd[1557]: 2025-05-10 09:55:32.534 [INFO][4206] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 10 09:55:32.584496 containerd[1557]: 2025-05-10 09:55:32.538 [INFO][4206] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9" host="localhost" May 10 09:55:32.584496 containerd[1557]: 2025-05-10 09:55:32.542 [INFO][4206] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 10 09:55:32.584496 containerd[1557]: 2025-05-10 09:55:32.546 [INFO][4206] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 10 09:55:32.584496 containerd[1557]: 2025-05-10 09:55:32.548 [INFO][4206] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 10 09:55:32.584496 containerd[1557]: 2025-05-10 09:55:32.550 [INFO][4206] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 10 09:55:32.584496 containerd[1557]: 2025-05-10 09:55:32.550 [INFO][4206] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9" host="localhost" May 10 09:55:32.584842 containerd[1557]: 2025-05-10 09:55:32.551 [INFO][4206] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9 May 10 09:55:32.584842 containerd[1557]: 2025-05-10 09:55:32.554 [INFO][4206] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9" host="localhost" 
May 10 09:55:32.584842 containerd[1557]: 2025-05-10 09:55:32.562 [INFO][4206] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9" host="localhost" May 10 09:55:32.584842 containerd[1557]: 2025-05-10 09:55:32.562 [INFO][4206] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9" host="localhost" May 10 09:55:32.584842 containerd[1557]: 2025-05-10 09:55:32.562 [INFO][4206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 09:55:32.584842 containerd[1557]: 2025-05-10 09:55:32.562 [INFO][4206] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9" HandleID="k8s-pod-network.38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9" Workload="localhost-k8s-calico--apiserver--949b84484--dd82b-eth0" May 10 09:55:32.585032 containerd[1557]: 2025-05-10 09:55:32.566 [INFO][4192] cni-plugin/k8s.go 386: Populated endpoint ContainerID="38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9" Namespace="calico-apiserver" Pod="calico-apiserver-949b84484-dd82b" WorkloadEndpoint="localhost-k8s-calico--apiserver--949b84484--dd82b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--949b84484--dd82b-eth0", GenerateName:"calico-apiserver-949b84484-", Namespace:"calico-apiserver", SelfLink:"", UID:"15572e0d-4f88-4d8f-84b3-bdf66f3f1702", ResourceVersion:"678", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 9, 55, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"949b84484", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-949b84484-dd82b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid4706d845ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 09:55:32.585108 containerd[1557]: 2025-05-10 09:55:32.567 [INFO][4192] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9" Namespace="calico-apiserver" Pod="calico-apiserver-949b84484-dd82b" WorkloadEndpoint="localhost-k8s-calico--apiserver--949b84484--dd82b-eth0" May 10 09:55:32.585108 containerd[1557]: 2025-05-10 09:55:32.567 [INFO][4192] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid4706d845ad ContainerID="38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9" Namespace="calico-apiserver" Pod="calico-apiserver-949b84484-dd82b" WorkloadEndpoint="localhost-k8s-calico--apiserver--949b84484--dd82b-eth0" May 10 09:55:32.585108 containerd[1557]: 2025-05-10 09:55:32.572 [INFO][4192] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9" Namespace="calico-apiserver" Pod="calico-apiserver-949b84484-dd82b" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--949b84484--dd82b-eth0" May 10 09:55:32.585304 containerd[1557]: 2025-05-10 09:55:32.572 [INFO][4192] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9" Namespace="calico-apiserver" Pod="calico-apiserver-949b84484-dd82b" WorkloadEndpoint="localhost-k8s-calico--apiserver--949b84484--dd82b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--949b84484--dd82b-eth0", GenerateName:"calico-apiserver-949b84484-", Namespace:"calico-apiserver", SelfLink:"", UID:"15572e0d-4f88-4d8f-84b3-bdf66f3f1702", ResourceVersion:"678", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 9, 55, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"949b84484", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9", Pod:"calico-apiserver-949b84484-dd82b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid4706d845ad", MAC:"fe:e2:10:6a:81:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 
09:55:32.585417 containerd[1557]: 2025-05-10 09:55:32.580 [INFO][4192] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9" Namespace="calico-apiserver" Pod="calico-apiserver-949b84484-dd82b" WorkloadEndpoint="localhost-k8s-calico--apiserver--949b84484--dd82b-eth0" May 10 09:55:32.611997 containerd[1557]: time="2025-05-10T09:55:32.611925054Z" level=info msg="connecting to shim 38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9" address="unix:///run/containerd/s/ec2e1f93d9641d4eea24af8590b8dd95b90758e0176856c050de6bb48d077e9a" namespace=k8s.io protocol=ttrpc version=3 May 10 09:55:32.637512 systemd[1]: Started cri-containerd-38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9.scope - libcontainer container 38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9. May 10 09:55:32.651581 systemd-resolved[1370]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 10 09:55:32.729512 containerd[1557]: time="2025-05-10T09:55:32.729334945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-949b84484-dd82b,Uid:15572e0d-4f88-4d8f-84b3-bdf66f3f1702,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9\"" May 10 09:55:33.196449 systemd-networkd[1438]: cali3393077c4fe: Gained IPv6LL May 10 09:55:33.795222 containerd[1557]: time="2025-05-10T09:55:33.795110356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:55:33.796414 containerd[1557]: time="2025-05-10T09:55:33.796345713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 10 09:55:33.797880 containerd[1557]: time="2025-05-10T09:55:33.797835901Z" level=info msg="ImageCreate 
event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:55:33.800242 containerd[1557]: time="2025-05-10T09:55:33.800184826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 09:55:33.800894 containerd[1557]: time="2025-05-10T09:55:33.800851232Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.905888705s" May 10 09:55:33.800956 containerd[1557]: time="2025-05-10T09:55:33.800892329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 10 09:55:33.802084 containerd[1557]: time="2025-05-10T09:55:33.802058045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 10 09:55:33.812092 containerd[1557]: time="2025-05-10T09:55:33.810737837Z" level=info msg="CreateContainer within sandbox \"b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 10 09:55:33.819929 containerd[1557]: time="2025-05-10T09:55:33.819882705Z" level=info msg="Container facc87284af4418e572026c8fc843d84a26526716a4dd7dd6043daf423715a47: CDI devices from CRI Config.CDIDevices: []" May 10 09:55:33.833683 containerd[1557]: time="2025-05-10T09:55:33.833628581Z" level=info msg="CreateContainer within sandbox 
\"b3baf681f830ff2137c5a78ba7bf24be448093bb60df3d3c815c74a7b8426778\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"facc87284af4418e572026c8fc843d84a26526716a4dd7dd6043daf423715a47\"" May 10 09:55:33.834167 containerd[1557]: time="2025-05-10T09:55:33.834116662Z" level=info msg="StartContainer for \"facc87284af4418e572026c8fc843d84a26526716a4dd7dd6043daf423715a47\"" May 10 09:55:33.835156 containerd[1557]: time="2025-05-10T09:55:33.835105724Z" level=info msg="connecting to shim facc87284af4418e572026c8fc843d84a26526716a4dd7dd6043daf423715a47" address="unix:///run/containerd/s/259d6254422fd7b8f0fdda95c58b2ee8a35e26f986185e012b0b95310cfcc1fa" protocol=ttrpc version=3 May 10 09:55:33.859376 systemd[1]: Started cri-containerd-facc87284af4418e572026c8fc843d84a26526716a4dd7dd6043daf423715a47.scope - libcontainer container facc87284af4418e572026c8fc843d84a26526716a4dd7dd6043daf423715a47. May 10 09:55:33.921249 containerd[1557]: time="2025-05-10T09:55:33.921198585Z" level=info msg="StartContainer for \"facc87284af4418e572026c8fc843d84a26526716a4dd7dd6043daf423715a47\" returns successfully" May 10 09:55:34.156337 systemd-networkd[1438]: calid4706d845ad: Gained IPv6LL May 10 09:55:34.688174 kubelet[2671]: I0510 09:55:34.688065 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6fbf9d7d57-x7rpn" podStartSLOduration=25.780769261 podStartE2EDuration="28.688043847s" podCreationTimestamp="2025-05-10 09:55:06 +0000 UTC" firstStartedPulling="2025-05-10 09:55:30.894672009 +0000 UTC m=+39.518275795" lastFinishedPulling="2025-05-10 09:55:33.801946595 +0000 UTC m=+42.425550381" observedRunningTime="2025-05-10 09:55:34.687550637 +0000 UTC m=+43.311154423" watchObservedRunningTime="2025-05-10 09:55:34.688043847 +0000 UTC m=+43.311647633" May 10 09:55:35.460951 containerd[1557]: time="2025-05-10T09:55:35.460887658Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-rcnv8,Uid:88cdf3c5-5b30-4a73-9732-439df695416e,Namespace:kube-system,Attempt:0,}" May 10 09:55:35.461432 containerd[1557]: time="2025-05-10T09:55:35.461000501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-82ncp,Uid:d08ba91f-1a46-45d6-a939-9e2d9f0457e9,Namespace:calico-system,Attempt:0,}" May 10 09:55:35.461432 containerd[1557]: time="2025-05-10T09:55:35.461363054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9w4bk,Uid:19d8f44f-2085-4ef7-b080-05c9543f3869,Namespace:kube-system,Attempt:0,}" May 10 09:55:35.682342 kubelet[2671]: I0510 09:55:35.682288 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 09:55:35.720694 systemd-networkd[1438]: cali32844763759: Link UP May 10 09:55:35.720903 systemd-networkd[1438]: cali32844763759: Gained carrier May 10 09:55:35.734311 containerd[1557]: 2025-05-10 09:55:35.634 [INFO][4338] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--82ncp-eth0 csi-node-driver- calico-system d08ba91f-1a46-45d6-a939-9e2d9f0457e9 590 0 2025-05-10 09:55:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-82ncp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali32844763759 [] []}} ContainerID="e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e" Namespace="calico-system" Pod="csi-node-driver-82ncp" WorkloadEndpoint="localhost-k8s-csi--node--driver--82ncp-" May 10 09:55:35.734311 containerd[1557]: 2025-05-10 09:55:35.634 [INFO][4338] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e" Namespace="calico-system" Pod="csi-node-driver-82ncp" WorkloadEndpoint="localhost-k8s-csi--node--driver--82ncp-eth0" May 10 09:55:35.734311 containerd[1557]: 2025-05-10 09:55:35.672 [INFO][4368] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e" HandleID="k8s-pod-network.e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e" Workload="localhost-k8s-csi--node--driver--82ncp-eth0" May 10 09:55:35.734515 containerd[1557]: 2025-05-10 09:55:35.682 [INFO][4368] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e" HandleID="k8s-pod-network.e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e" Workload="localhost-k8s-csi--node--driver--82ncp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003936b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-82ncp", "timestamp":"2025-05-10 09:55:35.672820457 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 09:55:35.734515 containerd[1557]: 2025-05-10 09:55:35.682 [INFO][4368] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 09:55:35.734515 containerd[1557]: 2025-05-10 09:55:35.682 [INFO][4368] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 09:55:35.734515 containerd[1557]: 2025-05-10 09:55:35.683 [INFO][4368] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 10 09:55:35.734515 containerd[1557]: 2025-05-10 09:55:35.686 [INFO][4368] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e" host="localhost" May 10 09:55:35.734515 containerd[1557]: 2025-05-10 09:55:35.689 [INFO][4368] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 10 09:55:35.734515 containerd[1557]: 2025-05-10 09:55:35.693 [INFO][4368] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 10 09:55:35.734515 containerd[1557]: 2025-05-10 09:55:35.695 [INFO][4368] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 10 09:55:35.734515 containerd[1557]: 2025-05-10 09:55:35.697 [INFO][4368] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 10 09:55:35.734515 containerd[1557]: 2025-05-10 09:55:35.697 [INFO][4368] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e" host="localhost" May 10 09:55:35.735716 containerd[1557]: 2025-05-10 09:55:35.699 [INFO][4368] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e May 10 09:55:35.735716 containerd[1557]: 2025-05-10 09:55:35.703 [INFO][4368] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e" host="localhost" May 10 09:55:35.735716 containerd[1557]: 2025-05-10 09:55:35.711 [INFO][4368] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e" host="localhost" May 10 09:55:35.735716 containerd[1557]: 2025-05-10 09:55:35.711 [INFO][4368] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e" host="localhost" May 10 09:55:35.735716 containerd[1557]: 2025-05-10 09:55:35.711 [INFO][4368] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 09:55:35.735716 containerd[1557]: 2025-05-10 09:55:35.711 [INFO][4368] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e" HandleID="k8s-pod-network.e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e" Workload="localhost-k8s-csi--node--driver--82ncp-eth0" May 10 09:55:35.735850 containerd[1557]: 2025-05-10 09:55:35.717 [INFO][4338] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e" Namespace="calico-system" Pod="csi-node-driver-82ncp" WorkloadEndpoint="localhost-k8s-csi--node--driver--82ncp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--82ncp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d08ba91f-1a46-45d6-a939-9e2d9f0457e9", ResourceVersion:"590", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 9, 55, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-82ncp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali32844763759", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 09:55:35.735850 containerd[1557]: 2025-05-10 09:55:35.717 [INFO][4338] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e" Namespace="calico-system" Pod="csi-node-driver-82ncp" WorkloadEndpoint="localhost-k8s-csi--node--driver--82ncp-eth0" May 10 09:55:35.735930 containerd[1557]: 2025-05-10 09:55:35.717 [INFO][4338] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32844763759 ContainerID="e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e" Namespace="calico-system" Pod="csi-node-driver-82ncp" WorkloadEndpoint="localhost-k8s-csi--node--driver--82ncp-eth0" May 10 09:55:35.735930 containerd[1557]: 2025-05-10 09:55:35.721 [INFO][4338] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e" Namespace="calico-system" Pod="csi-node-driver-82ncp" WorkloadEndpoint="localhost-k8s-csi--node--driver--82ncp-eth0" May 10 09:55:35.735976 containerd[1557]: 2025-05-10 09:55:35.721 [INFO][4338] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e" Namespace="calico-system" 
Pod="csi-node-driver-82ncp" WorkloadEndpoint="localhost-k8s-csi--node--driver--82ncp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--82ncp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d08ba91f-1a46-45d6-a939-9e2d9f0457e9", ResourceVersion:"590", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 9, 55, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e", Pod:"csi-node-driver-82ncp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali32844763759", MAC:"82:31:7d:19:cb:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 09:55:35.736028 containerd[1557]: 2025-05-10 09:55:35.730 [INFO][4338] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e" Namespace="calico-system" Pod="csi-node-driver-82ncp" WorkloadEndpoint="localhost-k8s-csi--node--driver--82ncp-eth0" May 10 09:55:35.766064 containerd[1557]: 
time="2025-05-10T09:55:35.766005716Z" level=info msg="connecting to shim e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e" address="unix:///run/containerd/s/3a1a9ad1cbfc952106bfd8d92b2fc91fa7168f3370d89e874161a259dd89f01a" namespace=k8s.io protocol=ttrpc version=3 May 10 09:55:35.798613 systemd[1]: Started cri-containerd-e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e.scope - libcontainer container e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e. May 10 09:55:35.817204 systemd-resolved[1370]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 10 09:55:35.834676 systemd-networkd[1438]: cali8c951e2788d: Link UP May 10 09:55:35.836410 systemd-networkd[1438]: cali8c951e2788d: Gained carrier May 10 09:55:35.845435 containerd[1557]: time="2025-05-10T09:55:35.845393167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-82ncp,Uid:d08ba91f-1a46-45d6-a939-9e2d9f0457e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e\"" May 10 09:55:35.855123 containerd[1557]: 2025-05-10 09:55:35.635 [INFO][4333] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--9w4bk-eth0 coredns-6f6b679f8f- kube-system 19d8f44f-2085-4ef7-b080-05c9543f3869 676 0 2025-05-10 09:54:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-9w4bk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8c951e2788d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423" Namespace="kube-system" Pod="coredns-6f6b679f8f-9w4bk" 
WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9w4bk-" May 10 09:55:35.855123 containerd[1557]: 2025-05-10 09:55:35.635 [INFO][4333] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423" Namespace="kube-system" Pod="coredns-6f6b679f8f-9w4bk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9w4bk-eth0" May 10 09:55:35.855123 containerd[1557]: 2025-05-10 09:55:35.676 [INFO][4370] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423" HandleID="k8s-pod-network.1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423" Workload="localhost-k8s-coredns--6f6b679f8f--9w4bk-eth0" May 10 09:55:35.856431 containerd[1557]: 2025-05-10 09:55:35.687 [INFO][4370] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423" HandleID="k8s-pod-network.1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423" Workload="localhost-k8s-coredns--6f6b679f8f--9w4bk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001330b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-9w4bk", "timestamp":"2025-05-10 09:55:35.676398606 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 09:55:35.856431 containerd[1557]: 2025-05-10 09:55:35.687 [INFO][4370] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 09:55:35.856431 containerd[1557]: 2025-05-10 09:55:35.712 [INFO][4370] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 09:55:35.856431 containerd[1557]: 2025-05-10 09:55:35.712 [INFO][4370] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 10 09:55:35.856431 containerd[1557]: 2025-05-10 09:55:35.788 [INFO][4370] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423" host="localhost" May 10 09:55:35.856431 containerd[1557]: 2025-05-10 09:55:35.796 [INFO][4370] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 10 09:55:35.856431 containerd[1557]: 2025-05-10 09:55:35.802 [INFO][4370] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 10 09:55:35.856431 containerd[1557]: 2025-05-10 09:55:35.805 [INFO][4370] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 10 09:55:35.856431 containerd[1557]: 2025-05-10 09:55:35.808 [INFO][4370] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 10 09:55:35.856431 containerd[1557]: 2025-05-10 09:55:35.808 [INFO][4370] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423" host="localhost" May 10 09:55:35.856811 containerd[1557]: 2025-05-10 09:55:35.810 [INFO][4370] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423 May 10 09:55:35.856811 containerd[1557]: 2025-05-10 09:55:35.814 [INFO][4370] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423" host="localhost" May 10 09:55:35.856811 containerd[1557]: 2025-05-10 09:55:35.821 [INFO][4370] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423" host="localhost" May 10 09:55:35.856811 containerd[1557]: 2025-05-10 09:55:35.821 [INFO][4370] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423" host="localhost" May 10 09:55:35.856811 containerd[1557]: 2025-05-10 09:55:35.821 [INFO][4370] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 09:55:35.856811 containerd[1557]: 2025-05-10 09:55:35.821 [INFO][4370] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423" HandleID="k8s-pod-network.1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423" Workload="localhost-k8s-coredns--6f6b679f8f--9w4bk-eth0" May 10 09:55:35.857083 containerd[1557]: 2025-05-10 09:55:35.825 [INFO][4333] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423" Namespace="kube-system" Pod="coredns-6f6b679f8f-9w4bk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9w4bk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--9w4bk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"19d8f44f-2085-4ef7-b080-05c9543f3869", ResourceVersion:"676", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 9, 54, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-9w4bk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8c951e2788d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 09:55:35.857269 containerd[1557]: 2025-05-10 09:55:35.826 [INFO][4333] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423" Namespace="kube-system" Pod="coredns-6f6b679f8f-9w4bk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9w4bk-eth0" May 10 09:55:35.857269 containerd[1557]: 2025-05-10 09:55:35.826 [INFO][4333] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c951e2788d ContainerID="1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423" Namespace="kube-system" Pod="coredns-6f6b679f8f-9w4bk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9w4bk-eth0" May 10 09:55:35.857269 containerd[1557]: 2025-05-10 09:55:35.837 [INFO][4333] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423" Namespace="kube-system" Pod="coredns-6f6b679f8f-9w4bk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9w4bk-eth0" May 10 
09:55:35.857390 containerd[1557]: 2025-05-10 09:55:35.838 [INFO][4333] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423" Namespace="kube-system" Pod="coredns-6f6b679f8f-9w4bk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9w4bk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--9w4bk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"19d8f44f-2085-4ef7-b080-05c9543f3869", ResourceVersion:"676", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 9, 54, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423", Pod:"coredns-6f6b679f8f-9w4bk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8c951e2788d", MAC:"5e:95:ad:59:fb:5a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 09:55:35.857390 containerd[1557]: 2025-05-10 09:55:35.850 [INFO][4333] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423" Namespace="kube-system" Pod="coredns-6f6b679f8f-9w4bk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9w4bk-eth0" May 10 09:55:35.883506 containerd[1557]: time="2025-05-10T09:55:35.883425694Z" level=info msg="connecting to shim 1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423" address="unix:///run/containerd/s/80d08510306b992e6feb419ffd9d211aea398c51f9a190c1c12ec621f235c4e0" namespace=k8s.io protocol=ttrpc version=3 May 10 09:55:35.910307 systemd[1]: Started cri-containerd-1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423.scope - libcontainer container 1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423. 
May 10 09:55:35.925541 systemd-resolved[1370]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 10 09:55:35.940567 systemd-networkd[1438]: cali770d9934d84: Link UP May 10 09:55:35.941644 systemd-networkd[1438]: cali770d9934d84: Gained carrier May 10 09:55:35.958433 containerd[1557]: 2025-05-10 09:55:35.614 [INFO][4319] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--rcnv8-eth0 coredns-6f6b679f8f- kube-system 88cdf3c5-5b30-4a73-9732-439df695416e 670 0 2025-05-10 09:54:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-rcnv8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali770d9934d84 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0" Namespace="kube-system" Pod="coredns-6f6b679f8f-rcnv8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--rcnv8-" May 10 09:55:35.958433 containerd[1557]: 2025-05-10 09:55:35.614 [INFO][4319] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0" Namespace="kube-system" Pod="coredns-6f6b679f8f-rcnv8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--rcnv8-eth0" May 10 09:55:35.958433 containerd[1557]: 2025-05-10 09:55:35.677 [INFO][4360] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0" HandleID="k8s-pod-network.8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0" Workload="localhost-k8s-coredns--6f6b679f8f--rcnv8-eth0" May 10 09:55:35.958433 containerd[1557]: 2025-05-10 09:55:35.688 [INFO][4360] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0" HandleID="k8s-pod-network.8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0" Workload="localhost-k8s-coredns--6f6b679f8f--rcnv8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037f360), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-rcnv8", "timestamp":"2025-05-10 09:55:35.677779917 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 09:55:35.958433 containerd[1557]: 2025-05-10 09:55:35.688 [INFO][4360] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 09:55:35.958433 containerd[1557]: 2025-05-10 09:55:35.824 [INFO][4360] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 09:55:35.958433 containerd[1557]: 2025-05-10 09:55:35.824 [INFO][4360] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 10 09:55:35.958433 containerd[1557]: 2025-05-10 09:55:35.887 [INFO][4360] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0" host="localhost" May 10 09:55:35.958433 containerd[1557]: 2025-05-10 09:55:35.897 [INFO][4360] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 10 09:55:35.958433 containerd[1557]: 2025-05-10 09:55:35.903 [INFO][4360] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 10 09:55:35.958433 containerd[1557]: 2025-05-10 09:55:35.905 [INFO][4360] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 10 09:55:35.958433 containerd[1557]: 2025-05-10 09:55:35.910 [INFO][4360] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" May 10 09:55:35.958433 containerd[1557]: 2025-05-10 09:55:35.910 [INFO][4360] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0" host="localhost" May 10 09:55:35.958433 containerd[1557]: 2025-05-10 09:55:35.913 [INFO][4360] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0 May 10 09:55:35.958433 containerd[1557]: 2025-05-10 09:55:35.922 [INFO][4360] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0" host="localhost" May 10 09:55:35.958433 containerd[1557]: 2025-05-10 09:55:35.932 [INFO][4360] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0" host="localhost" May 10 09:55:35.958433 containerd[1557]: 2025-05-10 09:55:35.932 [INFO][4360] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0" host="localhost" May 10 09:55:35.958433 containerd[1557]: 2025-05-10 09:55:35.932 [INFO][4360] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 10 09:55:35.958433 containerd[1557]: 2025-05-10 09:55:35.932 [INFO][4360] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0" HandleID="k8s-pod-network.8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0" Workload="localhost-k8s-coredns--6f6b679f8f--rcnv8-eth0" May 10 09:55:35.958986 containerd[1557]: 2025-05-10 09:55:35.937 [INFO][4319] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0" Namespace="kube-system" Pod="coredns-6f6b679f8f-rcnv8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--rcnv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--rcnv8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"88cdf3c5-5b30-4a73-9732-439df695416e", ResourceVersion:"670", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 9, 54, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-rcnv8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali770d9934d84", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 09:55:35.958986 containerd[1557]: 2025-05-10 09:55:35.937 [INFO][4319] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0" Namespace="kube-system" Pod="coredns-6f6b679f8f-rcnv8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--rcnv8-eth0" May 10 09:55:35.958986 containerd[1557]: 2025-05-10 09:55:35.937 [INFO][4319] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali770d9934d84 ContainerID="8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0" Namespace="kube-system" Pod="coredns-6f6b679f8f-rcnv8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--rcnv8-eth0" May 10 09:55:35.958986 containerd[1557]: 2025-05-10 09:55:35.942 [INFO][4319] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0" Namespace="kube-system" Pod="coredns-6f6b679f8f-rcnv8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--rcnv8-eth0" May 10 09:55:35.958986 containerd[1557]: 2025-05-10 09:55:35.942 [INFO][4319] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0" Namespace="kube-system" Pod="coredns-6f6b679f8f-rcnv8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--rcnv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--rcnv8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"88cdf3c5-5b30-4a73-9732-439df695416e", ResourceVersion:"670", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 9, 54, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0", Pod:"coredns-6f6b679f8f-rcnv8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali770d9934d84", MAC:"aa:0b:27:27:d2:7f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 09:55:35.958986 containerd[1557]: 2025-05-10 09:55:35.953 [INFO][4319] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0" Namespace="kube-system" Pod="coredns-6f6b679f8f-rcnv8" 
WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--rcnv8-eth0" May 10 09:55:35.977018 containerd[1557]: time="2025-05-10T09:55:35.976886183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9w4bk,Uid:19d8f44f-2085-4ef7-b080-05c9543f3869,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423\"" May 10 09:55:35.980055 containerd[1557]: time="2025-05-10T09:55:35.980012562Z" level=info msg="CreateContainer within sandbox \"1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 09:55:35.995362 containerd[1557]: time="2025-05-10T09:55:35.995295207Z" level=info msg="Container 313e6addc23bce02c4c2ae3f109953ba99a47b67a9b8701fbf76496852b686ae: CDI devices from CRI Config.CDIDevices: []" May 10 09:55:36.002040 containerd[1557]: time="2025-05-10T09:55:36.001720259Z" level=info msg="connecting to shim 8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0" address="unix:///run/containerd/s/824d214af6fa62b05ae18859071c21d04a7c06fa4621831926270bb6156f99ca" namespace=k8s.io protocol=ttrpc version=3 May 10 09:55:36.030477 systemd[1]: Started cri-containerd-8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0.scope - libcontainer container 8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0. 
May 10 09:55:36.045735 systemd-resolved[1370]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 10 09:55:36.133693 containerd[1557]: time="2025-05-10T09:55:36.133642893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rcnv8,Uid:88cdf3c5-5b30-4a73-9732-439df695416e,Namespace:kube-system,Attempt:0,} returns sandbox id \"8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0\"" May 10 09:55:36.134522 containerd[1557]: time="2025-05-10T09:55:36.134317393Z" level=info msg="CreateContainer within sandbox \"1f15bc9c825cd809ccf6b5d18c93226975ce81cc524b36f9d6a93853fc715423\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"313e6addc23bce02c4c2ae3f109953ba99a47b67a9b8701fbf76496852b686ae\"" May 10 09:55:36.135408 containerd[1557]: time="2025-05-10T09:55:36.134733607Z" level=info msg="StartContainer for \"313e6addc23bce02c4c2ae3f109953ba99a47b67a9b8701fbf76496852b686ae\"" May 10 09:55:36.136205 containerd[1557]: time="2025-05-10T09:55:36.136172687Z" level=info msg="CreateContainer within sandbox \"8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 09:55:36.136453 containerd[1557]: time="2025-05-10T09:55:36.136410876Z" level=info msg="connecting to shim 313e6addc23bce02c4c2ae3f109953ba99a47b67a9b8701fbf76496852b686ae" address="unix:///run/containerd/s/80d08510306b992e6feb419ffd9d211aea398c51f9a190c1c12ec621f235c4e0" protocol=ttrpc version=3 May 10 09:55:36.148272 containerd[1557]: time="2025-05-10T09:55:36.148198317Z" level=info msg="Container 40a94a6dd27c40315c11fada08df337bb3a7eefe5ef07301af6dcf65984e6ad7: CDI devices from CRI Config.CDIDevices: []" May 10 09:55:36.158742 containerd[1557]: time="2025-05-10T09:55:36.158681500Z" level=info msg="CreateContainer within sandbox \"8415325c5f4468704d9fb9b9bda0a00e3977d779ae1e40809004c165e55f03c0\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"40a94a6dd27c40315c11fada08df337bb3a7eefe5ef07301af6dcf65984e6ad7\"" May 10 09:55:36.159376 containerd[1557]: time="2025-05-10T09:55:36.159340732Z" level=info msg="StartContainer for \"40a94a6dd27c40315c11fada08df337bb3a7eefe5ef07301af6dcf65984e6ad7\"" May 10 09:55:36.163548 containerd[1557]: time="2025-05-10T09:55:36.163439061Z" level=info msg="connecting to shim 40a94a6dd27c40315c11fada08df337bb3a7eefe5ef07301af6dcf65984e6ad7" address="unix:///run/containerd/s/824d214af6fa62b05ae18859071c21d04a7c06fa4621831926270bb6156f99ca" protocol=ttrpc version=3 May 10 09:55:36.165373 systemd[1]: Started cri-containerd-313e6addc23bce02c4c2ae3f109953ba99a47b67a9b8701fbf76496852b686ae.scope - libcontainer container 313e6addc23bce02c4c2ae3f109953ba99a47b67a9b8701fbf76496852b686ae. May 10 09:55:36.195368 systemd[1]: Started cri-containerd-40a94a6dd27c40315c11fada08df337bb3a7eefe5ef07301af6dcf65984e6ad7.scope - libcontainer container 40a94a6dd27c40315c11fada08df337bb3a7eefe5ef07301af6dcf65984e6ad7. 
May 10 09:55:36.220609 containerd[1557]: time="2025-05-10T09:55:36.220532080Z" level=info msg="StartContainer for \"313e6addc23bce02c4c2ae3f109953ba99a47b67a9b8701fbf76496852b686ae\" returns successfully"
May 10 09:55:36.241311 containerd[1557]: time="2025-05-10T09:55:36.237912124Z" level=info msg="StartContainer for \"40a94a6dd27c40315c11fada08df337bb3a7eefe5ef07301af6dcf65984e6ad7\" returns successfully"
May 10 09:55:36.662956 containerd[1557]: time="2025-05-10T09:55:36.662886994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:55:36.663696 containerd[1557]: time="2025-05-10T09:55:36.663656042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437"
May 10 09:55:36.664841 containerd[1557]: time="2025-05-10T09:55:36.664807591Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:55:36.666815 containerd[1557]: time="2025-05-10T09:55:36.666788211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:55:36.667524 containerd[1557]: time="2025-05-10T09:55:36.667476808Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 2.865386842s"
May 10 09:55:36.667524 containerd[1557]: time="2025-05-10T09:55:36.667509319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\""
May 10 09:55:36.668399 containerd[1557]: time="2025-05-10T09:55:36.668375721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\""
May 10 09:55:36.669713 containerd[1557]: time="2025-05-10T09:55:36.669675298Z" level=info msg="CreateContainer within sandbox \"9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 10 09:55:36.678600 containerd[1557]: time="2025-05-10T09:55:36.677965943Z" level=info msg="Container 8fb6efd183c8b7d2ff4a5c8a03555433dd2363fc4bf1ebfcbdbe96fd60d67642: CDI devices from CRI Config.CDIDevices: []"
May 10 09:55:36.685959 containerd[1557]: time="2025-05-10T09:55:36.685906739Z" level=info msg="CreateContainer within sandbox \"9ac7b2081323950ae944a7d66a10bceccbf4dd58ed9d03b641412f821e556fba\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8fb6efd183c8b7d2ff4a5c8a03555433dd2363fc4bf1ebfcbdbe96fd60d67642\""
May 10 09:55:36.686325 containerd[1557]: time="2025-05-10T09:55:36.686297885Z" level=info msg="StartContainer for \"8fb6efd183c8b7d2ff4a5c8a03555433dd2363fc4bf1ebfcbdbe96fd60d67642\""
May 10 09:55:36.687611 containerd[1557]: time="2025-05-10T09:55:36.687395132Z" level=info msg="connecting to shim 8fb6efd183c8b7d2ff4a5c8a03555433dd2363fc4bf1ebfcbdbe96fd60d67642" address="unix:///run/containerd/s/d893c67727b46940395f5e18f5dd9004d452aff8e81d1f25f5fa4499493ef70f" protocol=ttrpc version=3
May 10 09:55:36.701957 kubelet[2671]: I0510 09:55:36.701854 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-rcnv8" podStartSLOduration=38.701832284 podStartE2EDuration="38.701832284s" podCreationTimestamp="2025-05-10 09:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 09:55:36.700781063 +0000 UTC m=+45.324384849" watchObservedRunningTime="2025-05-10 09:55:36.701832284 +0000 UTC m=+45.325436070"
May 10 09:55:36.715570 systemd[1]: Started cri-containerd-8fb6efd183c8b7d2ff4a5c8a03555433dd2363fc4bf1ebfcbdbe96fd60d67642.scope - libcontainer container 8fb6efd183c8b7d2ff4a5c8a03555433dd2363fc4bf1ebfcbdbe96fd60d67642.
May 10 09:55:36.785304 containerd[1557]: time="2025-05-10T09:55:36.784748197Z" level=info msg="StartContainer for \"8fb6efd183c8b7d2ff4a5c8a03555433dd2363fc4bf1ebfcbdbe96fd60d67642\" returns successfully"
May 10 09:55:37.229358 systemd-networkd[1438]: cali32844763759: Gained IPv6LL
May 10 09:55:37.292375 systemd-networkd[1438]: cali770d9934d84: Gained IPv6LL
May 10 09:55:37.292791 systemd-networkd[1438]: cali8c951e2788d: Gained IPv6LL
May 10 09:55:37.377984 containerd[1557]: time="2025-05-10T09:55:37.377910557Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:55:37.379031 containerd[1557]: time="2025-05-10T09:55:37.379002093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77"
May 10 09:55:37.381622 containerd[1557]: time="2025-05-10T09:55:37.381561142Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 713.0843ms"
May 10 09:55:37.381622 containerd[1557]: time="2025-05-10T09:55:37.381607439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\""
May 10 09:55:37.384165 containerd[1557]: time="2025-05-10T09:55:37.382983670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\""
May 10 09:55:37.384300 containerd[1557]: time="2025-05-10T09:55:37.384252401Z" level=info msg="CreateContainer within sandbox \"38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 10 09:55:37.399234 systemd[1]: Started sshd@10-10.0.0.34:22-10.0.0.1:43320.service - OpenSSH per-connection server daemon (10.0.0.1:43320).
May 10 09:55:37.449347 containerd[1557]: time="2025-05-10T09:55:37.449285795Z" level=info msg="Container 9f0ec9d1797bbc6a35c734da9792f7eea0384463c2e7693b15c2b31384dbe6bd: CDI devices from CRI Config.CDIDevices: []"
May 10 09:55:37.457450 sshd[4694]: Accepted publickey for core from 10.0.0.1 port 43320 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:55:37.459788 sshd-session[4694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:55:37.465050 systemd-logind[1541]: New session 11 of user core.
May 10 09:55:37.472317 systemd[1]: Started session-11.scope - Session 11 of User core.
May 10 09:55:37.860869 kubelet[2671]: I0510 09:55:37.860785 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-9w4bk" podStartSLOduration=39.860761921 podStartE2EDuration="39.860761921s" podCreationTimestamp="2025-05-10 09:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 09:55:36.730360654 +0000 UTC m=+45.353964440" watchObservedRunningTime="2025-05-10 09:55:37.860761921 +0000 UTC m=+46.484365707"
May 10 09:55:37.871341 sshd[4696]: Connection closed by 10.0.0.1 port 43320
May 10 09:55:37.871761 sshd-session[4694]: pam_unix(sshd:session): session closed for user core
May 10 09:55:37.886875 systemd[1]: sshd@10-10.0.0.34:22-10.0.0.1:43320.service: Deactivated successfully.
May 10 09:55:37.889480 systemd[1]: session-11.scope: Deactivated successfully.
May 10 09:55:37.892323 systemd-logind[1541]: Session 11 logged out. Waiting for processes to exit.
May 10 09:55:37.894014 systemd[1]: Started sshd@11-10.0.0.34:22-10.0.0.1:43324.service - OpenSSH per-connection server daemon (10.0.0.1:43324).
May 10 09:55:37.895163 systemd-logind[1541]: Removed session 11.
May 10 09:55:37.944134 sshd[4710]: Accepted publickey for core from 10.0.0.1 port 43324 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:55:37.946090 sshd-session[4710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:55:37.953531 systemd-logind[1541]: New session 12 of user core.
May 10 09:55:37.960302 systemd[1]: Started session-12.scope - Session 12 of User core.
May 10 09:55:38.037290 containerd[1557]: time="2025-05-10T09:55:38.037230322Z" level=info msg="CreateContainer within sandbox \"38aa93fd416a76a42880c9822c696a5e3b656a04bade0af5bac51f632da833e9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9f0ec9d1797bbc6a35c734da9792f7eea0384463c2e7693b15c2b31384dbe6bd\""
May 10 09:55:38.039738 containerd[1557]: time="2025-05-10T09:55:38.039364651Z" level=info msg="StartContainer for \"9f0ec9d1797bbc6a35c734da9792f7eea0384463c2e7693b15c2b31384dbe6bd\""
May 10 09:55:38.043037 containerd[1557]: time="2025-05-10T09:55:38.040693553Z" level=info msg="connecting to shim 9f0ec9d1797bbc6a35c734da9792f7eea0384463c2e7693b15c2b31384dbe6bd" address="unix:///run/containerd/s/ec2e1f93d9641d4eea24af8590b8dd95b90758e0176856c050de6bb48d077e9a" protocol=ttrpc version=3
May 10 09:55:38.066339 systemd[1]: Started cri-containerd-9f0ec9d1797bbc6a35c734da9792f7eea0384463c2e7693b15c2b31384dbe6bd.scope - libcontainer container 9f0ec9d1797bbc6a35c734da9792f7eea0384463c2e7693b15c2b31384dbe6bd.
May 10 09:55:38.267261 sshd[4715]: Connection closed by 10.0.0.1 port 43324
May 10 09:55:38.272694 sshd-session[4710]: pam_unix(sshd:session): session closed for user core
May 10 09:55:38.284412 systemd[1]: sshd@11-10.0.0.34:22-10.0.0.1:43324.service: Deactivated successfully.
May 10 09:55:38.288315 systemd[1]: session-12.scope: Deactivated successfully.
May 10 09:55:38.289850 systemd-logind[1541]: Session 12 logged out. Waiting for processes to exit.
May 10 09:55:38.293316 systemd[1]: Started sshd@12-10.0.0.34:22-10.0.0.1:43330.service - OpenSSH per-connection server daemon (10.0.0.1:43330).
May 10 09:55:38.296561 systemd-logind[1541]: Removed session 12.
May 10 09:55:38.348650 sshd[4759]: Accepted publickey for core from 10.0.0.1 port 43330 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:55:38.350513 sshd-session[4759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:55:38.355976 systemd-logind[1541]: New session 13 of user core.
May 10 09:55:38.362286 systemd[1]: Started session-13.scope - Session 13 of User core.
May 10 09:55:38.379076 containerd[1557]: time="2025-05-10T09:55:38.379003321Z" level=info msg="StartContainer for \"9f0ec9d1797bbc6a35c734da9792f7eea0384463c2e7693b15c2b31384dbe6bd\" returns successfully"
May 10 09:55:38.546399 sshd[4762]: Connection closed by 10.0.0.1 port 43330
May 10 09:55:38.547554 sshd-session[4759]: pam_unix(sshd:session): session closed for user core
May 10 09:55:38.551955 systemd[1]: sshd@12-10.0.0.34:22-10.0.0.1:43330.service: Deactivated successfully.
May 10 09:55:38.556173 systemd[1]: session-13.scope: Deactivated successfully.
May 10 09:55:38.559449 systemd-logind[1541]: Session 13 logged out. Waiting for processes to exit.
May 10 09:55:38.561872 systemd-logind[1541]: Removed session 13.
May 10 09:55:38.700274 kubelet[2671]: I0510 09:55:38.699329 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 10 09:55:38.715439 kubelet[2671]: I0510 09:55:38.714523 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-949b84484-dd82b" podStartSLOduration=28.062996629 podStartE2EDuration="32.71449876s" podCreationTimestamp="2025-05-10 09:55:06 +0000 UTC" firstStartedPulling="2025-05-10 09:55:32.731247489 +0000 UTC m=+41.354851275" lastFinishedPulling="2025-05-10 09:55:37.38274962 +0000 UTC m=+46.006353406" observedRunningTime="2025-05-10 09:55:38.712250046 +0000 UTC m=+47.335853832" watchObservedRunningTime="2025-05-10 09:55:38.71449876 +0000 UTC m=+47.338102546"
May 10 09:55:38.715439 kubelet[2671]: I0510 09:55:38.714883 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-949b84484-t574n" podStartSLOduration=27.752199737 podStartE2EDuration="32.714876722s" podCreationTimestamp="2025-05-10 09:55:06 +0000 UTC" firstStartedPulling="2025-05-10 09:55:31.705573671 +0000 UTC m=+40.329177457" lastFinishedPulling="2025-05-10 09:55:36.668250656 +0000 UTC m=+45.291854442" observedRunningTime="2025-05-10 09:55:37.864285146 +0000 UTC m=+46.487888932" watchObservedRunningTime="2025-05-10 09:55:38.714876722 +0000 UTC m=+47.338480508"
May 10 09:55:39.393917 containerd[1557]: time="2025-05-10T09:55:39.393848688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:55:39.395123 containerd[1557]: time="2025-05-10T09:55:39.395092770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898"
May 10 09:55:39.396530 containerd[1557]: time="2025-05-10T09:55:39.396490813Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:55:39.398812 containerd[1557]: time="2025-05-10T09:55:39.398768321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:55:39.399455 containerd[1557]: time="2025-05-10T09:55:39.399428874Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 2.016403344s"
May 10 09:55:39.399516 containerd[1557]: time="2025-05-10T09:55:39.399461456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\""
May 10 09:55:39.405829 containerd[1557]: time="2025-05-10T09:55:39.401636001Z" level=info msg="CreateContainer within sandbox \"e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
May 10 09:55:39.423835 containerd[1557]: time="2025-05-10T09:55:39.421581598Z" level=info msg="Container 036a566e188ddf94fafd913d45e4cbb2c67a32bea1418d12faab5ae8543535f1: CDI devices from CRI Config.CDIDevices: []"
May 10 09:55:39.444755 containerd[1557]: time="2025-05-10T09:55:39.444694350Z" level=info msg="CreateContainer within sandbox \"e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"036a566e188ddf94fafd913d45e4cbb2c67a32bea1418d12faab5ae8543535f1\""
May 10 09:55:39.445331 containerd[1557]: time="2025-05-10T09:55:39.445296743Z" level=info msg="StartContainer for \"036a566e188ddf94fafd913d45e4cbb2c67a32bea1418d12faab5ae8543535f1\""
May 10 09:55:39.446964 containerd[1557]: time="2025-05-10T09:55:39.446937293Z" level=info msg="connecting to shim 036a566e188ddf94fafd913d45e4cbb2c67a32bea1418d12faab5ae8543535f1" address="unix:///run/containerd/s/3a1a9ad1cbfc952106bfd8d92b2fc91fa7168f3370d89e874161a259dd89f01a" protocol=ttrpc version=3
May 10 09:55:39.475309 systemd[1]: Started cri-containerd-036a566e188ddf94fafd913d45e4cbb2c67a32bea1418d12faab5ae8543535f1.scope - libcontainer container 036a566e188ddf94fafd913d45e4cbb2c67a32bea1418d12faab5ae8543535f1.
May 10 09:55:39.530871 containerd[1557]: time="2025-05-10T09:55:39.530821306Z" level=info msg="StartContainer for \"036a566e188ddf94fafd913d45e4cbb2c67a32bea1418d12faab5ae8543535f1\" returns successfully"
May 10 09:55:39.532040 containerd[1557]: time="2025-05-10T09:55:39.531985598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\""
May 10 09:55:39.702195 kubelet[2671]: I0510 09:55:39.702049 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 10 09:55:41.832815 containerd[1557]: time="2025-05-10T09:55:41.832756116Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:55:41.833857 containerd[1557]: time="2025-05-10T09:55:41.833824869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773"
May 10 09:55:41.835273 containerd[1557]: time="2025-05-10T09:55:41.835227920Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:55:41.838188 containerd[1557]: time="2025-05-10T09:55:41.838129694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 09:55:41.838912 containerd[1557]: time="2025-05-10T09:55:41.838861791Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.306843992s"
May 10 09:55:41.838955 containerd[1557]: time="2025-05-10T09:55:41.838917636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\""
May 10 09:55:41.841317 containerd[1557]: time="2025-05-10T09:55:41.841279834Z" level=info msg="CreateContainer within sandbox \"e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 10 09:55:41.850647 containerd[1557]: time="2025-05-10T09:55:41.850596074Z" level=info msg="Container 4cf93b89c465a01c4f981f4fd3137287197dbf0fcb5713e4768d9c53e9e49085: CDI devices from CRI Config.CDIDevices: []"
May 10 09:55:41.861924 containerd[1557]: time="2025-05-10T09:55:41.861881863Z" level=info msg="CreateContainer within sandbox \"e6dc3078f5074b925bf398c96f470a31ca4dbfb31a1e1086abb9fc674c2ecc1e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4cf93b89c465a01c4f981f4fd3137287197dbf0fcb5713e4768d9c53e9e49085\""
May 10 09:55:41.862536 containerd[1557]: time="2025-05-10T09:55:41.862510296Z" level=info msg="StartContainer for \"4cf93b89c465a01c4f981f4fd3137287197dbf0fcb5713e4768d9c53e9e49085\""
May 10 09:55:41.864198 containerd[1557]: time="2025-05-10T09:55:41.864113825Z" level=info msg="connecting to shim 4cf93b89c465a01c4f981f4fd3137287197dbf0fcb5713e4768d9c53e9e49085" address="unix:///run/containerd/s/3a1a9ad1cbfc952106bfd8d92b2fc91fa7168f3370d89e874161a259dd89f01a" protocol=ttrpc version=3
May 10 09:55:41.887287 systemd[1]: Started cri-containerd-4cf93b89c465a01c4f981f4fd3137287197dbf0fcb5713e4768d9c53e9e49085.scope - libcontainer container 4cf93b89c465a01c4f981f4fd3137287197dbf0fcb5713e4768d9c53e9e49085.
May 10 09:55:41.939757 containerd[1557]: time="2025-05-10T09:55:41.939699284Z" level=info msg="StartContainer for \"4cf93b89c465a01c4f981f4fd3137287197dbf0fcb5713e4768d9c53e9e49085\" returns successfully"
May 10 09:55:42.534221 kubelet[2671]: I0510 09:55:42.534166 2671 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 10 09:55:42.534221 kubelet[2671]: I0510 09:55:42.534213 2671 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 10 09:55:43.561968 systemd[1]: Started sshd@13-10.0.0.34:22-10.0.0.1:43344.service - OpenSSH per-connection server daemon (10.0.0.1:43344).
May 10 09:55:43.629486 sshd[4864]: Accepted publickey for core from 10.0.0.1 port 43344 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:55:43.631538 sshd-session[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:55:43.637122 systemd-logind[1541]: New session 14 of user core.
May 10 09:55:43.650284 systemd[1]: Started session-14.scope - Session 14 of User core.
May 10 09:55:43.781169 sshd[4866]: Connection closed by 10.0.0.1 port 43344
May 10 09:55:43.782059 sshd-session[4864]: pam_unix(sshd:session): session closed for user core
May 10 09:55:43.787315 systemd[1]: sshd@13-10.0.0.34:22-10.0.0.1:43344.service: Deactivated successfully.
May 10 09:55:43.790030 systemd[1]: session-14.scope: Deactivated successfully.
May 10 09:55:43.790846 systemd-logind[1541]: Session 14 logged out. Waiting for processes to exit.
May 10 09:55:43.792017 systemd-logind[1541]: Removed session 14.
May 10 09:55:48.800481 systemd[1]: Started sshd@14-10.0.0.34:22-10.0.0.1:39274.service - OpenSSH per-connection server daemon (10.0.0.1:39274).
May 10 09:55:48.850752 sshd[4881]: Accepted publickey for core from 10.0.0.1 port 39274 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:55:48.852422 sshd-session[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:55:48.857471 systemd-logind[1541]: New session 15 of user core.
May 10 09:55:48.864286 systemd[1]: Started session-15.scope - Session 15 of User core.
May 10 09:55:48.984443 sshd[4883]: Connection closed by 10.0.0.1 port 39274
May 10 09:55:48.984801 sshd-session[4881]: pam_unix(sshd:session): session closed for user core
May 10 09:55:48.997512 systemd[1]: sshd@14-10.0.0.34:22-10.0.0.1:39274.service: Deactivated successfully.
May 10 09:55:49.000632 systemd[1]: session-15.scope: Deactivated successfully.
May 10 09:55:49.003439 systemd-logind[1541]: Session 15 logged out. Waiting for processes to exit.
May 10 09:55:49.005559 systemd[1]: Started sshd@15-10.0.0.34:22-10.0.0.1:39284.service - OpenSSH per-connection server daemon (10.0.0.1:39284).
May 10 09:55:49.007202 systemd-logind[1541]: Removed session 15.
May 10 09:55:49.063118 sshd[4895]: Accepted publickey for core from 10.0.0.1 port 39284 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:55:49.064636 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:55:49.070410 systemd-logind[1541]: New session 16 of user core.
May 10 09:55:49.081294 systemd[1]: Started session-16.scope - Session 16 of User core.
May 10 09:55:49.378456 sshd[4898]: Connection closed by 10.0.0.1 port 39284
May 10 09:55:49.378786 sshd-session[4895]: pam_unix(sshd:session): session closed for user core
May 10 09:55:49.389315 systemd[1]: sshd@15-10.0.0.34:22-10.0.0.1:39284.service: Deactivated successfully.
May 10 09:55:49.391553 systemd[1]: session-16.scope: Deactivated successfully.
May 10 09:55:49.393262 systemd-logind[1541]: Session 16 logged out. Waiting for processes to exit.
May 10 09:55:49.394656 systemd[1]: Started sshd@16-10.0.0.34:22-10.0.0.1:39294.service - OpenSSH per-connection server daemon (10.0.0.1:39294).
May 10 09:55:49.395766 systemd-logind[1541]: Removed session 16.
May 10 09:55:49.452644 sshd[4908]: Accepted publickey for core from 10.0.0.1 port 39294 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:55:49.454249 sshd-session[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:55:49.459163 systemd-logind[1541]: New session 17 of user core.
May 10 09:55:49.467292 systemd[1]: Started session-17.scope - Session 17 of User core.
May 10 09:55:49.669386 kubelet[2671]: I0510 09:55:49.669232 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 10 09:55:49.710055 containerd[1557]: time="2025-05-10T09:55:49.709928828Z" level=info msg="TaskExit event in podsandbox handler container_id:\"facc87284af4418e572026c8fc843d84a26526716a4dd7dd6043daf423715a47\" id:\"1c0074b3f18d2f742c56dea8bc018aca133fce96afc4b482006ee17b37fe80ba\" pid:4931 exited_at:{seconds:1746870949 nanos:709669810}"
May 10 09:55:49.723038 kubelet[2671]: I0510 09:55:49.722943 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-82ncp" podStartSLOduration=37.730058908 podStartE2EDuration="43.722924877s" podCreationTimestamp="2025-05-10 09:55:06 +0000 UTC" firstStartedPulling="2025-05-10 09:55:35.846778817 +0000 UTC m=+44.470382603" lastFinishedPulling="2025-05-10 09:55:41.839644786 +0000 UTC m=+50.463248572" observedRunningTime="2025-05-10 09:55:42.729661354 +0000 UTC m=+51.353265140" watchObservedRunningTime="2025-05-10 09:55:49.722924877 +0000 UTC m=+58.346528663"
May 10 09:55:49.759460 containerd[1557]: time="2025-05-10T09:55:49.758923770Z" level=info msg="TaskExit event in podsandbox handler container_id:\"facc87284af4418e572026c8fc843d84a26526716a4dd7dd6043daf423715a47\" id:\"a7047ca091af301e8f6c372ac50e1b05709295a6f7e5dc19cfaf8d03bb7b3db4\" pid:4953 exited_at:{seconds:1746870949 nanos:758738782}"
May 10 09:55:50.182544 kubelet[2671]: I0510 09:55:50.182017 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 10 09:55:51.232691 sshd[4911]: Connection closed by 10.0.0.1 port 39294
May 10 09:55:51.234557 sshd-session[4908]: pam_unix(sshd:session): session closed for user core
May 10 09:55:51.248409 systemd[1]: sshd@16-10.0.0.34:22-10.0.0.1:39294.service: Deactivated successfully.
May 10 09:55:51.251236 systemd[1]: session-17.scope: Deactivated successfully.
May 10 09:55:51.251598 systemd[1]: session-17.scope: Consumed 605ms CPU time, 66.5M memory peak.
May 10 09:55:51.255967 systemd-logind[1541]: Session 17 logged out. Waiting for processes to exit.
May 10 09:55:51.265238 systemd[1]: Started sshd@17-10.0.0.34:22-10.0.0.1:39308.service - OpenSSH per-connection server daemon (10.0.0.1:39308).
May 10 09:55:51.268837 systemd-logind[1541]: Removed session 17.
May 10 09:55:51.322362 sshd[4985]: Accepted publickey for core from 10.0.0.1 port 39308 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:55:51.324214 sshd-session[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:55:51.328847 systemd-logind[1541]: New session 18 of user core.
May 10 09:55:51.340278 systemd[1]: Started session-18.scope - Session 18 of User core.
May 10 09:55:51.579049 sshd[4988]: Connection closed by 10.0.0.1 port 39308
May 10 09:55:51.580522 sshd-session[4985]: pam_unix(sshd:session): session closed for user core
May 10 09:55:51.589464 systemd[1]: sshd@17-10.0.0.34:22-10.0.0.1:39308.service: Deactivated successfully.
May 10 09:55:51.592607 systemd[1]: session-18.scope: Deactivated successfully.
May 10 09:55:51.595534 systemd-logind[1541]: Session 18 logged out. Waiting for processes to exit.
May 10 09:55:51.599905 systemd[1]: Started sshd@18-10.0.0.34:22-10.0.0.1:39314.service - OpenSSH per-connection server daemon (10.0.0.1:39314).
May 10 09:55:51.601898 systemd-logind[1541]: Removed session 18.
May 10 09:55:51.652420 sshd[5000]: Accepted publickey for core from 10.0.0.1 port 39314 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:55:51.654397 sshd-session[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:55:51.660675 systemd-logind[1541]: New session 19 of user core.
May 10 09:55:51.665416 systemd[1]: Started session-19.scope - Session 19 of User core.
May 10 09:55:51.781632 sshd[5003]: Connection closed by 10.0.0.1 port 39314
May 10 09:55:51.781981 sshd-session[5000]: pam_unix(sshd:session): session closed for user core
May 10 09:55:51.786873 systemd[1]: sshd@18-10.0.0.34:22-10.0.0.1:39314.service: Deactivated successfully.
May 10 09:55:51.790069 systemd[1]: session-19.scope: Deactivated successfully.
May 10 09:55:51.791795 systemd-logind[1541]: Session 19 logged out. Waiting for processes to exit.
May 10 09:55:51.793551 systemd-logind[1541]: Removed session 19.
May 10 09:55:55.784701 containerd[1557]: time="2025-05-10T09:55:55.784639794Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff0812f962a1b28101760b2f1d04c567aa9b3d0fef034d3d8e15f95c7425e796\" id:\"c8a5d1651c19318bff394a1e4d09010e4ff7f08fd1ab8f486b54f05f11c0bbc9\" pid:5026 exited_at:{seconds:1746870955 nanos:784248498}"
May 10 09:55:56.712739 kubelet[2671]: I0510 09:55:56.712681 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 10 09:55:56.803833 systemd[1]: Started sshd@19-10.0.0.34:22-10.0.0.1:57540.service - OpenSSH per-connection server daemon (10.0.0.1:57540).
May 10 09:55:56.864243 sshd[5047]: Accepted publickey for core from 10.0.0.1 port 57540 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:55:56.865898 sshd-session[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:55:56.870594 systemd-logind[1541]: New session 20 of user core.
May 10 09:55:56.888271 systemd[1]: Started session-20.scope - Session 20 of User core.
May 10 09:55:57.003041 sshd[5049]: Connection closed by 10.0.0.1 port 57540
May 10 09:55:57.003379 sshd-session[5047]: pam_unix(sshd:session): session closed for user core
May 10 09:55:57.008352 systemd[1]: sshd@19-10.0.0.34:22-10.0.0.1:57540.service: Deactivated successfully.
May 10 09:55:57.010669 systemd[1]: session-20.scope: Deactivated successfully.
May 10 09:55:57.011469 systemd-logind[1541]: Session 20 logged out. Waiting for processes to exit.
May 10 09:55:57.012573 systemd-logind[1541]: Removed session 20.
May 10 09:56:02.016590 systemd[1]: Started sshd@20-10.0.0.34:22-10.0.0.1:57556.service - OpenSSH per-connection server daemon (10.0.0.1:57556).
May 10 09:56:02.064287 sshd[5065]: Accepted publickey for core from 10.0.0.1 port 57556 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:56:02.066041 sshd-session[5065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:56:02.070608 systemd-logind[1541]: New session 21 of user core.
May 10 09:56:02.080281 systemd[1]: Started session-21.scope - Session 21 of User core.
May 10 09:56:02.196204 sshd[5067]: Connection closed by 10.0.0.1 port 57556
May 10 09:56:02.196547 sshd-session[5065]: pam_unix(sshd:session): session closed for user core
May 10 09:56:02.201073 systemd[1]: sshd@20-10.0.0.34:22-10.0.0.1:57556.service: Deactivated successfully.
May 10 09:56:02.203519 systemd[1]: session-21.scope: Deactivated successfully.
May 10 09:56:02.204498 systemd-logind[1541]: Session 21 logged out. Waiting for processes to exit.
May 10 09:56:02.205522 systemd-logind[1541]: Removed session 21.
May 10 09:56:07.221303 systemd[1]: Started sshd@21-10.0.0.34:22-10.0.0.1:37234.service - OpenSSH per-connection server daemon (10.0.0.1:37234).
May 10 09:56:07.273636 sshd[5083]: Accepted publickey for core from 10.0.0.1 port 37234 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:56:07.275443 sshd-session[5083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:56:07.280472 systemd-logind[1541]: New session 22 of user core.
May 10 09:56:07.288430 systemd[1]: Started session-22.scope - Session 22 of User core.
May 10 09:56:07.425191 sshd[5085]: Connection closed by 10.0.0.1 port 37234
May 10 09:56:07.425613 sshd-session[5083]: pam_unix(sshd:session): session closed for user core
May 10 09:56:07.429444 systemd[1]: sshd@21-10.0.0.34:22-10.0.0.1:37234.service: Deactivated successfully.
May 10 09:56:07.432088 systemd[1]: session-22.scope: Deactivated successfully.
May 10 09:56:07.433974 systemd-logind[1541]: Session 22 logged out. Waiting for processes to exit.
May 10 09:56:07.435096 systemd-logind[1541]: Removed session 22.
May 10 09:56:12.441694 systemd[1]: Started sshd@22-10.0.0.34:22-10.0.0.1:37236.service - OpenSSH per-connection server daemon (10.0.0.1:37236).
May 10 09:56:12.510613 sshd[5105]: Accepted publickey for core from 10.0.0.1 port 37236 ssh2: RSA SHA256:ZOMyKPM9vG3Y5Dtmxr1HvCP8ZBfjY8jTU8Db0jmo1gs
May 10 09:56:12.512460 sshd-session[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 09:56:12.517014 systemd-logind[1541]: New session 23 of user core.
May 10 09:56:12.525288 systemd[1]: Started session-23.scope - Session 23 of User core.
May 10 09:56:12.648475 sshd[5107]: Connection closed by 10.0.0.1 port 37236
May 10 09:56:12.648828 sshd-session[5105]: pam_unix(sshd:session): session closed for user core
May 10 09:56:12.653824 systemd[1]: sshd@22-10.0.0.34:22-10.0.0.1:37236.service: Deactivated successfully.
May 10 09:56:12.656459 systemd[1]: session-23.scope: Deactivated successfully.
May 10 09:56:12.657449 systemd-logind[1541]: Session 23 logged out. Waiting for processes to exit.
May 10 09:56:12.658572 systemd-logind[1541]: Removed session 23.