May 16 03:42:55.094353 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 22:08:20 -00 2025 May 16 03:42:55.094385 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=5e2f56b68c7f7e65e4df73d074f249f99b5795b677316c47e2ad758e6bd99733 May 16 03:42:55.094397 kernel: BIOS-provided physical RAM map: May 16 03:42:55.094405 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 16 03:42:55.094413 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 16 03:42:55.094423 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 16 03:42:55.094433 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable May 16 03:42:55.094442 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved May 16 03:42:55.094450 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 16 03:42:55.094459 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 16 03:42:55.094467 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable May 16 03:42:55.094476 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 16 03:42:55.094484 kernel: NX (Execute Disable) protection: active May 16 03:42:55.094493 kernel: APIC: Static calls initialized May 16 03:42:55.094504 kernel: SMBIOS 3.0.0 present. May 16 03:42:55.094513 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 May 16 03:42:55.094521 kernel: Hypervisor detected: KVM May 16 03:42:55.094530 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 16 03:42:55.094539 kernel: kvm-clock: using sched offset of 3733172359 cycles May 16 03:42:55.094548 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 16 03:42:55.094559 kernel: tsc: Detected 1996.249 MHz processor May 16 03:42:55.094568 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 16 03:42:55.094579 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 16 03:42:55.094588 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 May 16 03:42:55.094597 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 16 03:42:55.094606 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 16 03:42:55.094615 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 May 16 03:42:55.094625 kernel: ACPI: Early table checksum verification disabled May 16 03:42:55.094637 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) May 16 03:42:55.094646 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 03:42:55.094655 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 03:42:55.094664 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 03:42:55.094673 kernel: ACPI: FACS 0x00000000BFFE0000 000040 May 16 03:42:55.094682 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 16 03:42:55.094691 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) May 16 03:42:55.094700 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] May 16 03:42:55.094709 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] May 16 03:42:55.094720 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] May 16 03:42:55.094729 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] May 16 03:42:55.094738 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] May 16 03:42:55.094750 kernel: No NUMA configuration found May 16 03:42:55.094760 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] May 16 03:42:55.094769 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] May 16 03:42:55.094778 kernel: Zone ranges: May 16 03:42:55.094789 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 16 03:42:55.094799 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 16 03:42:55.094808 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] May 16 03:42:55.094818 kernel: Movable zone start for each node May 16 03:42:55.094827 kernel: Early memory node ranges May 16 03:42:55.094836 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 16 03:42:55.094845 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] May 16 03:42:55.094854 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] May 16 03:42:55.094865 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] May 16 03:42:55.094875 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 16 03:42:55.094884 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 16 03:42:55.094894 kernel: On node 0, zone Normal: 35 pages in unavailable ranges May 16 03:42:55.094903 kernel: ACPI: PM-Timer IO Port: 0x608 May 16 03:42:55.094913 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 16 03:42:55.094922 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 16 03:42:55.094931 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 16 03:42:55.094941 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 16 03:42:55.094952 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 16 03:42:55.094962 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 16 03:42:55.094971 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 16 03:42:55.094980 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 16 03:42:55.094990 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 16 03:42:55.094999 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 16 03:42:55.095008 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices May 16 03:42:55.095017 kernel: Booting paravirtualized kernel on KVM May 16 03:42:55.095027 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 16 03:42:55.095038 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 16 03:42:55.095048 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 May 16 03:42:55.095057 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 16 03:42:55.095066 kernel: pcpu-alloc: [0] 0 1 May 16 03:42:55.095075 kernel: kvm-guest: PV spinlocks disabled, no host support May 16 03:42:55.095086 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=5e2f56b68c7f7e65e4df73d074f249f99b5795b677316c47e2ad758e6bd99733 May 16 03:42:55.095096 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 16 03:42:55.095105 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 16 03:42:55.095116 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 16 03:42:55.095126 kernel: Fallback order for Node 0: 0 May 16 03:42:55.095135 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 May 16 03:42:55.095144 kernel: Policy zone: Normal May 16 03:42:55.095153 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 16 03:42:55.095163 kernel: software IO TLB: area num 2. May 16 03:42:55.095172 kernel: Memory: 3962108K/4193772K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43600K init, 1472K bss, 231404K reserved, 0K cma-reserved) May 16 03:42:55.095182 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 16 03:42:55.095191 kernel: ftrace: allocating 37997 entries in 149 pages May 16 03:42:55.095202 kernel: ftrace: allocated 149 pages with 4 groups May 16 03:42:55.095212 kernel: Dynamic Preempt: voluntary May 16 03:42:55.095221 kernel: rcu: Preemptible hierarchical RCU implementation. May 16 03:42:55.095234 kernel: rcu: RCU event tracing is enabled. May 16 03:42:55.095244 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 16 03:42:55.095254 kernel: Trampoline variant of Tasks RCU enabled. May 16 03:42:55.095264 kernel: Rude variant of Tasks RCU enabled. May 16 03:42:55.095273 kernel: Tracing variant of Tasks RCU enabled. May 16 03:42:55.095283 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 16 03:42:55.095294 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 16 03:42:55.095303 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 16 03:42:55.096374 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 16 03:42:55.096390 kernel: Console: colour VGA+ 80x25 May 16 03:42:55.096399 kernel: printk: console [tty0] enabled May 16 03:42:55.096409 kernel: printk: console [ttyS0] enabled May 16 03:42:55.096419 kernel: ACPI: Core revision 20230628 May 16 03:42:55.096429 kernel: APIC: Switch to symmetric I/O mode setup May 16 03:42:55.096438 kernel: x2apic enabled May 16 03:42:55.096451 kernel: APIC: Switched APIC routing to: physical x2apic May 16 03:42:55.096461 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 16 03:42:55.096471 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 16 03:42:55.096480 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) May 16 03:42:55.096490 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 16 03:42:55.096499 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 16 03:42:55.096509 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 16 03:42:55.096519 kernel: Spectre V2 : Mitigation: Retpolines May 16 03:42:55.096528 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 16 03:42:55.096539 kernel: Speculative Store Bypass: Vulnerable May 16 03:42:55.096549 kernel: x86/fpu: x87 FPU will use FXSAVE May 16 03:42:55.096558 kernel: Freeing SMP alternatives memory: 32K May 16 03:42:55.096568 kernel: pid_max: default: 32768 minimum: 301 May 16 03:42:55.096584 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 16 03:42:55.096595 kernel: landlock: Up and running. May 16 03:42:55.096605 kernel: SELinux: Initializing. May 16 03:42:55.096615 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 16 03:42:55.096625 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 16 03:42:55.096635 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) May 16 03:42:55.096645 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 16 03:42:55.096655 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 16 03:42:55.096667 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 16 03:42:55.096677 kernel: Performance Events: AMD PMU driver. May 16 03:42:55.096687 kernel: ... version: 0 May 16 03:42:55.096697 kernel: ... bit width: 48 May 16 03:42:55.096708 kernel: ... generic registers: 4 May 16 03:42:55.096718 kernel: ... value mask: 0000ffffffffffff May 16 03:42:55.096727 kernel: ... max period: 00007fffffffffff May 16 03:42:55.096738 kernel: ... fixed-purpose events: 0 May 16 03:42:55.096748 kernel: ... event mask: 000000000000000f May 16 03:42:55.096757 kernel: signal: max sigframe size: 1440 May 16 03:42:55.096767 kernel: rcu: Hierarchical SRCU implementation. May 16 03:42:55.096777 kernel: rcu: Max phase no-delay instances is 400. May 16 03:42:55.096787 kernel: smp: Bringing up secondary CPUs ... May 16 03:42:55.096797 kernel: smpboot: x86: Booting SMP configuration: May 16 03:42:55.096808 kernel: .... 
node #0, CPUs: #1 May 16 03:42:55.096818 kernel: smp: Brought up 1 node, 2 CPUs May 16 03:42:55.096828 kernel: smpboot: Max logical packages: 2 May 16 03:42:55.096838 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) May 16 03:42:55.096848 kernel: devtmpfs: initialized May 16 03:42:55.096858 kernel: x86/mm: Memory block size: 128MB May 16 03:42:55.096868 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 16 03:42:55.096878 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 16 03:42:55.096888 kernel: pinctrl core: initialized pinctrl subsystem May 16 03:42:55.096899 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 16 03:42:55.096909 kernel: audit: initializing netlink subsys (disabled) May 16 03:42:55.096920 kernel: audit: type=2000 audit(1747366974.269:1): state=initialized audit_enabled=0 res=1 May 16 03:42:55.096929 kernel: thermal_sys: Registered thermal governor 'step_wise' May 16 03:42:55.096939 kernel: thermal_sys: Registered thermal governor 'user_space' May 16 03:42:55.096949 kernel: cpuidle: using governor menu May 16 03:42:55.096958 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 16 03:42:55.096969 kernel: dca service started, version 1.12.1 May 16 03:42:55.096978 kernel: PCI: Using configuration type 1 for base access May 16 03:42:55.096990 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 16 03:42:55.097000 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 16 03:42:55.097011 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 16 03:42:55.097021 kernel: ACPI: Added _OSI(Module Device) May 16 03:42:55.097030 kernel: ACPI: Added _OSI(Processor Device) May 16 03:42:55.097040 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 16 03:42:55.097050 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 16 03:42:55.097060 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 16 03:42:55.097069 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 16 03:42:55.097081 kernel: ACPI: Interpreter enabled May 16 03:42:55.097091 kernel: ACPI: PM: (supports S0 S3 S5) May 16 03:42:55.097101 kernel: ACPI: Using IOAPIC for interrupt routing May 16 03:42:55.097111 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 16 03:42:55.097121 kernel: PCI: Using E820 reservations for host bridge windows May 16 03:42:55.097131 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 16 03:42:55.097140 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 16 03:42:55.097293 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 16 03:42:55.098215 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 16 03:42:55.098338 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 16 03:42:55.098354 kernel: acpiphp: Slot [3] registered May 16 03:42:55.098364 kernel: acpiphp: Slot [4] registered May 16 03:42:55.098375 kernel: acpiphp: Slot [5] registered May 16 03:42:55.098385 kernel: acpiphp: Slot [6] registered May 16 03:42:55.098394 kernel: acpiphp: Slot [7] registered May 16 03:42:55.098404 kernel: acpiphp: Slot [8] registered May 16 03:42:55.098432 kernel: acpiphp: Slot [9] registered May 16 03:42:55.098464 kernel: acpiphp: Slot [10] registered May 16 03:42:55.098501 
kernel: acpiphp: Slot [11] registered May 16 03:42:55.098534 kernel: acpiphp: Slot [12] registered May 16 03:42:55.098567 kernel: acpiphp: Slot [13] registered May 16 03:42:55.098601 kernel: acpiphp: Slot [14] registered May 16 03:42:55.098640 kernel: acpiphp: Slot [15] registered May 16 03:42:55.098674 kernel: acpiphp: Slot [16] registered May 16 03:42:55.098711 kernel: acpiphp: Slot [17] registered May 16 03:42:55.098742 kernel: acpiphp: Slot [18] registered May 16 03:42:55.098785 kernel: acpiphp: Slot [19] registered May 16 03:42:55.098818 kernel: acpiphp: Slot [20] registered May 16 03:42:55.098856 kernel: acpiphp: Slot [21] registered May 16 03:42:55.098891 kernel: acpiphp: Slot [22] registered May 16 03:42:55.098924 kernel: acpiphp: Slot [23] registered May 16 03:42:55.098956 kernel: acpiphp: Slot [24] registered May 16 03:42:55.098993 kernel: acpiphp: Slot [25] registered May 16 03:42:55.099025 kernel: acpiphp: Slot [26] registered May 16 03:42:55.099059 kernel: acpiphp: Slot [27] registered May 16 03:42:55.099102 kernel: acpiphp: Slot [28] registered May 16 03:42:55.099135 kernel: acpiphp: Slot [29] registered May 16 03:42:55.099157 kernel: acpiphp: Slot [30] registered May 16 03:42:55.099167 kernel: acpiphp: Slot [31] registered May 16 03:42:55.099177 kernel: PCI host bridge to bus 0000:00 May 16 03:42:55.099285 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 16 03:42:55.101512 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 16 03:42:55.101625 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 16 03:42:55.101727 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 16 03:42:55.101819 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] May 16 03:42:55.101911 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 16 03:42:55.102041 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 16 03:42:55.102161 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 16 03:42:55.102274 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 May 16 03:42:55.103442 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] May 16 03:42:55.103613 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 16 03:42:55.105454 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 16 03:42:55.105557 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 16 03:42:55.105651 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 16 03:42:55.105753 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 May 16 03:42:55.105858 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI May 16 03:42:55.105967 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB May 16 03:42:55.106077 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 May 16 03:42:55.106183 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] May 16 03:42:55.106288 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] May 16 03:42:55.108439 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] May 16 03:42:55.108592 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] May 16 03:42:55.108693 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 16 03:42:55.108806 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 May 16 03:42:55.108905 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] May 16 03:42:55.109002 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] May 16 03:42:55.109099 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] May 16 03:42:55.109196 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] May 16 03:42:55.109302 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 May 16 03:42:55.109432 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] May 16 03:42:55.109545 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] May 16 03:42:55.109651 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] May 16 03:42:55.109766 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 May 16 03:42:55.109871 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] May 16 03:42:55.109975 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] May 16 03:42:55.110086 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 May 16 03:42:55.110192 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] May 16 03:42:55.110304 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] May 16 03:42:55.110928 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] May 16 03:42:55.110945 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 16 03:42:55.110955 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 16 03:42:55.110965 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 16 03:42:55.110975 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 16 03:42:55.110985 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 16 03:42:55.110995 kernel: iommu: Default domain type: Translated May 16 03:42:55.111009 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 16 03:42:55.111019 kernel: PCI: Using ACPI for IRQ routing May 16 03:42:55.111029 kernel: PCI: pci_cache_line_size set to 64 bytes May 16 03:42:55.111039 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 16 03:42:55.111049 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] May 16 03:42:55.111153 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device May 16 03:42:55.111256 kernel: pci 0000:00:02.0: vgaarb: bridge control possible May 16 03:42:55.111411 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 16 03:42:55.111427 kernel: vgaarb: loaded May 16 03:42:55.111441 kernel: clocksource: Switched to clocksource kvm-clock May 16 03:42:55.111452 kernel: VFS: Disk quotas dquot_6.6.0 May 16 03:42:55.111461 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 16 03:42:55.111471 kernel: pnp: PnP ACPI init May 16 03:42:55.111596 kernel: pnp 00:03: [dma 2] May 16 03:42:55.111614 kernel: pnp: PnP ACPI: found 5 devices May 16 03:42:55.111627 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 16 03:42:55.111636 kernel: NET: Registered PF_INET protocol family May 16 03:42:55.111649 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 16 03:42:55.111658 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 16 03:42:55.111668 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 16 03:42:55.111677 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 16 03:42:55.111686 kernel: TCP bind hash table entries: 
32768 (order: 8, 1048576 bytes, linear) May 16 03:42:55.111695 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 16 03:42:55.111705 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 16 03:42:55.111714 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 16 03:42:55.111723 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 16 03:42:55.111734 kernel: NET: Registered PF_XDP protocol family May 16 03:42:55.111821 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 16 03:42:55.111907 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 16 03:42:55.111989 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 16 03:42:55.112070 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] May 16 03:42:55.112152 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] May 16 03:42:55.112249 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release May 16 03:42:55.112379 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 16 03:42:55.112398 kernel: PCI: CLS 0 bytes, default 64 May 16 03:42:55.112408 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 16 03:42:55.112417 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) May 16 03:42:55.112426 kernel: Initialise system trusted keyrings May 16 03:42:55.112435 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 16 03:42:55.112445 kernel: Key type asymmetric registered May 16 03:42:55.112454 kernel: Asymmetric key parser 'x509' registered May 16 03:42:55.112463 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 16 03:42:55.112474 kernel: io scheduler mq-deadline registered May 16 03:42:55.112483 kernel: io scheduler kyber registered May 16 03:42:55.112492 kernel: io scheduler bfq registered May 16 03:42:55.112501 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 16 03:42:55.112511 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 May 16 03:42:55.112521 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 May 16 03:42:55.112530 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 16 03:42:55.112539 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 16 03:42:55.112548 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 16 03:42:55.112557 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 16 03:42:55.112568 kernel: random: crng init done May 16 03:42:55.112578 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 16 03:42:55.112587 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 16 03:42:55.112596 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 16 03:42:55.112690 kernel: rtc_cmos 00:04: RTC can wake from S4 May 16 03:42:55.112705 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 16 03:42:55.112789 kernel: rtc_cmos 00:04: registered as rtc0 May 16 03:42:55.112874 kernel: rtc_cmos 00:04: setting system clock to 2025-05-16T03:42:54 UTC (1747366974) May 16 03:42:55.112964 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 16 03:42:55.112978 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 16 03:42:55.112987 kernel: NET: Registered PF_INET6 protocol family May 16 03:42:55.112996 kernel: Segment Routing with IPv6 May 16 03:42:55.113005 kernel: In-situ OAM (IOAM) with IPv6 May 16 03:42:55.113015 kernel: NET: Registered PF_PACKET 
protocol family May 16 03:42:55.113024 kernel: Key type dns_resolver registered May 16 03:42:55.113033 kernel: IPI shorthand broadcast: enabled May 16 03:42:55.113042 kernel: sched_clock: Marking stable (1056007780, 180215181)->(1267691956, -31468995) May 16 03:42:55.113054 kernel: registered taskstats version 1 May 16 03:42:55.113064 kernel: Loading compiled-in X.509 certificates May 16 03:42:55.113073 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 36d9e3bf63b9b28466bcfa7a508d814673a33a26' May 16 03:42:55.113082 kernel: Key type .fscrypt registered May 16 03:42:55.113091 kernel: Key type fscrypt-provisioning registered May 16 03:42:55.113100 kernel: ima: No TPM chip found, activating TPM-bypass! May 16 03:42:55.113109 kernel: ima: Allocated hash algorithm: sha1 May 16 03:42:55.113118 kernel: ima: No architecture policies found May 16 03:42:55.113129 kernel: clk: Disabling unused clocks May 16 03:42:55.113138 kernel: Freeing unused kernel image (initmem) memory: 43600K May 16 03:42:55.113148 kernel: Write protecting the kernel read-only data: 40960k May 16 03:42:55.113157 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K May 16 03:42:55.113166 kernel: Run /init as init process May 16 03:42:55.113175 kernel: with arguments: May 16 03:42:55.113184 kernel: /init May 16 03:42:55.113193 kernel: with environment: May 16 03:42:55.113201 kernel: HOME=/ May 16 03:42:55.113212 kernel: TERM=linux May 16 03:42:55.113221 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 16 03:42:55.113231 systemd[1]: Successfully made /usr/ read-only. May 16 03:42:55.113244 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 16 03:42:55.113255 systemd[1]: Detected virtualization kvm. May 16 03:42:55.113264 systemd[1]: Detected architecture x86-64. May 16 03:42:55.113275 systemd[1]: Running in initrd. May 16 03:42:55.113288 systemd[1]: No hostname configured, using default hostname. May 16 03:42:55.113299 systemd[1]: Hostname set to . May 16 03:42:55.113309 systemd[1]: Initializing machine ID from VM UUID. May 16 03:42:55.113354 systemd[1]: Queued start job for default target initrd.target. May 16 03:42:55.113366 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 03:42:55.113377 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 03:42:55.113388 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 16 03:42:55.113409 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 03:42:55.113422 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 16 03:42:55.113433 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 16 03:42:55.113445 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 16 03:42:55.113457 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
May 16 03:42:55.113467 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 03:42:55.113480 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 03:42:55.113491 systemd[1]: Reached target paths.target - Path Units. May 16 03:42:55.113503 systemd[1]: Reached target slices.target - Slice Units. May 16 03:42:55.113513 systemd[1]: Reached target swap.target - Swaps. May 16 03:42:55.113524 systemd[1]: Reached target timers.target - Timer Units. May 16 03:42:55.113534 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 16 03:42:55.113546 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 03:42:55.113557 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 16 03:42:55.113569 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 16 03:42:55.113580 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 03:42:55.113591 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 03:42:55.113602 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 16 03:42:55.113613 systemd[1]: Reached target sockets.target - Socket Units. May 16 03:42:55.113624 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 16 03:42:55.113636 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 03:42:55.113646 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 16 03:42:55.113657 systemd[1]: Starting systemd-fsck-usr.service... May 16 03:42:55.113670 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 03:42:55.113681 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 03:42:55.113692 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 03:42:55.113703 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 16 03:42:55.113714 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 03:42:55.113726 systemd[1]: Finished systemd-fsck-usr.service. May 16 03:42:55.113738 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 16 03:42:55.113772 systemd-journald[184]: Collecting audit messages is disabled. May 16 03:42:55.113803 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 16 03:42:55.113814 kernel: Bridge firewalling registered May 16 03:42:55.113825 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 03:42:55.113836 systemd-journald[184]: Journal started May 16 03:42:55.113863 systemd-journald[184]: Runtime Journal (/run/log/journal/2e75342014d843fb87d41623e9e51199) is 8M, max 78.2M, 70.2M free. May 16 03:42:55.065128 systemd-modules-load[186]: Inserted module 'overlay' May 16 03:42:55.152042 systemd[1]: Started systemd-journald.service - Journal Service. May 16 03:42:55.103827 systemd-modules-load[186]: Inserted module 'br_netfilter' May 16 03:42:55.152736 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 03:42:55.153739 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 03:42:55.157731 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
May 16 03:42:55.160448 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 03:42:55.161614 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 16 03:42:55.165274 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 03:42:55.187381 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 03:42:55.190601 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 03:42:55.192493 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 03:42:55.193218 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 03:42:55.195728 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 16 03:42:55.209440 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 03:42:55.219676 dracut-cmdline[220]: dracut-dracut-053 May 16 03:42:55.221768 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=5e2f56b68c7f7e65e4df73d074f249f99b5795b677316c47e2ad758e6bd99733 May 16 03:42:55.252021 systemd-resolved[221]: Positive Trust Anchors: May 16 03:42:55.252035 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 03:42:55.252075 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 03:42:55.258491 systemd-resolved[221]: Defaulting to hostname 'linux'. May 16 03:42:55.259424 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 03:42:55.260242 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 03:42:55.286395 kernel: SCSI subsystem initialized May 16 03:42:55.297374 kernel: Loading iSCSI transport class v2.0-870. May 16 03:42:55.310389 kernel: iscsi: registered transport (tcp) May 16 03:42:55.333915 kernel: iscsi: registered transport (qla4xxx) May 16 03:42:55.333979 kernel: QLogic iSCSI HBA Driver May 16 03:42:55.390246 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 16 03:42:55.393635 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 16 03:42:55.451734 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 16 03:42:55.451809 kernel: device-mapper: uevent: version 1.0.3 May 16 03:42:55.454534 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 16 03:42:55.511365 kernel: raid6: sse2x4 gen() 5692 MB/s May 16 03:42:55.530427 kernel: raid6: sse2x2 gen() 6382 MB/s May 16 03:42:55.548904 kernel: raid6: sse2x1 gen() 9166 MB/s May 16 03:42:55.548982 kernel: raid6: using algorithm sse2x1 gen() 9166 MB/s May 16 03:42:55.567839 kernel: raid6: .... xor() 7396 MB/s, rmw enabled May 16 03:42:55.567902 kernel: raid6: using ssse3x2 recovery algorithm May 16 03:42:55.591062 kernel: xor: measuring software checksum speed May 16 03:42:55.591126 kernel: prefetch64-sse : 17022 MB/sec May 16 03:42:55.591630 kernel: generic_sse : 15502 MB/sec May 16 03:42:55.594033 kernel: xor: using function: prefetch64-sse (17022 MB/sec) May 16 03:42:55.777394 kernel: Btrfs loaded, zoned=no, fsverity=no May 16 03:42:55.793523 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 16 03:42:55.799272 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 03:42:55.827221 systemd-udevd[404]: Using default interface naming scheme 'v255'. May 16 03:42:55.832674 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 03:42:55.838672 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 16 03:42:55.868583 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation May 16 03:42:55.923007 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 16 03:42:55.927404 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 03:42:56.019705 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 03:42:56.027779 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 16 03:42:56.061122 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 16 03:42:56.064382 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 16 03:42:56.064978 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 03:42:56.067782 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 03:42:56.070052 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 16 03:42:56.106353 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues May 16 03:42:56.106394 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 16 03:42:56.114614 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) May 16 03:42:56.132870 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 16 03:42:56.132913 kernel: GPT:17805311 != 20971519 May 16 03:42:56.132926 kernel: GPT:Alternate GPT header not at the end of the disk. May 16 03:42:56.136569 kernel: GPT:17805311 != 20971519 May 16 03:42:56.136609 kernel: GPT: Use GNU Parted to correct GPT errors. May 16 03:42:56.140437 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 03:42:56.162175 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 03:42:56.162466 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 03:42:56.166008 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 03:42:56.167674 kernel: libata version 3.00 loaded. 
May 16 03:42:56.166892 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 03:42:56.167043 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 03:42:56.169958 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 16 03:42:56.172661 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 03:42:56.180467 kernel: ata_piix 0000:00:01.1: version 2.13 May 16 03:42:56.180673 kernel: scsi host0: ata_piix May 16 03:42:56.180830 kernel: scsi host1: ata_piix May 16 03:42:56.179524 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 16 03:42:56.196209 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 May 16 03:42:56.196244 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 May 16 03:42:56.203347 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (463) May 16 03:42:56.217362 kernel: BTRFS: device fsid a728581e-9e7f-4655-895a-4f66e17e3645 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (468) May 16 03:42:56.253130 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 16 03:42:56.275200 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 03:42:56.295192 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 16 03:42:56.306248 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 16 03:42:56.315005 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 16 03:42:56.315601 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 16 03:42:56.319523 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 16 03:42:56.321491 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 03:42:56.348875 disk-uuid[503]: Primary Header is updated. May 16 03:42:56.348875 disk-uuid[503]: Secondary Entries is updated. May 16 03:42:56.348875 disk-uuid[503]: Secondary Header is updated. May 16 03:42:56.355011 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 03:42:56.359334 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 03:42:56.375385 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 03:42:57.375384 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 03:42:57.376596 disk-uuid[510]: The operation has completed successfully. May 16 03:42:57.454544 systemd[1]: disk-uuid.service: Deactivated successfully. May 16 03:42:57.454636 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 16 03:42:57.503272 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 16 03:42:57.524261 sh[523]: Success May 16 03:42:57.545467 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" May 16 03:42:57.624689 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 16 03:42:57.631500 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 16 03:42:57.653679 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 16 03:42:57.683610 kernel: BTRFS info (device dm-0): first mount of filesystem a728581e-9e7f-4655-895a-4f66e17e3645 May 16 03:42:57.683690 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 16 03:42:57.688295 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 16 03:42:57.693186 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 16 03:42:57.696918 kernel: BTRFS info (device dm-0): using free space tree May 16 03:42:57.717125 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 16 03:42:57.719268 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 16 03:42:57.722827 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 16 03:42:57.728550 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 16 03:42:57.769391 kernel: BTRFS info (device vda6): first mount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22 May 16 03:42:57.777619 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 03:42:57.777662 kernel: BTRFS info (device vda6): using free space tree May 16 03:42:57.788585 kernel: BTRFS info (device vda6): auto enabling async discard May 16 03:42:57.796392 kernel: BTRFS info (device vda6): last unmount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22 May 16 03:42:57.805644 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 16 03:42:57.811068 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 16 03:42:57.867000 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 03:42:57.870811 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 03:42:57.918705 systemd-networkd[703]: lo: Link UP May 16 03:42:57.919347 systemd-networkd[703]: lo: Gained carrier May 16 03:42:57.920838 systemd-networkd[703]: Enumeration completed May 16 03:42:57.920947 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 03:42:57.921249 systemd-networkd[703]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 03:42:57.921253 systemd-networkd[703]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 03:42:57.922056 systemd-networkd[703]: eth0: Link UP May 16 03:42:57.922060 systemd-networkd[703]: eth0: Gained carrier May 16 03:42:57.922068 systemd-networkd[703]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 03:42:57.922670 systemd[1]: Reached target network.target - Network. May 16 03:42:57.944388 systemd-networkd[703]: eth0: DHCPv4 address 172.24.4.212/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 16 03:42:57.966960 ignition[646]: Ignition 2.20.0 May 16 03:42:57.966973 ignition[646]: Stage: fetch-offline May 16 03:42:57.967012 ignition[646]: no configs at "/usr/lib/ignition/base.d" May 16 03:42:57.967022 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 16 03:42:57.969170 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 16 03:42:57.967119 ignition[646]: parsed url from cmdline: "" May 16 03:42:57.971487 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 16 03:42:57.967123 ignition[646]: no config URL provided May 16 03:42:57.967128 ignition[646]: reading system config file "/usr/lib/ignition/user.ign" May 16 03:42:57.967137 ignition[646]: no config at "/usr/lib/ignition/user.ign" May 16 03:42:57.967142 ignition[646]: failed to fetch config: resource requires networking May 16 03:42:57.967348 ignition[646]: Ignition finished successfully May 16 03:42:57.990423 ignition[712]: Ignition 2.20.0 May 16 03:42:57.990437 ignition[712]: Stage: fetch May 16 03:42:57.990614 ignition[712]: no configs at "/usr/lib/ignition/base.d" May 16 03:42:57.990626 ignition[712]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 16 03:42:57.990714 ignition[712]: parsed url from cmdline: "" May 16 03:42:57.990718 ignition[712]: no config URL provided May 16 03:42:57.990724 ignition[712]: reading system config file "/usr/lib/ignition/user.ign" May 16 03:42:57.990732 ignition[712]: no config at "/usr/lib/ignition/user.ign" May 16 03:42:57.990858 ignition[712]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 May 16 03:42:57.990931 ignition[712]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... May 16 03:42:57.990961 ignition[712]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... May 16 03:42:58.264472 systemd-resolved[221]: Detected conflict on linux IN A 172.24.4.212 May 16 03:42:58.264492 systemd-resolved[221]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. May 16 03:42:58.372298 ignition[712]: GET result: OK May 16 03:42:58.372564 ignition[712]: parsing config with SHA512: c8be7910c1d7397e7533ed1f875954293b26baf7809903f4686286a1eb25a85f23ce55142b8322c17264a736d655301fb007695deb4478a3ddaf7a11746bfddd May 16 03:42:58.383941 unknown[712]: fetched base config from "system" May 16 03:42:58.383968 unknown[712]: fetched base config from "system" May 16 03:42:58.383982 unknown[712]: fetched user config from "openstack" May 16 03:42:58.385682 ignition[712]: fetch: fetch complete May 16 03:42:58.389064 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 16 03:42:58.385699 ignition[712]: fetch: fetch passed May 16 03:42:58.385847 ignition[712]: Ignition finished successfully May 16 03:42:58.395655 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 16 03:42:58.443729 ignition[718]: Ignition 2.20.0 May 16 03:42:58.443758 ignition[718]: Stage: kargs May 16 03:42:58.444176 ignition[718]: no configs at "/usr/lib/ignition/base.d" May 16 03:42:58.444203 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 16 03:42:58.446609 ignition[718]: kargs: kargs passed May 16 03:42:58.452155 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 16 03:42:58.446715 ignition[718]: Ignition finished successfully May 16 03:42:58.456650 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 16 03:42:58.500725 ignition[725]: Ignition 2.20.0 May 16 03:42:58.500754 ignition[725]: Stage: disks May 16 03:42:58.501200 ignition[725]: no configs at "/usr/lib/ignition/base.d" May 16 03:42:58.501230 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 16 03:42:58.503702 ignition[725]: disks: disks passed May 16 03:42:58.508228 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 16 03:42:58.503808 ignition[725]: Ignition finished successfully May 16 03:42:58.511949 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
May 16 03:42:58.514721 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 16 03:42:58.517782 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 03:42:58.520729 systemd[1]: Reached target sysinit.target - System Initialization. May 16 03:42:58.524070 systemd[1]: Reached target basic.target - Basic System. May 16 03:42:58.531545 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 16 03:42:58.579625 systemd-fsck[734]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 16 03:42:58.589409 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 16 03:42:58.594523 systemd[1]: Mounting sysroot.mount - /sysroot... May 16 03:42:58.742355 kernel: EXT4-fs (vda9): mounted filesystem f27adc75-a467-4bfb-9c02-79a2879452a3 r/w with ordered data mode. Quota mode: none. May 16 03:42:58.743215 systemd[1]: Mounted sysroot.mount - /sysroot. May 16 03:42:58.744755 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 16 03:42:58.747438 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 03:42:58.751493 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 16 03:42:58.752263 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 16 03:42:58.758561 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... May 16 03:42:58.760006 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 16 03:42:58.761193 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 16 03:42:58.768015 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 16 03:42:58.774434 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 16 03:42:58.785292 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (742) May 16 03:42:58.791390 kernel: BTRFS info (device vda6): first mount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22 May 16 03:42:58.791480 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 03:42:58.791512 kernel: BTRFS info (device vda6): using free space tree May 16 03:42:58.800434 kernel: BTRFS info (device vda6): auto enabling async discard May 16 03:42:58.803530 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 16 03:42:59.476531 initrd-setup-root[770]: cut: /sysroot/etc/passwd: No such file or directory May 16 03:42:59.491141 initrd-setup-root[777]: cut: /sysroot/etc/group: No such file or directory May 16 03:42:59.502442 initrd-setup-root[784]: cut: /sysroot/etc/shadow: No such file or directory May 16 03:42:59.512999 initrd-setup-root[791]: cut: /sysroot/etc/gshadow: No such file or directory May 16 03:42:59.717821 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 16 03:42:59.722019 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 16 03:42:59.724691 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 16 03:42:59.755672 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 16 03:42:59.761211 kernel: BTRFS info (device vda6): last unmount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22 May 16 03:42:59.794287 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 16 03:42:59.803935 ignition[860]: INFO : Ignition 2.20.0 May 16 03:42:59.803935 ignition[860]: INFO : Stage: mount May 16 03:42:59.803935 ignition[860]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 03:42:59.803935 ignition[860]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 16 03:42:59.810735 ignition[860]: INFO : mount: mount passed May 16 03:42:59.810735 ignition[860]: INFO : Ignition finished successfully May 16 03:42:59.806246 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 16 03:42:59.862959 systemd-networkd[703]: eth0: Gained IPv6LL May 16 03:43:06.536260 coreos-metadata[744]: May 16 03:43:06.536 WARN failed to locate config-drive, using the metadata service API instead May 16 03:43:06.583681 coreos-metadata[744]: May 16 03:43:06.583 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 16 03:43:06.600204 coreos-metadata[744]: May 16 03:43:06.600 INFO Fetch successful May 16 03:43:06.601494 coreos-metadata[744]: May 16 03:43:06.600 INFO wrote hostname ci-4284-0-0-n-184e873f92.novalocal to /sysroot/etc/hostname May 16 03:43:06.603034 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. May 16 03:43:06.603134 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. May 16 03:43:06.608423 systemd[1]: Starting ignition-files.service - Ignition (files)... May 16 03:43:06.628648 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 03:43:06.649412 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (875) May 16 03:43:06.654668 kernel: BTRFS info (device vda6): first mount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22 May 16 03:43:06.654730 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 03:43:06.656462 kernel: BTRFS info (device vda6): using free space tree May 16 03:43:06.664375 kernel: BTRFS info (device vda6): auto enabling async discard May 16 03:43:06.668529 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 16 03:43:06.695245 ignition[893]: INFO : Ignition 2.20.0
May 16 03:43:06.695245 ignition[893]: INFO : Stage: files
May 16 03:43:06.695245 ignition[893]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 03:43:06.695245 ignition[893]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 16 03:43:06.701675 ignition[893]: DEBUG : files: compiled without relabeling support, skipping
May 16 03:43:06.701675 ignition[893]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 16 03:43:06.701675 ignition[893]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 16 03:43:06.707254 ignition[893]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 16 03:43:06.707254 ignition[893]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 16 03:43:06.707254 ignition[893]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 16 03:43:06.707254 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
May 16 03:43:06.707254 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
May 16 03:43:06.702646 unknown[893]: wrote ssh authorized keys file for user: core
May 16 03:43:06.772658 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 16 03:43:07.149348 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
May 16 03:43:07.149348 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 16 03:43:07.151767 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 16 03:43:07.151767 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 16 03:43:07.151767 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 16 03:43:07.151767 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 16 03:43:07.151767 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 16 03:43:07.151767 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 16 03:43:07.163714 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 16 03:43:07.163714 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 16 03:43:07.163714 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 16 03:43:07.163714 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 16 03:43:07.163714 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 16 03:43:07.163714 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 16 03:43:07.163714 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
May 16 03:43:07.895140 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 16 03:43:09.691029 ignition[893]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 16 03:43:09.692866 ignition[893]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 16 03:43:09.694411 ignition[893]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 16 03:43:09.695804 ignition[893]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 16 03:43:09.695804 ignition[893]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 16 03:43:09.695804 ignition[893]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 16 03:43:09.695804 ignition[893]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 16 03:43:09.695804 ignition[893]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 16 03:43:09.695804 ignition[893]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 16 03:43:09.695804 ignition[893]: INFO : files: files passed
May 16 03:43:09.695804 ignition[893]: INFO : Ignition finished successfully
May 16 03:43:09.697245 systemd[1]: Finished ignition-files.service - Ignition (files).
May 16 03:43:09.703620 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 16 03:43:09.709025 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 16 03:43:09.719416 systemd[1]: ignition-quench.service: Deactivated successfully.
May 16 03:43:09.719539 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 16 03:43:09.727511 initrd-setup-root-after-ignition[923]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 03:43:09.729012 initrd-setup-root-after-ignition[927]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 03:43:09.730304 initrd-setup-root-after-ignition[923]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 16 03:43:09.730153 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 03:43:09.731375 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 16 03:43:09.734439 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 16 03:43:09.776007 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 16 03:43:09.776801 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 16 03:43:09.777720 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 16 03:43:09.778299 systemd[1]: Reached target initrd.target - Initrd Default Target. May 16 03:43:09.779667 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 16 03:43:09.781470 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 16 03:43:09.802597 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 03:43:09.804922 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 16 03:43:09.830419 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 16 03:43:09.831225 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 03:43:09.832872 systemd[1]: Stopped target timers.target - Timer Units. May 16 03:43:09.835241 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 16 03:43:09.835638 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 03:43:09.838113 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 16 03:43:09.839757 systemd[1]: Stopped target basic.target - Basic System. May 16 03:43:09.842144 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 16 03:43:09.844416 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 16 03:43:09.846581 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 16 03:43:09.849014 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 16 03:43:09.851509 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 16 03:43:09.853990 systemd[1]: Stopped target sysinit.target - System Initialization. May 16 03:43:09.856403 systemd[1]: Stopped target local-fs.target - Local File Systems. May 16 03:43:09.858820 systemd[1]: Stopped target swap.target - Swaps. May 16 03:43:09.860979 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 16 03:43:09.861309 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 16 03:43:09.863917 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 16 03:43:09.865624 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 03:43:09.867691 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 16 03:43:09.867958 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 03:43:09.870211 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 16 03:43:09.870644 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 16 03:43:09.873747 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 16 03:43:09.874089 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 03:43:09.876701 systemd[1]: ignition-files.service: Deactivated successfully. May 16 03:43:09.877005 systemd[1]: Stopped ignition-files.service - Ignition (files). May 16 03:43:09.882749 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 16 03:43:09.890603 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 16 03:43:09.892501 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 16 03:43:09.892854 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 16 03:43:09.898982 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
May 16 03:43:09.899848 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 16 03:43:09.906195 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 16 03:43:09.907057 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 16 03:43:09.917892 ignition[947]: INFO : Ignition 2.20.0 May 16 03:43:09.917892 ignition[947]: INFO : Stage: umount May 16 03:43:09.920437 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 03:43:09.920437 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 16 03:43:09.920437 ignition[947]: INFO : umount: umount passed May 16 03:43:09.920437 ignition[947]: INFO : Ignition finished successfully May 16 03:43:09.920668 systemd[1]: ignition-mount.service: Deactivated successfully. May 16 03:43:09.920758 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 16 03:43:09.923023 systemd[1]: ignition-disks.service: Deactivated successfully. May 16 03:43:09.923084 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 16 03:43:09.923615 systemd[1]: ignition-kargs.service: Deactivated successfully. May 16 03:43:09.923662 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 16 03:43:09.924221 systemd[1]: ignition-fetch.service: Deactivated successfully. May 16 03:43:09.924265 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 16 03:43:09.926510 systemd[1]: Stopped target network.target - Network. May 16 03:43:09.927028 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 16 03:43:09.927076 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 16 03:43:09.928404 systemd[1]: Stopped target paths.target - Path Units. May 16 03:43:09.929005 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 16 03:43:09.933373 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 03:43:09.934232 systemd[1]: Stopped target slices.target - Slice Units. May 16 03:43:09.935302 systemd[1]: Stopped target sockets.target - Socket Units. May 16 03:43:09.936561 systemd[1]: iscsid.socket: Deactivated successfully. May 16 03:43:09.936600 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 16 03:43:09.937534 systemd[1]: iscsiuio.socket: Deactivated successfully. May 16 03:43:09.937565 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 03:43:09.938674 systemd[1]: ignition-setup.service: Deactivated successfully. May 16 03:43:09.938718 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 16 03:43:09.939887 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 16 03:43:09.939932 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 16 03:43:09.940954 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 16 03:43:09.941966 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 16 03:43:09.944072 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 16 03:43:09.945956 systemd[1]: systemd-resolved.service: Deactivated successfully. May 16 03:43:09.946074 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 16 03:43:09.949053 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 16 03:43:09.949259 systemd[1]: sysroot-boot.service: Deactivated successfully. 
May 16 03:43:09.950168 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 16 03:43:09.951126 systemd[1]: systemd-networkd.service: Deactivated successfully. May 16 03:43:09.951215 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 16 03:43:09.953131 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 16 03:43:09.954367 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 16 03:43:09.954421 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 16 03:43:09.954964 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 16 03:43:09.955007 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 16 03:43:09.957413 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 16 03:43:09.961576 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 16 03:43:09.961634 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 03:43:09.962167 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 03:43:09.962208 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 03:43:09.963162 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 16 03:43:09.963205 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 16 03:43:09.964227 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 16 03:43:09.964272 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 03:43:09.966458 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 03:43:09.968291 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 16 03:43:09.968387 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 16 03:43:09.977819 systemd[1]: systemd-udevd.service: Deactivated successfully. May 16 03:43:09.978528 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 03:43:09.979521 systemd[1]: network-cleanup.service: Deactivated successfully. May 16 03:43:09.979627 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 16 03:43:09.980954 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 16 03:43:09.981004 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 16 03:43:09.982078 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 16 03:43:09.982110 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 16 03:43:09.983134 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 16 03:43:09.983182 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 16 03:43:09.984766 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 16 03:43:09.984807 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 16 03:43:09.985843 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 03:43:09.985883 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 03:43:09.988420 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 16 03:43:09.989334 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
May 16 03:43:09.989393 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 03:43:09.992902 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 16 03:43:09.992947 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 03:43:09.993989 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 16 03:43:09.994034 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 16 03:43:09.995183 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 03:43:09.995224 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 03:43:09.999378 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 16 03:43:09.999437 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 16 03:43:10.002528 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 16 03:43:10.002634 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 16 03:43:10.003981 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 16 03:43:10.005815 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 16 03:43:10.026594 systemd[1]: Switching root. May 16 03:43:10.063495 systemd-journald[184]: Journal stopped May 16 03:43:12.009751 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). May 16 03:43:12.009802 kernel: SELinux: policy capability network_peer_controls=1 May 16 03:43:12.009819 kernel: SELinux: policy capability open_perms=1 May 16 03:43:12.009831 kernel: SELinux: policy capability extended_socket_class=1 May 16 03:43:12.009842 kernel: SELinux: policy capability always_check_network=0 May 16 03:43:12.009854 kernel: SELinux: policy capability cgroup_seclabel=1 May 16 03:43:12.009868 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 16 03:43:12.009879 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 16 03:43:12.009890 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 16 03:43:12.009901 kernel: audit: type=1403 audit(1747366990.906:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 16 03:43:12.009914 systemd[1]: Successfully loaded SELinux policy in 80.095ms. May 16 03:43:12.009937 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 26.796ms. May 16 03:43:12.009951 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 16 03:43:12.009963 systemd[1]: Detected virtualization kvm. May 16 03:43:12.009977 systemd[1]: Detected architecture x86-64. May 16 03:43:12.009989 systemd[1]: Detected first boot. May 16 03:43:12.010001 systemd[1]: Hostname set to . May 16 03:43:12.010013 systemd[1]: Initializing machine ID from VM UUID. May 16 03:43:12.010024 zram_generator::config[992]: No configuration found. 
May 16 03:43:12.010036 kernel: Guest personality initialized and is inactive May 16 03:43:12.010051 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 16 03:43:12.010062 kernel: Initialized host personality May 16 03:43:12.010073 kernel: NET: Registered PF_VSOCK protocol family May 16 03:43:12.010086 systemd[1]: Populated /etc with preset unit settings. May 16 03:43:12.010099 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 16 03:43:12.010112 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 16 03:43:12.010124 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 16 03:43:12.010136 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 16 03:43:12.010148 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 16 03:43:12.010160 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 16 03:43:12.010172 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 16 03:43:12.010186 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 16 03:43:12.010199 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 16 03:43:12.010211 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 16 03:43:12.010223 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 16 03:43:12.010235 systemd[1]: Created slice user.slice - User and Session Slice. May 16 03:43:12.010247 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 03:43:12.010259 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 03:43:12.010272 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 16 03:43:12.010283 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 16 03:43:12.010298 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 16 03:43:12.010310 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 03:43:12.012172 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 16 03:43:12.012187 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 03:43:12.012200 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 16 03:43:12.012214 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 16 03:43:12.012231 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 16 03:43:12.012244 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 16 03:43:12.012258 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 03:43:12.012271 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 03:43:12.012284 systemd[1]: Reached target slices.target - Slice Units. May 16 03:43:12.012297 systemd[1]: Reached target swap.target - Swaps. May 16 03:43:12.012310 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 16 03:43:12.012341 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
May 16 03:43:12.012355 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 16 03:43:12.012371 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 03:43:12.012384 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 03:43:12.012397 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 16 03:43:12.012410 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 16 03:43:12.012423 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 16 03:43:12.012436 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 16 03:43:12.012448 systemd[1]: Mounting media.mount - External Media Directory... May 16 03:43:12.012465 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 03:43:12.012478 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 16 03:43:12.012493 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 16 03:43:12.012505 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 16 03:43:12.012519 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 16 03:43:12.012532 systemd[1]: Reached target machines.target - Containers. May 16 03:43:12.012544 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 16 03:43:12.012557 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 03:43:12.012570 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 03:43:12.012583 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 16 03:43:12.012596 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 03:43:12.012611 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 03:43:12.012624 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 03:43:12.012637 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 16 03:43:12.012651 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 03:43:12.012664 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 16 03:43:12.012677 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 16 03:43:12.012693 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 16 03:43:12.012706 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 16 03:43:12.012721 systemd[1]: Stopped systemd-fsck-usr.service. May 16 03:43:12.012735 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 03:43:12.012748 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 03:43:12.012761 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 03:43:12.012773 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
May 16 03:43:12.012786 kernel: loop: module loaded May 16 03:43:12.012798 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 16 03:43:12.012812 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 16 03:43:12.012825 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 03:43:12.012841 systemd[1]: verity-setup.service: Deactivated successfully. May 16 03:43:12.012854 systemd[1]: Stopped verity-setup.service. May 16 03:43:12.012867 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 03:43:12.012883 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 16 03:43:12.012897 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 16 03:43:12.012909 systemd[1]: Mounted media.mount - External Media Directory. May 16 03:43:12.012922 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 16 03:43:12.012935 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 16 03:43:12.012948 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 16 03:43:12.012961 kernel: fuse: init (API version 7.39) May 16 03:43:12.012976 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 03:43:12.013008 systemd-journald[1086]: Collecting audit messages is disabled. May 16 03:43:12.013043 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 16 03:43:12.013057 systemd-journald[1086]: Journal started May 16 03:43:12.013083 systemd-journald[1086]: Runtime Journal (/run/log/journal/2e75342014d843fb87d41623e9e51199) is 8M, max 78.2M, 70.2M free. May 16 03:43:11.674944 systemd[1]: Queued start job for default target multi-user.target. May 16 03:43:11.681415 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 16 03:43:11.681856 systemd[1]: systemd-journald.service: Deactivated successfully. May 16 03:43:12.020688 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 16 03:43:12.020721 systemd[1]: Started systemd-journald.service - Journal Service. May 16 03:43:12.019290 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 03:43:12.019521 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 03:43:12.020287 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 03:43:12.021878 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 03:43:12.022819 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 16 03:43:12.024126 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 16 03:43:12.024275 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 16 03:43:12.025013 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 03:43:12.025158 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 03:43:12.026438 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 03:43:12.029042 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 16 03:43:12.029893 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 16 03:43:12.036174 systemd[1]: Reached target network-pre.target - Preparation for Network. 
May 16 03:43:12.049915 kernel: ACPI: bus type drm_connector registered May 16 03:43:12.048244 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 16 03:43:12.050428 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 16 03:43:12.050963 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 16 03:43:12.051002 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 03:43:12.056561 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 16 03:43:12.059581 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 16 03:43:12.065255 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 16 03:43:12.066089 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 03:43:12.072426 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 16 03:43:12.074899 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 16 03:43:12.075669 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 03:43:12.077833 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 16 03:43:12.078767 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 16 03:43:12.079868 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 03:43:12.090955 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 16 03:43:12.099356 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 16 03:43:12.102762 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 03:43:12.103465 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 03:43:12.104721 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 16 03:43:12.112442 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 16 03:43:12.112958 systemd-journald[1086]: Time spent on flushing to /var/log/journal/2e75342014d843fb87d41623e9e51199 is 100.448ms for 960 entries. May 16 03:43:12.112958 systemd-journald[1086]: System Journal (/var/log/journal/2e75342014d843fb87d41623e9e51199) is 8M, max 584.8M, 576.8M free. May 16 03:43:12.248566 systemd-journald[1086]: Received client request to flush runtime journal. May 16 03:43:12.248624 kernel: loop0: detected capacity change from 0 to 8 May 16 03:43:12.248679 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 16 03:43:12.248700 kernel: loop1: detected capacity change from 0 to 109808 May 16 03:43:12.116959 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 16 03:43:12.119560 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 16 03:43:12.120917 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 16 03:43:12.128445 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 16 03:43:12.135439 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... 
May 16 03:43:12.178509 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 03:43:12.180257 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 16 03:43:12.195801 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 03:43:12.210273 udevadm[1145]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 16 03:43:12.239882 systemd-tmpfiles[1131]: ACLs are not supported, ignoring. May 16 03:43:12.239897 systemd-tmpfiles[1131]: ACLs are not supported, ignoring. May 16 03:43:12.245474 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 03:43:12.247516 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 16 03:43:12.252539 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 16 03:43:12.262988 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 16 03:43:12.292405 kernel: loop2: detected capacity change from 0 to 229808 May 16 03:43:12.341966 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 16 03:43:12.346210 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 16 03:43:12.352435 kernel: loop3: detected capacity change from 0 to 151640 May 16 03:43:12.375332 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. May 16 03:43:12.375364 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. May 16 03:43:12.379257 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 03:43:12.422354 kernel: loop4: detected capacity change from 0 to 8 May 16 03:43:12.425425 kernel: loop5: detected capacity change from 0 to 109808 May 16 03:43:12.469352 kernel: loop6: detected capacity change from 0 to 229808 May 16 03:43:12.532392 kernel: loop7: detected capacity change from 0 to 151640 May 16 03:43:12.584618 (sd-merge)[1162]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. May 16 03:43:12.585082 (sd-merge)[1162]: Merged extensions into '/usr'. May 16 03:43:12.590452 systemd[1]: Reload requested from client PID 1130 ('systemd-sysext') (unit systemd-sysext.service)... May 16 03:43:12.590469 systemd[1]: Reloading... May 16 03:43:12.678600 zram_generator::config[1187]: No configuration found. May 16 03:43:12.884530 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 03:43:12.967261 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 16 03:43:12.967590 systemd[1]: Reloading finished in 376 ms. May 16 03:43:12.989837 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 16 03:43:13.004881 systemd[1]: Starting ensure-sysext.service... May 16 03:43:13.008792 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 03:43:13.044937 systemd[1]: Reload requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)... May 16 03:43:13.044956 systemd[1]: Reloading... May 16 03:43:13.066009 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
May 16 03:43:13.066256 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 16 03:43:13.067056 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 16 03:43:13.068071 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. May 16 03:43:13.068240 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. May 16 03:43:13.078188 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. May 16 03:43:13.078391 systemd-tmpfiles[1246]: Skipping /boot May 16 03:43:13.088913 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. May 16 03:43:13.089051 systemd-tmpfiles[1246]: Skipping /boot May 16 03:43:13.128809 zram_generator::config[1274]: No configuration found. May 16 03:43:13.207453 ldconfig[1125]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 16 03:43:13.294350 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 03:43:13.376214 systemd[1]: Reloading finished in 330 ms. May 16 03:43:13.388823 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 16 03:43:13.389796 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 16 03:43:13.390660 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 03:43:13.406440 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 03:43:13.409636 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 16 03:43:13.412607 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 16 03:43:13.420249 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 03:43:13.426894 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 03:43:13.437173 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 16 03:43:13.460065 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 03:43:13.462606 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 03:43:13.465935 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 03:43:13.471184 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 03:43:13.478390 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 03:43:13.479052 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 03:43:13.479432 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 03:43:13.479577 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 16 03:43:13.481929 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 16 03:43:13.488044 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 03:43:13.488784 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 03:43:13.490822 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 03:43:13.490992 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 03:43:13.500850 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 03:43:13.501061 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 03:43:13.502248 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 03:43:13.504569 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 03:43:13.505273 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 03:43:13.507429 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 03:43:13.508729 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 16 03:43:13.512273 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 16 03:43:13.512855 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 03:43:13.515379 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 16 03:43:13.516503 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 03:43:13.516676 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 03:43:13.531788 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 03:43:13.532059 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 03:43:13.535969 systemd-udevd[1339]: Using default interface naming scheme 'v255'. May 16 03:43:13.537628 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 03:43:13.543524 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 03:43:13.544583 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 03:43:13.544630 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 03:43:13.544727 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 03:43:13.546510 systemd[1]: Finished ensure-sysext.service. May 16 03:43:13.547326 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 03:43:13.547946 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
May 16 03:43:13.548834 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 03:43:13.548991 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 03:43:13.559826 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 16 03:43:13.563918 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 03:43:13.568654 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 16 03:43:13.570392 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 03:43:13.574446 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 16 03:43:13.578005 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 03:43:13.579374 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 03:43:13.589774 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 03:43:13.590004 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 03:43:13.590934 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 03:43:13.601748 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 16 03:43:13.607623 augenrules[1387]: No rules
May 16 03:43:13.609086 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 03:43:13.609938 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 03:43:13.616430 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 03:43:13.621128 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 16 03:43:13.721537 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 16 03:43:13.722211 systemd[1]: Reached target time-set.target - System Time Set.
May 16 03:43:13.746239 systemd-networkd[1398]: lo: Link UP
May 16 03:43:13.746546 systemd-networkd[1398]: lo: Gained carrier
May 16 03:43:13.748407 systemd-networkd[1398]: Enumeration completed
May 16 03:43:13.748512 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 16 03:43:13.752178 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 16 03:43:13.753647 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 16 03:43:13.756398 systemd-resolved[1338]: Positive Trust Anchors:
May 16 03:43:13.756417 systemd-resolved[1338]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 03:43:13.756462 systemd-resolved[1338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 03:43:13.765064 systemd-resolved[1338]: Using system hostname 'ci-4284-0-0-n-184e873f92.novalocal'.
May 16 03:43:13.771637 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 03:43:13.772466 systemd[1]: Reached target network.target - Network.
May 16 03:43:13.773418 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 03:43:13.783834 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 16 03:43:13.796446 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 16 03:43:13.805348 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1395)
May 16 03:43:13.878404 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 16 03:43:13.881363 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 16 03:43:13.884088 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 03:43:13.884171 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 03:43:13.885263 systemd-networkd[1398]: eth0: Link UP
May 16 03:43:13.885379 systemd-networkd[1398]: eth0: Gained carrier
May 16 03:43:13.885451 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 03:43:13.892348 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 16 03:43:13.897590 systemd-networkd[1398]: eth0: DHCPv4 address 172.24.4.212/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 16 03:43:13.898277 systemd-timesyncd[1373]: Network configuration changed, trying to establish connection.
May 16 03:43:13.898523 kernel: ACPI: button: Power Button [PWRF]
May 16 03:43:13.911817 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 16 03:43:13.915434 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 16 03:43:13.943716 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 03:43:13.951997 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 16 03:43:13.969368 kernel: mousedev: PS/2 mouse device common for all mice May 16 03:43:13.981352 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 May 16 03:43:13.983409 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console May 16 03:43:13.987116 kernel: Console: switching to colour dummy device 80x25 May 16 03:43:13.989245 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 16 03:43:13.989293 kernel: [drm] features: -context_init May 16 03:43:13.991895 kernel: [drm] number of scanouts: 1 May 16 03:43:13.991934 kernel: [drm] number of cap sets: 0 May 16 03:43:13.995050 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 03:43:13.996345 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 May 16 03:43:13.996402 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 03:43:13.998097 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 16 03:43:14.006344 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device May 16 03:43:14.006428 kernel: Console: switching to colour frame buffer device 160x50 May 16 03:43:14.000297 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 03:43:14.016350 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 16 03:43:14.024905 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 03:43:14.025107 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 03:43:14.026363 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 03:43:14.029210 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 16 03:43:14.049452 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 16 03:43:14.124593 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 03:43:14.180009 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 16 03:43:14.181969 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 03:43:14.185691 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 16 03:43:14.192795 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 03:43:14.195109 systemd[1]: Reached target sysinit.target - System Initialization. May 16 03:43:14.195606 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 16 03:43:14.195844 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 16 03:43:14.196773 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 16 03:43:14.197937 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 16 03:43:14.198130 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 16 03:43:14.198274 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 16 03:43:14.198771 systemd[1]: Reached target paths.target - Path Units. May 16 03:43:14.198925 systemd[1]: Reached target timers.target - Timer Units. May 16 03:43:14.202289 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
May 16 03:43:14.208932 systemd[1]: Starting docker.socket - Docker Socket for the API... May 16 03:43:14.216369 lvm[1448]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 03:43:14.218407 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 16 03:43:14.220596 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 16 03:43:14.220807 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 16 03:43:14.236780 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 16 03:43:14.239274 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 16 03:43:14.246731 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 16 03:43:14.248786 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 16 03:43:14.251108 systemd[1]: Reached target sockets.target - Socket Units. May 16 03:43:14.253187 systemd[1]: Reached target basic.target - Basic System. May 16 03:43:14.254733 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 16 03:43:14.254763 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 16 03:43:14.258398 systemd[1]: Starting containerd.service - containerd container runtime... May 16 03:43:14.263396 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 16 03:43:14.270898 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 16 03:43:14.286422 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 16 03:43:14.291527 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 16 03:43:14.292195 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 16 03:43:14.297358 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 16 03:43:14.302021 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 16 03:43:14.309477 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 16 03:43:14.320017 jq[1456]: false May 16 03:43:14.321267 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 16 03:43:14.324797 dbus-daemon[1455]: [system] SELinux support is enabled May 16 03:43:14.333049 systemd[1]: Starting systemd-logind.service - User Login Management... May 16 03:43:14.336962 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
May 16 03:43:14.337866 extend-filesystems[1458]: Found loop4 May 16 03:43:14.339490 extend-filesystems[1458]: Found loop5 May 16 03:43:14.339490 extend-filesystems[1458]: Found loop6 May 16 03:43:14.339490 extend-filesystems[1458]: Found loop7 May 16 03:43:14.339490 extend-filesystems[1458]: Found vda May 16 03:43:14.339490 extend-filesystems[1458]: Found vda1 May 16 03:43:14.339490 extend-filesystems[1458]: Found vda2 May 16 03:43:14.339490 extend-filesystems[1458]: Found vda3 May 16 03:43:14.339490 extend-filesystems[1458]: Found usr May 16 03:43:14.339490 extend-filesystems[1458]: Found vda4 May 16 03:43:14.339490 extend-filesystems[1458]: Found vda6 May 16 03:43:14.339490 extend-filesystems[1458]: Found vda7 May 16 03:43:14.339490 extend-filesystems[1458]: Found vda9 May 16 03:43:14.339490 extend-filesystems[1458]: Checking size of /dev/vda9 May 16 03:43:14.340867 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 16 03:43:14.342468 systemd[1]: Starting update-engine.service - Update Engine... May 16 03:43:14.350408 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 16 03:43:14.364781 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 16 03:43:14.377074 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 16 03:43:14.377287 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 16 03:43:14.378679 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 16 03:43:14.378846 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 16 03:43:14.399373 extend-filesystems[1458]: Resized partition /dev/vda9 May 16 03:43:14.409031 extend-filesystems[1481]: resize2fs 1.47.2 (1-Jan-2025) May 16 03:43:14.409849 update_engine[1467]: I20250516 03:43:14.407449 1467 main.cc:92] Flatcar Update Engine starting May 16 03:43:14.418897 update_engine[1467]: I20250516 03:43:14.417366 1467 update_check_scheduler.cc:74] Next update check in 5m20s May 16 03:43:14.423724 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks May 16 03:43:14.425149 systemd[1]: motdgen.service: Deactivated successfully. May 16 03:43:14.425803 jq[1468]: true May 16 03:43:14.426469 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 16 03:43:14.430332 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1403) May 16 03:43:14.433338 kernel: EXT4-fs (vda9): resized filesystem to 2014203 May 16 03:43:14.440496 systemd[1]: Started update-engine.service - Update Engine. May 16 03:43:14.446036 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 16 03:43:14.452671 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 16 03:43:14.452702 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 16 03:43:14.473569 jq[1489]: true May 16 03:43:14.453253 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
May 16 03:43:14.453269 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 16 03:43:14.461000 (ntainerd)[1487]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 16 03:43:14.465479 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 16 03:43:14.504080 extend-filesystems[1481]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 16 03:43:14.504080 extend-filesystems[1481]: old_desc_blocks = 1, new_desc_blocks = 1 May 16 03:43:14.504080 extend-filesystems[1481]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. May 16 03:43:14.496573 systemd[1]: extend-filesystems.service: Deactivated successfully. May 16 03:43:14.518488 extend-filesystems[1458]: Resized filesystem in /dev/vda9 May 16 03:43:14.496783 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 16 03:43:14.525055 tar[1476]: linux-amd64/LICENSE May 16 03:43:14.525055 tar[1476]: linux-amd64/helm May 16 03:43:14.595511 systemd-logind[1465]: New seat seat0. May 16 03:43:14.615438 systemd-logind[1465]: Watching system buttons on /dev/input/event2 (Power Button) May 16 03:43:14.615480 systemd-logind[1465]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 16 03:43:14.615951 systemd[1]: Started systemd-logind.service - User Login Management. May 16 03:43:14.694575 bash[1514]: Updated "/home/core/.ssh/authorized_keys" May 16 03:43:14.696001 locksmithd[1495]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 16 03:43:14.696140 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 16 03:43:14.716712 systemd[1]: Starting sshkeys.service... May 16 03:43:14.744296 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 16 03:43:14.752881 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
May 16 03:43:14.968277 containerd[1487]: time="2025-05-16T03:43:14Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 16 03:43:14.970338 containerd[1487]: time="2025-05-16T03:43:14.970204113Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 16 03:43:15.001841 containerd[1487]: time="2025-05-16T03:43:15.000333771Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t=6.021424ms May 16 03:43:15.001841 containerd[1487]: time="2025-05-16T03:43:15.000372794Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 16 03:43:15.001841 containerd[1487]: time="2025-05-16T03:43:15.000395466Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 16 03:43:15.001841 containerd[1487]: time="2025-05-16T03:43:15.000585262Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 16 03:43:15.001841 containerd[1487]: time="2025-05-16T03:43:15.000606502Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 16 03:43:15.001841 containerd[1487]: time="2025-05-16T03:43:15.000637170Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 16 03:43:15.001841 containerd[1487]: time="2025-05-16T03:43:15.000714014Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 16 03:43:15.001841 containerd[1487]: time="2025-05-16T03:43:15.000730284Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 16 03:43:15.001841 containerd[1487]: time="2025-05-16T03:43:15.000981576Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 16 03:43:15.001841 containerd[1487]: time="2025-05-16T03:43:15.001000832Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 16 03:43:15.001841 containerd[1487]: time="2025-05-16T03:43:15.001013976Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 16 03:43:15.001841 containerd[1487]: time="2025-05-16T03:43:15.001024877Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 16 03:43:15.002565 containerd[1487]: time="2025-05-16T03:43:15.001103023Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 16 03:43:15.002565 containerd[1487]: time="2025-05-16T03:43:15.001305273Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 16 03:43:15.002565 containerd[1487]: time="2025-05-16T03:43:15.001357250Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 16 03:43:15.002565 containerd[1487]: time="2025-05-16T03:43:15.001371056Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 16 03:43:15.002565 containerd[1487]: time="2025-05-16T03:43:15.001416411Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 16 03:43:15.002565 containerd[1487]: time="2025-05-16T03:43:15.001647605Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 16 03:43:15.002565 containerd[1487]: time="2025-05-16T03:43:15.001707708Z" level=info msg="metadata content store policy set" policy=shared May 16 03:43:15.019405 containerd[1487]: time="2025-05-16T03:43:15.017667380Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 16 03:43:15.019405 containerd[1487]: time="2025-05-16T03:43:15.017727643Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 16 03:43:15.019405 containerd[1487]: time="2025-05-16T03:43:15.017744384Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 16 03:43:15.019405 containerd[1487]: time="2025-05-16T03:43:15.017759132Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 16 03:43:15.019405 containerd[1487]: time="2025-05-16T03:43:15.017772267Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 16 03:43:15.019405 containerd[1487]: time="2025-05-16T03:43:15.017784029Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 16 03:43:15.019405 containerd[1487]: time="2025-05-16T03:43:15.017797053Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 16 03:43:15.019405 containerd[1487]: time="2025-05-16T03:43:15.017813143Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 16 03:43:15.019405 containerd[1487]: time="2025-05-16T03:43:15.017825597Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 16 03:43:15.019405 containerd[1487]: time="2025-05-16T03:43:15.017839092Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 16 03:43:15.019405 containerd[1487]: time="2025-05-16T03:43:15.017851034Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 16 03:43:15.019405 containerd[1487]: time="2025-05-16T03:43:15.017864199Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 16 03:43:15.019405 containerd[1487]: time="2025-05-16T03:43:15.017973464Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 16 03:43:15.019405 containerd[1487]: time="2025-05-16T03:43:15.018003621Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 16 03:43:15.019774 containerd[1487]: time="2025-05-16T03:43:15.018018989Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 16 03:43:15.019774 containerd[1487]: time="2025-05-16T03:43:15.018032515Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 16 03:43:15.019774 containerd[1487]: time="2025-05-16T03:43:15.018048445Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 16 03:43:15.019774 containerd[1487]: time="2025-05-16T03:43:15.018061569Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 16 03:43:15.019774 containerd[1487]: time="2025-05-16T03:43:15.018076738Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 16 03:43:15.019774 containerd[1487]: time="2025-05-16T03:43:15.018092968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 16 03:43:15.019774 containerd[1487]: time="2025-05-16T03:43:15.018109720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 16 03:43:15.019774 containerd[1487]: time="2025-05-16T03:43:15.018122624Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 16 03:43:15.019774 containerd[1487]: time="2025-05-16T03:43:15.018134105Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 16 03:43:15.019774 containerd[1487]: time="2025-05-16T03:43:15.018192074Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 16 03:43:15.019774 containerd[1487]: time="2025-05-16T03:43:15.018208956Z" level=info msg="Start snapshots syncer" May 16 03:43:15.019774 containerd[1487]: time="2025-05-16T03:43:15.018237229Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 16 03:43:15.020037 containerd[1487]: time="2025-05-16T03:43:15.018506694Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 16 03:43:15.020037 containerd[1487]: time="2025-05-16T03:43:15.018563390Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 16 03:43:15.020537 containerd[1487]: time="2025-05-16T03:43:15.018632931Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 16 03:43:15.020537 containerd[1487]: time="2025-05-16T03:43:15.018729492Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 16 03:43:15.020537 containerd[1487]: time="2025-05-16T03:43:15.018753036Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 16 03:43:15.020537 containerd[1487]: time="2025-05-16T03:43:15.018765790Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 16 03:43:15.020537 containerd[1487]: time="2025-05-16T03:43:15.018780498Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 16 03:43:15.020537 containerd[1487]: time="2025-05-16T03:43:15.018795035Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 16 03:43:15.020537 containerd[1487]: time="2025-05-16T03:43:15.018814301Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 16 03:43:15.020537 containerd[1487]: time="2025-05-16T03:43:15.018828538Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 16 03:43:15.020537 containerd[1487]: time="2025-05-16T03:43:15.018852092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 16 03:43:15.020537 containerd[1487]: 
time="2025-05-16T03:43:15.018866870Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 16 03:43:15.020537 containerd[1487]: time="2025-05-16T03:43:15.018879333Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 16 03:43:15.020537 containerd[1487]: time="2025-05-16T03:43:15.018927553Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 16 03:43:15.020537 containerd[1487]: time="2025-05-16T03:43:15.018947090Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 16 03:43:15.020537 containerd[1487]: time="2025-05-16T03:43:15.018958181Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 16 03:43:15.020834 containerd[1487]: time="2025-05-16T03:43:15.018970133Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 16 03:43:15.020834 containerd[1487]: time="2025-05-16T03:43:15.018980172Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 16 03:43:15.020834 containerd[1487]: time="2025-05-16T03:43:15.018993096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 16 03:43:15.020834 containerd[1487]: time="2025-05-16T03:43:15.019006401Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 16 03:43:15.020834 containerd[1487]: time="2025-05-16T03:43:15.019024756Z" level=info msg="runtime interface created" May 16 03:43:15.020834 containerd[1487]: time="2025-05-16T03:43:15.019031318Z" level=info msg="created NRI interface" May 16 03:43:15.020834 containerd[1487]: time="2025-05-16T03:43:15.019040916Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 16 03:43:15.020834 containerd[1487]: time="2025-05-16T03:43:15.019053129Z" level=info msg="Connect containerd service" May 16 03:43:15.020834 containerd[1487]: time="2025-05-16T03:43:15.019079358Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 16 03:43:15.033221 containerd[1487]: time="2025-05-16T03:43:15.033177930Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 03:43:15.064715 sshd_keygen[1493]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 16 03:43:15.089696 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 16 03:43:15.093779 systemd[1]: Starting issuegen.service - Generate /run/issue... May 16 03:43:15.099719 systemd[1]: Started sshd@0-172.24.4.212:22-172.24.4.1:40574.service - OpenSSH per-connection server daemon (172.24.4.1:40574). May 16 03:43:15.123673 systemd[1]: issuegen.service: Deactivated successfully. May 16 03:43:15.124653 systemd[1]: Finished issuegen.service - Generate /run/issue. May 16 03:43:15.135743 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 16 03:43:15.175560 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
May 16 03:43:15.185127 systemd[1]: Started getty@tty1.service - Getty on tty1. May 16 03:43:15.193709 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 16 03:43:15.194566 systemd[1]: Reached target getty.target - Login Prompts. May 16 03:43:15.229378 tar[1476]: linux-amd64/README.md May 16 03:43:15.252754 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 16 03:43:15.301981 containerd[1487]: time="2025-05-16T03:43:15.300987236Z" level=info msg="Start subscribing containerd event" May 16 03:43:15.301981 containerd[1487]: time="2025-05-16T03:43:15.301097844Z" level=info msg="Start recovering state" May 16 03:43:15.301981 containerd[1487]: time="2025-05-16T03:43:15.301170249Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 16 03:43:15.301981 containerd[1487]: time="2025-05-16T03:43:15.301237115Z" level=info msg=serving... address=/run/containerd/containerd.sock May 16 03:43:15.301981 containerd[1487]: time="2025-05-16T03:43:15.301280095Z" level=info msg="Start event monitor" May 16 03:43:15.301981 containerd[1487]: time="2025-05-16T03:43:15.301310452Z" level=info msg="Start cni network conf syncer for default" May 16 03:43:15.301981 containerd[1487]: time="2025-05-16T03:43:15.301359204Z" level=info msg="Start streaming server" May 16 03:43:15.301981 containerd[1487]: time="2025-05-16T03:43:15.301387577Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 16 03:43:15.301981 containerd[1487]: time="2025-05-16T03:43:15.301402465Z" level=info msg="runtime interface starting up..." May 16 03:43:15.301981 containerd[1487]: time="2025-05-16T03:43:15.301413496Z" level=info msg="starting plugins..." May 16 03:43:15.301981 containerd[1487]: time="2025-05-16T03:43:15.301456897Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 16 03:43:15.301981 containerd[1487]: time="2025-05-16T03:43:15.301720742Z" level=info msg="containerd successfully booted in 0.333836s" May 16 03:43:15.302745 systemd[1]: Started containerd.service - containerd container runtime. May 16 03:43:15.798587 systemd-networkd[1398]: eth0: Gained IPv6LL May 16 03:43:15.800273 systemd-timesyncd[1373]: Network configuration changed, trying to establish connection. May 16 03:43:15.802223 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 16 03:43:15.809130 systemd[1]: Reached target network-online.target - Network is Online. May 16 03:43:15.820894 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 03:43:15.830958 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 16 03:43:15.886139 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 16 03:43:16.116892 sshd[1543]: Accepted publickey for core from 172.24.4.1 port 40574 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:43:16.119800 sshd-session[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:43:16.153247 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 16 03:43:16.160161 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 16 03:43:16.168491 systemd-logind[1465]: New session 1 of user core. May 16 03:43:16.198905 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 16 03:43:16.206637 systemd[1]: Starting user@500.service - User Manager for UID 500... 
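[Editor's note] The earlier containerd error "no network config found in /etc/cni/net.d: cni plugin not initialized" only means that no CNI network configuration has been installed yet; it clears once a pod-network add-on (not an administrator by hand, normally) drops a conflist into that directory. A minimal example of what such a file could look like is sketched below; the file name, network name, bridge, and subnet are placeholders, not values taken from this host:

    /etc/cni/net.d/10-example.conflist
    {
      "cniVersion": "1.0.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.22.0.0/16",
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }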
May 16 03:43:16.220886 (systemd)[1580]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 16 03:43:16.226256 systemd-logind[1465]: New session c1 of user core. May 16 03:43:16.390836 systemd[1580]: Queued start job for default target default.target. May 16 03:43:16.401181 systemd[1580]: Created slice app.slice - User Application Slice. May 16 03:43:16.401206 systemd[1580]: Reached target paths.target - Paths. May 16 03:43:16.401475 systemd[1580]: Reached target timers.target - Timers. May 16 03:43:16.405400 systemd[1580]: Starting dbus.socket - D-Bus User Message Bus Socket... May 16 03:43:16.414478 systemd[1580]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 16 03:43:16.415453 systemd[1580]: Reached target sockets.target - Sockets. May 16 03:43:16.415656 systemd[1580]: Reached target basic.target - Basic System. May 16 03:43:16.415807 systemd[1]: Started user@500.service - User Manager for UID 500. May 16 03:43:16.415913 systemd[1580]: Reached target default.target - Main User Target. May 16 03:43:16.416012 systemd[1580]: Startup finished in 178ms. May 16 03:43:16.423560 systemd[1]: Started session-1.scope - Session 1 of User core. May 16 03:43:16.824284 systemd[1]: Started sshd@1-172.24.4.212:22-172.24.4.1:40582.service - OpenSSH per-connection server daemon (172.24.4.1:40582). May 16 03:43:18.228812 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 03:43:18.241210 (kubelet)[1600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 03:43:18.836818 sshd[1591]: Accepted publickey for core from 172.24.4.1 port 40582 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:43:18.838196 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:43:18.849739 systemd-logind[1465]: New session 2 of user core. May 16 03:43:18.860769 systemd[1]: Started session-2.scope - Session 2 of User core. May 16 03:43:19.364926 sshd[1605]: Connection closed by 172.24.4.1 port 40582 May 16 03:43:19.364702 sshd-session[1591]: pam_unix(sshd:session): session closed for user core May 16 03:43:19.379444 systemd[1]: sshd@1-172.24.4.212:22-172.24.4.1:40582.service: Deactivated successfully. May 16 03:43:19.383260 systemd[1]: session-2.scope: Deactivated successfully. May 16 03:43:19.386962 systemd-logind[1465]: Session 2 logged out. Waiting for processes to exit. May 16 03:43:19.389357 systemd[1]: Started sshd@2-172.24.4.212:22-172.24.4.1:40586.service - OpenSSH per-connection server daemon (172.24.4.1:40586). May 16 03:43:19.396837 systemd-logind[1465]: Removed session 2. May 16 03:43:19.553021 kubelet[1600]: E0516 03:43:19.552923 1600 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 03:43:19.556884 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 03:43:19.557215 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 03:43:19.557957 systemd[1]: kubelet.service: Consumed 2.230s CPU time, 270.7M memory peak. 
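[Editor's note] The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is written during kubeadm init/join, so the unit keeps restarting and failing with the same error until that happens (the restart loop is visible further down in this log). A minimal KubeletConfiguration of the kind that would satisfy this check is sketched below; cgroupDriver: systemd and staticPodPath: /etc/kubernetes/manifests match values the kubelet later reports in this log, while the DNS values are illustrative placeholders:

    # Sketch only; normally generated by kubeadm, not written by hand.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd              # consistent with SystemdCgroup=true in the CRI runtime
    staticPodPath: /etc/kubernetes/manifests
    clusterDomain: cluster.local       # placeholder
    clusterDNS:
      - 10.96.0.10                     # placeholder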
May 16 03:43:20.276547 login[1552]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 16 03:43:20.281178 login[1554]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 16 03:43:20.288977 systemd-logind[1465]: New session 3 of user core. May 16 03:43:20.299823 systemd[1]: Started session-3.scope - Session 3 of User core. May 16 03:43:20.307365 systemd-logind[1465]: New session 4 of user core. May 16 03:43:20.314210 systemd[1]: Started session-4.scope - Session 4 of User core. May 16 03:43:20.789741 sshd[1611]: Accepted publickey for core from 172.24.4.1 port 40586 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:43:20.792421 sshd-session[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:43:20.802783 systemd-logind[1465]: New session 5 of user core. May 16 03:43:20.816079 systemd[1]: Started session-5.scope - Session 5 of User core. May 16 03:43:21.378685 coreos-metadata[1454]: May 16 03:43:21.378 WARN failed to locate config-drive, using the metadata service API instead May 16 03:43:21.461152 coreos-metadata[1454]: May 16 03:43:21.461 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 May 16 03:43:21.466363 sshd[1641]: Connection closed by 172.24.4.1 port 40586 May 16 03:43:21.465233 sshd-session[1611]: pam_unix(sshd:session): session closed for user core May 16 03:43:21.472217 systemd-logind[1465]: Session 5 logged out. Waiting for processes to exit. May 16 03:43:21.473856 systemd[1]: sshd@2-172.24.4.212:22-172.24.4.1:40586.service: Deactivated successfully. May 16 03:43:21.477536 systemd[1]: session-5.scope: Deactivated successfully. May 16 03:43:21.479998 systemd-logind[1465]: Removed session 5. May 16 03:43:21.653728 coreos-metadata[1454]: May 16 03:43:21.653 INFO Fetch successful May 16 03:43:21.653728 coreos-metadata[1454]: May 16 03:43:21.653 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 16 03:43:21.669689 coreos-metadata[1454]: May 16 03:43:21.669 INFO Fetch successful May 16 03:43:21.669832 coreos-metadata[1454]: May 16 03:43:21.669 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 May 16 03:43:21.683948 coreos-metadata[1454]: May 16 03:43:21.683 INFO Fetch successful May 16 03:43:21.683948 coreos-metadata[1454]: May 16 03:43:21.683 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 May 16 03:43:21.697396 coreos-metadata[1454]: May 16 03:43:21.697 INFO Fetch successful May 16 03:43:21.697551 coreos-metadata[1454]: May 16 03:43:21.697 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 May 16 03:43:21.713148 coreos-metadata[1454]: May 16 03:43:21.713 INFO Fetch successful May 16 03:43:21.713148 coreos-metadata[1454]: May 16 03:43:21.713 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 May 16 03:43:21.726573 coreos-metadata[1454]: May 16 03:43:21.726 INFO Fetch successful May 16 03:43:21.773838 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 16 03:43:21.775197 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
May 16 03:43:21.863839 coreos-metadata[1526]: May 16 03:43:21.863 WARN failed to locate config-drive, using the metadata service API instead May 16 03:43:21.906983 coreos-metadata[1526]: May 16 03:43:21.906 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 May 16 03:43:21.923247 coreos-metadata[1526]: May 16 03:43:21.923 INFO Fetch successful May 16 03:43:21.923247 coreos-metadata[1526]: May 16 03:43:21.923 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 May 16 03:43:21.936550 coreos-metadata[1526]: May 16 03:43:21.936 INFO Fetch successful May 16 03:43:21.943224 unknown[1526]: wrote ssh authorized keys file for user: core May 16 03:43:21.989494 update-ssh-keys[1655]: Updated "/home/core/.ssh/authorized_keys" May 16 03:43:21.990238 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 16 03:43:21.993263 systemd[1]: Finished sshkeys.service. May 16 03:43:21.998626 systemd[1]: Reached target multi-user.target - Multi-User System. May 16 03:43:21.998947 systemd[1]: Startup finished in 1.281s (kernel) + 16.076s (initrd) + 11.169s (userspace) = 28.527s. May 16 03:43:29.725054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 16 03:43:29.728147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 03:43:30.115812 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 03:43:30.135173 (kubelet)[1667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 03:43:30.233564 kubelet[1667]: E0516 03:43:30.233457 1667 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 03:43:30.238508 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 03:43:30.238748 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 03:43:30.239093 systemd[1]: kubelet.service: Consumed 238ms CPU time, 110M memory peak. May 16 03:43:31.490121 systemd[1]: Started sshd@3-172.24.4.212:22-172.24.4.1:34142.service - OpenSSH per-connection server daemon (172.24.4.1:34142). May 16 03:43:32.624451 sshd[1675]: Accepted publickey for core from 172.24.4.1 port 34142 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:43:32.627630 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:43:32.642708 systemd-logind[1465]: New session 6 of user core. May 16 03:43:32.654643 systemd[1]: Started session-6.scope - Session 6 of User core. May 16 03:43:33.266363 sshd[1677]: Connection closed by 172.24.4.1 port 34142 May 16 03:43:33.267263 sshd-session[1675]: pam_unix(sshd:session): session closed for user core May 16 03:43:33.285177 systemd[1]: sshd@3-172.24.4.212:22-172.24.4.1:34142.service: Deactivated successfully. May 16 03:43:33.289191 systemd[1]: session-6.scope: Deactivated successfully. May 16 03:43:33.291170 systemd-logind[1465]: Session 6 logged out. Waiting for processes to exit. May 16 03:43:33.295694 systemd[1]: Started sshd@4-172.24.4.212:22-172.24.4.1:50234.service - OpenSSH per-connection server daemon (172.24.4.1:50234). 
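[Editor's note] The two coreos-metadata runs above walk the OpenStack metadata service at 169.254.169.254: the first gathers instance facts (meta_data.json, hostname, instance-id, instance-type, local and public IPv4), the second fetches the instance's public SSH key, which is then written into /home/core/.ssh/authorized_keys. A rough Python sketch of the same requests, using only the endpoints shown in the log (this is not the agent's actual implementation):

    # Illustrative sketch of the metadata queries seen in this log.
    # Only works from inside an instance, where 169.254.169.254 is reachable.
    import json
    import urllib.request

    BASE = "http://169.254.169.254"

    def fetch(path: str) -> str:
        # Each log line "Fetching http://169.254.169.254/..." corresponds to one such GET.
        with urllib.request.urlopen(f"{BASE}{path}", timeout=5) as resp:
            return resp.read().decode()

    meta = json.loads(fetch("/openstack/2012-08-10/meta_data.json"))
    hostname = fetch("/latest/meta-data/hostname")
    ssh_key = fetch("/latest/meta-data/public-keys/0/openssh-key")

    print(hostname, meta.get("uuid"))
    # The sshkeys agent appends ssh_key to /home/core/.ssh/authorized_keys.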
May 16 03:43:33.299038 systemd-logind[1465]: Removed session 6. May 16 03:43:34.807208 sshd[1682]: Accepted publickey for core from 172.24.4.1 port 50234 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:43:34.810483 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:43:34.825304 systemd-logind[1465]: New session 7 of user core. May 16 03:43:34.835645 systemd[1]: Started session-7.scope - Session 7 of User core. May 16 03:43:35.364386 sshd[1685]: Connection closed by 172.24.4.1 port 50234 May 16 03:43:35.363660 sshd-session[1682]: pam_unix(sshd:session): session closed for user core May 16 03:43:35.387536 systemd[1]: sshd@4-172.24.4.212:22-172.24.4.1:50234.service: Deactivated successfully. May 16 03:43:35.391276 systemd[1]: session-7.scope: Deactivated successfully. May 16 03:43:35.394258 systemd-logind[1465]: Session 7 logged out. Waiting for processes to exit. May 16 03:43:35.397412 systemd[1]: Started sshd@5-172.24.4.212:22-172.24.4.1:50248.service - OpenSSH per-connection server daemon (172.24.4.1:50248). May 16 03:43:35.400185 systemd-logind[1465]: Removed session 7. May 16 03:43:36.705196 sshd[1690]: Accepted publickey for core from 172.24.4.1 port 50248 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:43:36.707867 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:43:36.719807 systemd-logind[1465]: New session 8 of user core. May 16 03:43:36.732639 systemd[1]: Started session-8.scope - Session 8 of User core. May 16 03:43:37.420365 sshd[1693]: Connection closed by 172.24.4.1 port 50248 May 16 03:43:37.419425 sshd-session[1690]: pam_unix(sshd:session): session closed for user core May 16 03:43:37.448609 systemd[1]: sshd@5-172.24.4.212:22-172.24.4.1:50248.service: Deactivated successfully. May 16 03:43:37.452880 systemd[1]: session-8.scope: Deactivated successfully. May 16 03:43:37.456916 systemd-logind[1465]: Session 8 logged out. Waiting for processes to exit. May 16 03:43:37.461276 systemd[1]: Started sshd@6-172.24.4.212:22-172.24.4.1:50250.service - OpenSSH per-connection server daemon (172.24.4.1:50250). May 16 03:43:37.465286 systemd-logind[1465]: Removed session 8. May 16 03:43:39.146564 sshd[1698]: Accepted publickey for core from 172.24.4.1 port 50250 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:43:39.149220 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:43:39.161427 systemd-logind[1465]: New session 9 of user core. May 16 03:43:39.167654 systemd[1]: Started session-9.scope - Session 9 of User core. May 16 03:43:39.649194 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 16 03:43:39.649965 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 03:43:39.671242 sudo[1702]: pam_unix(sudo:session): session closed for user root May 16 03:43:39.951982 sshd[1701]: Connection closed by 172.24.4.1 port 50250 May 16 03:43:39.949192 sshd-session[1698]: pam_unix(sshd:session): session closed for user core May 16 03:43:39.966266 systemd[1]: sshd@6-172.24.4.212:22-172.24.4.1:50250.service: Deactivated successfully. May 16 03:43:39.969527 systemd[1]: session-9.scope: Deactivated successfully. May 16 03:43:39.972757 systemd-logind[1465]: Session 9 logged out. Waiting for processes to exit. 
May 16 03:43:39.976004 systemd[1]: Started sshd@7-172.24.4.212:22-172.24.4.1:50264.service - OpenSSH per-connection server daemon (172.24.4.1:50264). May 16 03:43:39.978875 systemd-logind[1465]: Removed session 9. May 16 03:43:40.475094 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 16 03:43:40.480043 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 03:43:40.997222 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 03:43:41.018216 (kubelet)[1718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 03:43:41.325393 kubelet[1718]: E0516 03:43:41.325211 1718 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 03:43:41.328199 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 03:43:41.328545 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 03:43:41.329278 systemd[1]: kubelet.service: Consumed 532ms CPU time, 108.4M memory peak. May 16 03:43:41.378857 sshd[1707]: Accepted publickey for core from 172.24.4.1 port 50264 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:43:41.380792 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:43:41.387698 systemd-logind[1465]: New session 10 of user core. May 16 03:43:41.398595 systemd[1]: Started session-10.scope - Session 10 of User core. May 16 03:43:41.836588 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 16 03:43:41.837215 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 03:43:41.844028 sudo[1727]: pam_unix(sudo:session): session closed for user root May 16 03:43:41.855038 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 16 03:43:41.855741 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 03:43:41.876437 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 03:43:41.949114 augenrules[1749]: No rules May 16 03:43:41.951441 systemd[1]: audit-rules.service: Deactivated successfully. May 16 03:43:41.951958 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 03:43:41.954519 sudo[1726]: pam_unix(sudo:session): session closed for user root May 16 03:43:42.098429 sshd[1725]: Connection closed by 172.24.4.1 port 50264 May 16 03:43:42.098078 sshd-session[1707]: pam_unix(sshd:session): session closed for user core May 16 03:43:42.116421 systemd[1]: sshd@7-172.24.4.212:22-172.24.4.1:50264.service: Deactivated successfully. May 16 03:43:42.119423 systemd[1]: session-10.scope: Deactivated successfully. May 16 03:43:42.121048 systemd-logind[1465]: Session 10 logged out. Waiting for processes to exit. May 16 03:43:42.124716 systemd[1]: Started sshd@8-172.24.4.212:22-172.24.4.1:50270.service - OpenSSH per-connection server daemon (172.24.4.1:50270). May 16 03:43:42.126956 systemd-logind[1465]: Removed session 10. 
May 16 03:43:43.492052 sshd[1757]: Accepted publickey for core from 172.24.4.1 port 50270 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:43:43.494410 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:43:43.505001 systemd-logind[1465]: New session 11 of user core. May 16 03:43:43.511624 systemd[1]: Started session-11.scope - Session 11 of User core. May 16 03:43:43.955475 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 16 03:43:43.956226 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 03:43:44.829638 systemd[1]: Starting docker.service - Docker Application Container Engine... May 16 03:43:44.839840 (dockerd)[1781]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 16 03:43:45.483189 dockerd[1781]: time="2025-05-16T03:43:45.482571650Z" level=info msg="Starting up" May 16 03:43:45.489259 dockerd[1781]: time="2025-05-16T03:43:45.489108570Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 16 03:43:45.581068 dockerd[1781]: time="2025-05-16T03:43:45.580871111Z" level=info msg="Loading containers: start." May 16 03:43:45.810379 kernel: Initializing XFRM netlink socket May 16 03:43:45.813917 systemd-timesyncd[1373]: Network configuration changed, trying to establish connection. May 16 03:43:46.002720 systemd-networkd[1398]: docker0: Link UP May 16 03:43:46.780830 systemd-resolved[1338]: Clock change detected. Flushing caches. May 16 03:43:46.781007 systemd-timesyncd[1373]: Contacted time server 66.118.229.14:123 (2.flatcar.pool.ntp.org). May 16 03:43:46.781113 systemd-timesyncd[1373]: Initial clock synchronization to Fri 2025-05-16 03:43:46.780569 UTC. May 16 03:43:46.915992 dockerd[1781]: time="2025-05-16T03:43:46.915883229Z" level=info msg="Loading containers: done." May 16 03:43:46.946391 dockerd[1781]: time="2025-05-16T03:43:46.945452837Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 16 03:43:46.946391 dockerd[1781]: time="2025-05-16T03:43:46.945800859Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 16 03:43:46.946391 dockerd[1781]: time="2025-05-16T03:43:46.946043444Z" level=info msg="Daemon has completed initialization" May 16 03:43:47.023071 dockerd[1781]: time="2025-05-16T03:43:47.022964590Z" level=info msg="API listen on /run/docker.sock" May 16 03:43:47.023772 systemd[1]: Started docker.service - Docker Application Container Engine. May 16 03:43:48.549882 containerd[1487]: time="2025-05-16T03:43:48.549775984Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\"" May 16 03:43:49.461280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1171964904.mount: Deactivated successfully. 
May 16 03:43:51.774508 containerd[1487]: time="2025-05-16T03:43:51.774177589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:43:51.777567 containerd[1487]: time="2025-05-16T03:43:51.777318550Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=30075411" May 16 03:43:51.779606 containerd[1487]: time="2025-05-16T03:43:51.779276051Z" level=info msg="ImageCreate event name:\"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:43:51.783852 containerd[1487]: time="2025-05-16T03:43:51.783732178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:43:51.787834 containerd[1487]: time="2025-05-16T03:43:51.787505145Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"30072203\" in 3.237648419s" May 16 03:43:51.787834 containerd[1487]: time="2025-05-16T03:43:51.787618457Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\"" May 16 03:43:51.789007 containerd[1487]: time="2025-05-16T03:43:51.788949624Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\"" May 16 03:43:52.167267 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 16 03:43:52.173714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 03:43:52.409076 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 03:43:52.421674 (kubelet)[2042]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 03:43:52.773398 kubelet[2042]: E0516 03:43:52.770789 2042 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 03:43:52.773881 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 03:43:52.774045 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 03:43:52.774830 systemd[1]: kubelet.service: Consumed 287ms CPU time, 110M memory peak. 
May 16 03:43:54.213980 containerd[1487]: time="2025-05-16T03:43:54.213779221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:43:54.217333 containerd[1487]: time="2025-05-16T03:43:54.216658241Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=26011398" May 16 03:43:54.219216 containerd[1487]: time="2025-05-16T03:43:54.219075204Z" level=info msg="ImageCreate event name:\"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:43:54.224975 containerd[1487]: time="2025-05-16T03:43:54.224858450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:43:54.228076 containerd[1487]: time="2025-05-16T03:43:54.227743230Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"27638910\" in 2.438718877s" May 16 03:43:54.228076 containerd[1487]: time="2025-05-16T03:43:54.227836065Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\"" May 16 03:43:54.229981 containerd[1487]: time="2025-05-16T03:43:54.229410958Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\"" May 16 03:43:56.324790 containerd[1487]: time="2025-05-16T03:43:56.324595927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:43:56.335437 containerd[1487]: time="2025-05-16T03:43:56.334974672Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=20148968" May 16 03:43:56.369741 containerd[1487]: time="2025-05-16T03:43:56.369610542Z" level=info msg="ImageCreate event name:\"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:43:56.388977 containerd[1487]: time="2025-05-16T03:43:56.388777239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:43:56.391627 containerd[1487]: time="2025-05-16T03:43:56.391295662Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"21776498\" in 2.161788745s" May 16 03:43:56.391627 containerd[1487]: time="2025-05-16T03:43:56.391433711Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\"" May 16 03:43:56.393375 
containerd[1487]: time="2025-05-16T03:43:56.392945116Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\"" May 16 03:43:58.704980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3380255148.mount: Deactivated successfully. May 16 03:43:59.682795 containerd[1487]: time="2025-05-16T03:43:59.682650458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:43:59.685617 containerd[1487]: time="2025-05-16T03:43:59.685500342Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=31889083" May 16 03:43:59.687590 containerd[1487]: time="2025-05-16T03:43:59.687479795Z" level=info msg="ImageCreate event name:\"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:43:59.694266 containerd[1487]: time="2025-05-16T03:43:59.694173739Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"31888094\" in 3.301160606s" May 16 03:43:59.694266 containerd[1487]: time="2025-05-16T03:43:59.694249962Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\"" May 16 03:43:59.694585 containerd[1487]: time="2025-05-16T03:43:59.694431413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:43:59.697190 containerd[1487]: time="2025-05-16T03:43:59.696671334Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" May 16 03:44:00.165500 update_engine[1467]: I20250516 03:44:00.164609 1467 update_attempter.cc:509] Updating boot flags... May 16 03:44:00.274464 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2072) May 16 03:44:00.344879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2860897086.mount: Deactivated successfully. 
May 16 03:44:00.377368 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2070) May 16 03:44:01.843907 containerd[1487]: time="2025-05-16T03:44:01.843802197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:44:01.845382 containerd[1487]: time="2025-05-16T03:44:01.845077779Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" May 16 03:44:01.846580 containerd[1487]: time="2025-05-16T03:44:01.846529182Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:44:01.849801 containerd[1487]: time="2025-05-16T03:44:01.849750223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:44:01.851373 containerd[1487]: time="2025-05-16T03:44:01.851002822Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.154235097s" May 16 03:44:01.851373 containerd[1487]: time="2025-05-16T03:44:01.851050241Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" May 16 03:44:01.851702 containerd[1487]: time="2025-05-16T03:44:01.851674752Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 16 03:44:02.398695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2473480707.mount: Deactivated successfully. 
May 16 03:44:02.410380 containerd[1487]: time="2025-05-16T03:44:02.410176121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 03:44:02.412688 containerd[1487]: time="2025-05-16T03:44:02.412104448Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" May 16 03:44:02.416400 containerd[1487]: time="2025-05-16T03:44:02.414402037Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 03:44:02.420252 containerd[1487]: time="2025-05-16T03:44:02.420178851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 03:44:02.423723 containerd[1487]: time="2025-05-16T03:44:02.423639151Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 571.433874ms" May 16 03:44:02.423723 containerd[1487]: time="2025-05-16T03:44:02.423712729Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 16 03:44:02.425203 containerd[1487]: time="2025-05-16T03:44:02.425159973Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" May 16 03:44:02.915307 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 16 03:44:02.924838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 03:44:03.148573 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 03:44:03.167945 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 03:44:03.313547 kubelet[2142]: E0516 03:44:03.313045 2142 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 03:44:03.325107 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 03:44:03.326582 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 03:44:03.327847 systemd[1]: kubelet.service: Consumed 288ms CPU time, 110.4M memory peak. 
May 16 03:44:05.843734 containerd[1487]: time="2025-05-16T03:44:05.843397507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:44:05.846753 containerd[1487]: time="2025-05-16T03:44:05.846390981Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58142747" May 16 03:44:05.848056 containerd[1487]: time="2025-05-16T03:44:05.847961557Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:44:05.851638 containerd[1487]: time="2025-05-16T03:44:05.851571307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:44:05.854128 containerd[1487]: time="2025-05-16T03:44:05.852718328Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.427128098s" May 16 03:44:05.854128 containerd[1487]: time="2025-05-16T03:44:05.852754857Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" May 16 03:44:13.120697 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 03:44:13.121641 systemd[1]: kubelet.service: Consumed 288ms CPU time, 110.4M memory peak. May 16 03:44:13.134660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 03:44:13.196208 systemd[1]: Reload requested from client PID 2189 ('systemctl') (unit session-11.scope)... May 16 03:44:13.196264 systemd[1]: Reloading... May 16 03:44:13.321389 zram_generator::config[2235]: No configuration found. May 16 03:44:14.108563 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 03:44:14.235206 systemd[1]: Reloading finished in 1038 ms. May 16 03:44:14.276305 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 16 03:44:14.276419 systemd[1]: kubelet.service: Failed with result 'signal'. May 16 03:44:14.276757 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 03:44:14.276797 systemd[1]: kubelet.service: Consumed 118ms CPU time, 98.3M memory peak. May 16 03:44:14.279850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 03:44:14.461367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 03:44:14.475863 (kubelet)[2301]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 03:44:14.826224 kubelet[2301]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 03:44:14.826224 kubelet[2301]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. May 16 03:44:14.826224 kubelet[2301]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 03:44:14.826224 kubelet[2301]: I0516 03:44:14.826129 2301 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 03:44:15.304041 kubelet[2301]: I0516 03:44:15.303982 2301 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 16 03:44:15.304041 kubelet[2301]: I0516 03:44:15.304011 2301 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 03:44:15.304380 kubelet[2301]: I0516 03:44:15.304261 2301 server.go:956] "Client rotation is on, will bootstrap in background" May 16 03:44:15.341484 kubelet[2301]: I0516 03:44:15.339858 2301 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 03:44:15.342109 kubelet[2301]: E0516 03:44:15.342056 2301 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.24.4.212:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.212:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 16 03:44:15.353763 kubelet[2301]: I0516 03:44:15.353721 2301 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 16 03:44:15.358111 kubelet[2301]: I0516 03:44:15.358056 2301 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 03:44:15.358329 kubelet[2301]: I0516 03:44:15.358271 2301 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 03:44:15.358551 kubelet[2301]: I0516 03:44:15.358318 2301 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284-0-0-n-184e873f92.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 03:44:15.358551 kubelet[2301]: I0516 03:44:15.358537 2301 topology_manager.go:138] "Creating topology manager with none policy" May 16 03:44:15.358551 kubelet[2301]: I0516 03:44:15.358550 2301 container_manager_linux.go:303] "Creating device plugin manager" May 16 03:44:15.360052 kubelet[2301]: I0516 03:44:15.360002 2301 state_mem.go:36] "Initialized new in-memory state store" May 16 03:44:15.363391 kubelet[2301]: I0516 03:44:15.363328 2301 kubelet.go:480] "Attempting to sync node with API server" May 16 03:44:15.363542 kubelet[2301]: I0516 03:44:15.363459 2301 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 03:44:15.363542 kubelet[2301]: I0516 03:44:15.363488 2301 kubelet.go:386] "Adding apiserver pod source" May 16 03:44:15.365672 kubelet[2301]: I0516 03:44:15.365492 2301 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 03:44:15.385024 kubelet[2301]: E0516 03:44:15.384955 2301 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.24.4.212:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-184e873f92.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.212:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 16 03:44:15.388140 kubelet[2301]: I0516 03:44:15.387885 2301 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 16 03:44:15.388140 kubelet[2301]: E0516 03:44:15.387884 2301 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://172.24.4.212:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.212:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 16 03:44:15.388910 kubelet[2301]: I0516 03:44:15.388820 2301 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 16 03:44:15.391374 kubelet[2301]: W0516 03:44:15.390204 2301 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 16 03:44:15.397210 kubelet[2301]: I0516 03:44:15.397163 2301 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 03:44:15.397310 kubelet[2301]: I0516 03:44:15.397279 2301 server.go:1289] "Started kubelet" May 16 03:44:15.402044 kubelet[2301]: I0516 03:44:15.401983 2301 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 03:44:15.406382 kubelet[2301]: E0516 03:44:15.404226 2301 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.212:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.212:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284-0-0-n-184e873f92.novalocal.183fe514352bc830 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284-0-0-n-184e873f92.novalocal,UID:ci-4284-0-0-n-184e873f92.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284-0-0-n-184e873f92.novalocal,},FirstTimestamp:2025-05-16 03:44:15.39721016 +0000 UTC m=+0.911041739,LastTimestamp:2025-05-16 03:44:15.39721016 +0000 UTC m=+0.911041739,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284-0-0-n-184e873f92.novalocal,}" May 16 03:44:15.407365 kubelet[2301]: I0516 03:44:15.406817 2301 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 16 03:44:15.407616 kubelet[2301]: I0516 03:44:15.407602 2301 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 03:44:15.407874 kubelet[2301]: E0516 03:44:15.407853 2301 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-184e873f92.novalocal\" not found" May 16 03:44:15.411813 kubelet[2301]: I0516 03:44:15.408160 2301 server.go:317] "Adding debug handlers to kubelet server" May 16 03:44:15.413044 kubelet[2301]: E0516 03:44:15.412984 2301 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.212:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-184e873f92.novalocal?timeout=10s\": dial tcp 172.24.4.212:6443: connect: connection refused" interval="200ms" May 16 03:44:15.413111 kubelet[2301]: I0516 03:44:15.408932 2301 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 03:44:15.413179 kubelet[2301]: I0516 03:44:15.409049 2301 reconciler.go:26] "Reconciler: start to sync state" May 16 03:44:15.413247 kubelet[2301]: I0516 03:44:15.411300 2301 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 03:44:15.413949 kubelet[2301]: I0516 03:44:15.408195 2301 ratelimit.go:55] "Setting rate limiting for 
endpoint" service="podresources" qps=100 burstTokens=10 May 16 03:44:15.414589 kubelet[2301]: E0516 03:44:15.414568 2301 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.24.4.212:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.212:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 16 03:44:15.415921 kubelet[2301]: I0516 03:44:15.415866 2301 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 03:44:15.416917 kubelet[2301]: I0516 03:44:15.416882 2301 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 03:44:15.418731 kubelet[2301]: E0516 03:44:15.418712 2301 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 03:44:15.420451 kubelet[2301]: I0516 03:44:15.419539 2301 factory.go:223] Registration of the containerd container factory successfully May 16 03:44:15.420451 kubelet[2301]: I0516 03:44:15.419618 2301 factory.go:223] Registration of the systemd container factory successfully May 16 03:44:15.445372 kubelet[2301]: I0516 03:44:15.445313 2301 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 16 03:44:15.447237 kubelet[2301]: I0516 03:44:15.446938 2301 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 16 03:44:15.447237 kubelet[2301]: I0516 03:44:15.446959 2301 status_manager.go:230] "Starting to sync pod status with apiserver" May 16 03:44:15.447237 kubelet[2301]: I0516 03:44:15.446989 2301 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 16 03:44:15.447237 kubelet[2301]: I0516 03:44:15.446997 2301 kubelet.go:2436] "Starting kubelet main sync loop" May 16 03:44:15.447237 kubelet[2301]: E0516 03:44:15.447034 2301 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 03:44:15.449871 kubelet[2301]: E0516 03:44:15.449824 2301 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.24.4.212:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.212:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 16 03:44:15.450741 kubelet[2301]: I0516 03:44:15.450727 2301 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 03:44:15.450856 kubelet[2301]: I0516 03:44:15.450844 2301 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 03:44:15.451180 kubelet[2301]: I0516 03:44:15.450925 2301 state_mem.go:36] "Initialized new in-memory state store" May 16 03:44:15.456547 kubelet[2301]: I0516 03:44:15.456525 2301 policy_none.go:49] "None policy: Start" May 16 03:44:15.456701 kubelet[2301]: I0516 03:44:15.456688 2301 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 03:44:15.456814 kubelet[2301]: I0516 03:44:15.456803 2301 state_mem.go:35] "Initializing new in-memory state store" May 16 03:44:15.465447 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
May 16 03:44:15.474996 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 16 03:44:15.478691 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 16 03:44:15.489040 kubelet[2301]: E0516 03:44:15.488998 2301 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 16 03:44:15.489743 kubelet[2301]: I0516 03:44:15.489188 2301 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 03:44:15.489743 kubelet[2301]: I0516 03:44:15.489206 2301 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 03:44:15.489743 kubelet[2301]: I0516 03:44:15.489608 2301 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 03:44:15.491314 kubelet[2301]: E0516 03:44:15.491223 2301 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 16 03:44:15.491314 kubelet[2301]: E0516 03:44:15.491265 2301 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284-0-0-n-184e873f92.novalocal\" not found" May 16 03:44:15.573627 systemd[1]: Created slice kubepods-burstable-podb7c564d0c959bab54c9b805b11aed82b.slice - libcontainer container kubepods-burstable-podb7c564d0c959bab54c9b805b11aed82b.slice. May 16 03:44:15.592837 kubelet[2301]: I0516 03:44:15.592269 2301 kubelet_node_status.go:75] "Attempting to register node" node="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:15.593052 kubelet[2301]: E0516 03:44:15.592923 2301 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.212:6443/api/v1/nodes\": dial tcp 172.24.4.212:6443: connect: connection refused" node="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:15.595334 kubelet[2301]: E0516 03:44:15.595299 2301 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-184e873f92.novalocal\" not found" node="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:15.606836 systemd[1]: Created slice kubepods-burstable-pod478890543362d602d15e41667df094c0.slice - libcontainer container kubepods-burstable-pod478890543362d602d15e41667df094c0.slice. 
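The kubepods-burstable-pod&lt;uid&gt;.slice units created here follow the naming scheme of the kubelet's systemd cgroup driver: a kubepods parent slice, an optional QoS class segment, and the pod UID with dashes escaped to underscores. A small sketch of that mapping, as an illustrative re-derivation rather than kubelet code:

```python
# Illustrative sketch (not kubelet source) of the pod slice naming used by the
# systemd cgroup driver for the slices created in this log.
def pod_slice_name(pod_uid: str, qos_class: str = "") -> str:
    escaped_uid = pod_uid.replace("-", "_")       # systemd-escape the UID
    qos = f"-{qos_class}" if qos_class else ""    # guaranteed pods skip the QoS segment
    return f"kubepods{qos}-pod{escaped_uid}.slice"

# Matches the kube-proxy pod slice created later in this log:
print(pod_slice_name("53bbc41e-bd75-49bd-bb20-acd75baeaa8e", "besteffort"))
# kubepods-besteffort-pod53bbc41e_bd75_49bd_bb20_acd75baeaa8e.slice
```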
May 16 03:44:15.612093 kubelet[2301]: E0516 03:44:15.611788 2301 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-184e873f92.novalocal\" not found" node="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:15.613856 kubelet[2301]: I0516 03:44:15.613713 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/478890543362d602d15e41667df094c0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-n-184e873f92.novalocal\" (UID: \"478890543362d602d15e41667df094c0\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:15.613856 kubelet[2301]: E0516 03:44:15.613742 2301 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.212:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-184e873f92.novalocal?timeout=10s\": dial tcp 172.24.4.212:6443: connect: connection refused" interval="400ms" May 16 03:44:15.613856 kubelet[2301]: I0516 03:44:15.613794 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abb72c4ce5e8480e07d34b1d73547807-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal\" (UID: \"abb72c4ce5e8480e07d34b1d73547807\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:15.613856 kubelet[2301]: I0516 03:44:15.613848 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/abb72c4ce5e8480e07d34b1d73547807-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal\" (UID: \"abb72c4ce5e8480e07d34b1d73547807\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:15.614207 kubelet[2301]: I0516 03:44:15.613897 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abb72c4ce5e8480e07d34b1d73547807-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal\" (UID: \"abb72c4ce5e8480e07d34b1d73547807\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:15.614207 kubelet[2301]: I0516 03:44:15.613940 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/478890543362d602d15e41667df094c0-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-n-184e873f92.novalocal\" (UID: \"478890543362d602d15e41667df094c0\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:15.614207 kubelet[2301]: I0516 03:44:15.613984 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/abb72c4ce5e8480e07d34b1d73547807-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal\" (UID: \"abb72c4ce5e8480e07d34b1d73547807\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:15.614207 kubelet[2301]: I0516 03:44:15.614027 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/abb72c4ce5e8480e07d34b1d73547807-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal\" (UID: \"abb72c4ce5e8480e07d34b1d73547807\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:15.614505 kubelet[2301]: I0516 03:44:15.614073 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b7c564d0c959bab54c9b805b11aed82b-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-n-184e873f92.novalocal\" (UID: \"b7c564d0c959bab54c9b805b11aed82b\") " pod="kube-system/kube-scheduler-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:15.614505 kubelet[2301]: I0516 03:44:15.614113 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/478890543362d602d15e41667df094c0-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-n-184e873f92.novalocal\" (UID: \"478890543362d602d15e41667df094c0\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:15.625581 systemd[1]: Created slice kubepods-burstable-podabb72c4ce5e8480e07d34b1d73547807.slice - libcontainer container kubepods-burstable-podabb72c4ce5e8480e07d34b1d73547807.slice. May 16 03:44:15.630297 kubelet[2301]: E0516 03:44:15.629887 2301 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-184e873f92.novalocal\" not found" node="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:15.796483 kubelet[2301]: I0516 03:44:15.796410 2301 kubelet_node_status.go:75] "Attempting to register node" node="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:15.797276 kubelet[2301]: E0516 03:44:15.797173 2301 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.212:6443/api/v1/nodes\": dial tcp 172.24.4.212:6443: connect: connection refused" node="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:15.899300 containerd[1487]: time="2025-05-16T03:44:15.898064793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-n-184e873f92.novalocal,Uid:b7c564d0c959bab54c9b805b11aed82b,Namespace:kube-system,Attempt:0,}" May 16 03:44:15.914372 containerd[1487]: time="2025-05-16T03:44:15.914265618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-n-184e873f92.novalocal,Uid:478890543362d602d15e41667df094c0,Namespace:kube-system,Attempt:0,}" May 16 03:44:15.931558 containerd[1487]: time="2025-05-16T03:44:15.931477559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal,Uid:abb72c4ce5e8480e07d34b1d73547807,Namespace:kube-system,Attempt:0,}" May 16 03:44:15.987398 containerd[1487]: time="2025-05-16T03:44:15.979923739Z" level=info msg="connecting to shim de7395b3eff1594f1af5124720a3afc777223f601fce41c033f7bc537b9f2543" address="unix:///run/containerd/s/62e971ad0d9de6a5391adab092cfeec8f1eea058edb4b97ab00c7596b6ea02ba" namespace=k8s.io protocol=ttrpc version=3 May 16 03:44:16.000151 containerd[1487]: time="2025-05-16T03:44:16.000010502Z" level=info msg="connecting to shim 2a68f8639e173c9db324f49d8fdaa9ac5c7bd6f1909ea7491b5c99ef96ad63bc" address="unix:///run/containerd/s/0be708300cd8dfb6cca8f97a0b398040832e7deb636dac6e9532e89f55cc1b31" namespace=k8s.io protocol=ttrpc version=3 May 16 03:44:16.017732 kubelet[2301]: E0516 03:44:16.017684 2301 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.212:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-184e873f92.novalocal?timeout=10s\": dial tcp 172.24.4.212:6443: connect: connection refused" interval="800ms" May 16 03:44:16.050280 containerd[1487]: time="2025-05-16T03:44:16.050236311Z" level=info msg="connecting to shim a9e6df64601ab4384966d769d69b4bbfd7e40edb42eb7f193672428126fe4989" address="unix:///run/containerd/s/3709248eafee66868cc12c24bd240485c16ba22430cc4337b66212bcd75af748" namespace=k8s.io protocol=ttrpc version=3 May 16 03:44:16.056149 systemd[1]: Started cri-containerd-de7395b3eff1594f1af5124720a3afc777223f601fce41c033f7bc537b9f2543.scope - libcontainer container de7395b3eff1594f1af5124720a3afc777223f601fce41c033f7bc537b9f2543. May 16 03:44:16.096583 systemd[1]: Started cri-containerd-2a68f8639e173c9db324f49d8fdaa9ac5c7bd6f1909ea7491b5c99ef96ad63bc.scope - libcontainer container 2a68f8639e173c9db324f49d8fdaa9ac5c7bd6f1909ea7491b5c99ef96ad63bc. May 16 03:44:16.111375 systemd[1]: Started cri-containerd-a9e6df64601ab4384966d769d69b4bbfd7e40edb42eb7f193672428126fe4989.scope - libcontainer container a9e6df64601ab4384966d769d69b4bbfd7e40edb42eb7f193672428126fe4989. May 16 03:44:16.168386 containerd[1487]: time="2025-05-16T03:44:16.167856377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-n-184e873f92.novalocal,Uid:b7c564d0c959bab54c9b805b11aed82b,Namespace:kube-system,Attempt:0,} returns sandbox id \"de7395b3eff1594f1af5124720a3afc777223f601fce41c033f7bc537b9f2543\"" May 16 03:44:16.178364 containerd[1487]: time="2025-05-16T03:44:16.178037902Z" level=info msg="CreateContainer within sandbox \"de7395b3eff1594f1af5124720a3afc777223f601fce41c033f7bc537b9f2543\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 03:44:16.192578 containerd[1487]: time="2025-05-16T03:44:16.192539150Z" level=info msg="Container 891ba4ce480f7509e58fca1cbe268116d5fc572e6d74c28740cf1e0b25f2a270: CDI devices from CRI Config.CDIDevices: []" May 16 03:44:16.194128 containerd[1487]: time="2025-05-16T03:44:16.194098043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-n-184e873f92.novalocal,Uid:478890543362d602d15e41667df094c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a68f8639e173c9db324f49d8fdaa9ac5c7bd6f1909ea7491b5c99ef96ad63bc\"" May 16 03:44:16.199147 kubelet[2301]: I0516 03:44:16.199116 2301 kubelet_node_status.go:75] "Attempting to register node" node="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:16.200384 kubelet[2301]: E0516 03:44:16.200211 2301 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.212:6443/api/v1/nodes\": dial tcp 172.24.4.212:6443: connect: connection refused" node="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:16.203103 containerd[1487]: time="2025-05-16T03:44:16.203072174Z" level=info msg="CreateContainer within sandbox \"2a68f8639e173c9db324f49d8fdaa9ac5c7bd6f1909ea7491b5c99ef96ad63bc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 03:44:16.209002 containerd[1487]: time="2025-05-16T03:44:16.208869768Z" level=info msg="CreateContainer within sandbox \"de7395b3eff1594f1af5124720a3afc777223f601fce41c033f7bc537b9f2543\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"891ba4ce480f7509e58fca1cbe268116d5fc572e6d74c28740cf1e0b25f2a270\"" May 16 03:44:16.210423 containerd[1487]: 
time="2025-05-16T03:44:16.210358420Z" level=info msg="StartContainer for \"891ba4ce480f7509e58fca1cbe268116d5fc572e6d74c28740cf1e0b25f2a270\"" May 16 03:44:16.212115 containerd[1487]: time="2025-05-16T03:44:16.212089025Z" level=info msg="connecting to shim 891ba4ce480f7509e58fca1cbe268116d5fc572e6d74c28740cf1e0b25f2a270" address="unix:///run/containerd/s/62e971ad0d9de6a5391adab092cfeec8f1eea058edb4b97ab00c7596b6ea02ba" protocol=ttrpc version=3 May 16 03:44:16.215962 containerd[1487]: time="2025-05-16T03:44:16.215910923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal,Uid:abb72c4ce5e8480e07d34b1d73547807,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9e6df64601ab4384966d769d69b4bbfd7e40edb42eb7f193672428126fe4989\"" May 16 03:44:16.223824 containerd[1487]: time="2025-05-16T03:44:16.223764002Z" level=info msg="CreateContainer within sandbox \"a9e6df64601ab4384966d769d69b4bbfd7e40edb42eb7f193672428126fe4989\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 03:44:16.225308 containerd[1487]: time="2025-05-16T03:44:16.225260809Z" level=info msg="Container 1461e5179b8c2f3d7869b8491024e76119cba6b850737425f88abb1266d0c40f: CDI devices from CRI Config.CDIDevices: []" May 16 03:44:16.239776 containerd[1487]: time="2025-05-16T03:44:16.239690041Z" level=info msg="Container 30ea7bbc4ddd568f7a6d46208abb27805dfb65475ee595247acd4fcea491835d: CDI devices from CRI Config.CDIDevices: []" May 16 03:44:16.241810 systemd[1]: Started cri-containerd-891ba4ce480f7509e58fca1cbe268116d5fc572e6d74c28740cf1e0b25f2a270.scope - libcontainer container 891ba4ce480f7509e58fca1cbe268116d5fc572e6d74c28740cf1e0b25f2a270. May 16 03:44:16.247213 containerd[1487]: time="2025-05-16T03:44:16.247164369Z" level=info msg="CreateContainer within sandbox \"2a68f8639e173c9db324f49d8fdaa9ac5c7bd6f1909ea7491b5c99ef96ad63bc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1461e5179b8c2f3d7869b8491024e76119cba6b850737425f88abb1266d0c40f\"" May 16 03:44:16.248071 containerd[1487]: time="2025-05-16T03:44:16.248045592Z" level=info msg="StartContainer for \"1461e5179b8c2f3d7869b8491024e76119cba6b850737425f88abb1266d0c40f\"" May 16 03:44:16.253815 containerd[1487]: time="2025-05-16T03:44:16.252424515Z" level=info msg="connecting to shim 1461e5179b8c2f3d7869b8491024e76119cba6b850737425f88abb1266d0c40f" address="unix:///run/containerd/s/0be708300cd8dfb6cca8f97a0b398040832e7deb636dac6e9532e89f55cc1b31" protocol=ttrpc version=3 May 16 03:44:16.254786 containerd[1487]: time="2025-05-16T03:44:16.254469901Z" level=info msg="CreateContainer within sandbox \"a9e6df64601ab4384966d769d69b4bbfd7e40edb42eb7f193672428126fe4989\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"30ea7bbc4ddd568f7a6d46208abb27805dfb65475ee595247acd4fcea491835d\"" May 16 03:44:16.255963 containerd[1487]: time="2025-05-16T03:44:16.255674029Z" level=info msg="StartContainer for \"30ea7bbc4ddd568f7a6d46208abb27805dfb65475ee595247acd4fcea491835d\"" May 16 03:44:16.259949 containerd[1487]: time="2025-05-16T03:44:16.259462795Z" level=info msg="connecting to shim 30ea7bbc4ddd568f7a6d46208abb27805dfb65475ee595247acd4fcea491835d" address="unix:///run/containerd/s/3709248eafee66868cc12c24bd240485c16ba22430cc4337b66212bcd75af748" protocol=ttrpc version=3 May 16 03:44:16.282803 systemd[1]: Started cri-containerd-30ea7bbc4ddd568f7a6d46208abb27805dfb65475ee595247acd4fcea491835d.scope - libcontainer container 
30ea7bbc4ddd568f7a6d46208abb27805dfb65475ee595247acd4fcea491835d. May 16 03:44:16.291779 systemd[1]: Started cri-containerd-1461e5179b8c2f3d7869b8491024e76119cba6b850737425f88abb1266d0c40f.scope - libcontainer container 1461e5179b8c2f3d7869b8491024e76119cba6b850737425f88abb1266d0c40f. May 16 03:44:16.329379 containerd[1487]: time="2025-05-16T03:44:16.328081599Z" level=info msg="StartContainer for \"891ba4ce480f7509e58fca1cbe268116d5fc572e6d74c28740cf1e0b25f2a270\" returns successfully" May 16 03:44:16.404367 containerd[1487]: time="2025-05-16T03:44:16.404102627Z" level=info msg="StartContainer for \"30ea7bbc4ddd568f7a6d46208abb27805dfb65475ee595247acd4fcea491835d\" returns successfully" May 16 03:44:16.405206 containerd[1487]: time="2025-05-16T03:44:16.405178144Z" level=info msg="StartContainer for \"1461e5179b8c2f3d7869b8491024e76119cba6b850737425f88abb1266d0c40f\" returns successfully" May 16 03:44:16.462651 kubelet[2301]: E0516 03:44:16.462493 2301 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-184e873f92.novalocal\" not found" node="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:16.472371 kubelet[2301]: E0516 03:44:16.471231 2301 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-184e873f92.novalocal\" not found" node="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:16.473137 kubelet[2301]: E0516 03:44:16.473111 2301 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-184e873f92.novalocal\" not found" node="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:17.006051 kubelet[2301]: I0516 03:44:17.006006 2301 kubelet_node_status.go:75] "Attempting to register node" node="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:17.475649 kubelet[2301]: E0516 03:44:17.475613 2301 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-184e873f92.novalocal\" not found" node="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:17.477444 kubelet[2301]: E0516 03:44:17.476622 2301 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-184e873f92.novalocal\" not found" node="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:18.016041 kubelet[2301]: E0516 03:44:18.015815 2301 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-184e873f92.novalocal\" not found" node="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:19.108444 kubelet[2301]: E0516 03:44:19.108205 2301 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4284-0-0-n-184e873f92.novalocal.183fe514352bc830 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284-0-0-n-184e873f92.novalocal,UID:ci-4284-0-0-n-184e873f92.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284-0-0-n-184e873f92.novalocal,},FirstTimestamp:2025-05-16 03:44:15.39721016 +0000 UTC m=+0.911041739,LastTimestamp:2025-05-16 03:44:15.39721016 +0000 UTC m=+0.911041739,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284-0-0-n-184e873f92.novalocal,}" May 16 03:44:19.109272 kubelet[2301]: I0516 03:44:19.108764 2301 kubelet_node_status.go:78] "Successfully registered node" node="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:19.109272 kubelet[2301]: E0516 03:44:19.108788 2301 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4284-0-0-n-184e873f92.novalocal\": node \"ci-4284-0-0-n-184e873f92.novalocal\" not found" May 16 03:44:19.209059 kubelet[2301]: I0516 03:44:19.208984 2301 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:19.389085 kubelet[2301]: I0516 03:44:19.388746 2301 apiserver.go:52] "Watching apiserver" May 16 03:44:19.414711 kubelet[2301]: I0516 03:44:19.414540 2301 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 03:44:19.470712 kubelet[2301]: E0516 03:44:19.470597 2301 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:19.470712 kubelet[2301]: I0516 03:44:19.470671 2301 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:19.485982 kubelet[2301]: E0516 03:44:19.485233 2301 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284-0-0-n-184e873f92.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:19.485982 kubelet[2301]: I0516 03:44:19.485300 2301 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:19.492382 kubelet[2301]: I0516 03:44:19.490240 2301 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:19.497735 kubelet[2301]: E0516 03:44:19.497676 2301 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4284-0-0-n-184e873f92.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:19.502568 kubelet[2301]: E0516 03:44:19.502513 2301 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284-0-0-n-184e873f92.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:21.514870 systemd[1]: Reload requested from client PID 2571 ('systemctl') (unit session-11.scope)... May 16 03:44:21.515974 systemd[1]: Reloading... May 16 03:44:21.680380 zram_generator::config[2618]: No configuration found. May 16 03:44:21.839562 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 03:44:21.987140 systemd[1]: Reloading finished in 469 ms. May 16 03:44:22.022676 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
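While the API server was unreachable, the "Failed to ensure lease exists, will retry" entries above doubled their retry interval on each attempt (200ms, then 400ms, then 800ms). A minimal sketch of that doubling; the cap value is an assumption chosen for the sketch, not something taken from this log:

```python
# Illustrative sketch of the doubling retry interval seen in the lease errors
# above (200ms -> 400ms -> 800ms). The 7s cap is an assumed value for this
# sketch only.
def lease_retry_intervals(base_ms: int = 200, attempts: int = 4, cap_ms: int = 7000):
    interval = base_ms
    for _ in range(attempts):
        yield min(interval, cap_ms)
        interval *= 2

print(list(lease_retry_intervals()))  # [200, 400, 800, 1600]
```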
May 16 03:44:22.033889 systemd[1]: kubelet.service: Deactivated successfully. May 16 03:44:22.034198 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 03:44:22.034331 systemd[1]: kubelet.service: Consumed 1.275s CPU time, 131.3M memory peak. May 16 03:44:22.037013 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 03:44:22.325634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 03:44:22.347432 (kubelet)[2681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 03:44:22.426916 kubelet[2681]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 03:44:22.426916 kubelet[2681]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 03:44:22.426916 kubelet[2681]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 03:44:22.426916 kubelet[2681]: I0516 03:44:22.425807 2681 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 03:44:22.437370 kubelet[2681]: I0516 03:44:22.436410 2681 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 16 03:44:22.437370 kubelet[2681]: I0516 03:44:22.436437 2681 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 03:44:22.437370 kubelet[2681]: I0516 03:44:22.436705 2681 server.go:956] "Client rotation is on, will bootstrap in background" May 16 03:44:22.438621 kubelet[2681]: I0516 03:44:22.438580 2681 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 16 03:44:22.441877 kubelet[2681]: I0516 03:44:22.441835 2681 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 03:44:22.448270 kubelet[2681]: I0516 03:44:22.448230 2681 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 16 03:44:22.454571 kubelet[2681]: I0516 03:44:22.454529 2681 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 03:44:22.454779 kubelet[2681]: I0516 03:44:22.454745 2681 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 03:44:22.455067 kubelet[2681]: I0516 03:44:22.454777 2681 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284-0-0-n-184e873f92.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 03:44:22.455172 kubelet[2681]: I0516 03:44:22.455073 2681 topology_manager.go:138] "Creating topology manager with none policy" May 16 03:44:22.455172 kubelet[2681]: I0516 03:44:22.455086 2681 container_manager_linux.go:303] "Creating device plugin manager" May 16 03:44:22.455172 kubelet[2681]: I0516 03:44:22.455125 2681 state_mem.go:36] "Initialized new in-memory state store" May 16 03:44:22.455304 kubelet[2681]: I0516 03:44:22.455283 2681 kubelet.go:480] "Attempting to sync node with API server" May 16 03:44:22.455304 kubelet[2681]: I0516 03:44:22.455305 2681 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 03:44:22.455458 kubelet[2681]: I0516 03:44:22.455326 2681 kubelet.go:386] "Adding apiserver pod source" May 16 03:44:22.456689 kubelet[2681]: I0516 03:44:22.456475 2681 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 03:44:22.465222 kubelet[2681]: I0516 03:44:22.464857 2681 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 16 03:44:22.465786 kubelet[2681]: I0516 03:44:22.465667 2681 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 16 03:44:22.480872 kubelet[2681]: I0516 03:44:22.480827 2681 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 03:44:22.480872 kubelet[2681]: I0516 03:44:22.480888 2681 server.go:1289] "Started kubelet" May 16 03:44:22.489390 kubelet[2681]: I0516 03:44:22.488193 2681 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 16 03:44:22.489390 
kubelet[2681]: I0516 03:44:22.489241 2681 server.go:317] "Adding debug handlers to kubelet server" May 16 03:44:22.495925 kubelet[2681]: I0516 03:44:22.495851 2681 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 03:44:22.496381 kubelet[2681]: I0516 03:44:22.496216 2681 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 03:44:22.497428 kubelet[2681]: I0516 03:44:22.497073 2681 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 03:44:22.498929 kubelet[2681]: I0516 03:44:22.498288 2681 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 03:44:22.501406 kubelet[2681]: I0516 03:44:22.501332 2681 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 03:44:22.502210 kubelet[2681]: I0516 03:44:22.502180 2681 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 03:44:22.502402 kubelet[2681]: I0516 03:44:22.502381 2681 reconciler.go:26] "Reconciler: start to sync state" May 16 03:44:22.504191 kubelet[2681]: I0516 03:44:22.504167 2681 factory.go:223] Registration of the systemd container factory successfully May 16 03:44:22.504451 kubelet[2681]: I0516 03:44:22.504428 2681 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 03:44:22.512381 kubelet[2681]: E0516 03:44:22.512309 2681 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 03:44:22.512563 kubelet[2681]: I0516 03:44:22.512539 2681 factory.go:223] Registration of the containerd container factory successfully May 16 03:44:22.563174 kubelet[2681]: I0516 03:44:22.561620 2681 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 16 03:44:22.567607 kubelet[2681]: I0516 03:44:22.566057 2681 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 16 03:44:22.567607 kubelet[2681]: I0516 03:44:22.566085 2681 status_manager.go:230] "Starting to sync pod status with apiserver" May 16 03:44:22.567607 kubelet[2681]: I0516 03:44:22.566111 2681 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
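The podresources endpoint above is rate limited with qps=100 and burstTokens=10. As a rough illustration of what those two parameters mean, here is a generic token-bucket sketch; it is an assumption for explanation only, not the limiter the kubelet actually uses:

```python
# Generic token-bucket sketch (illustrative assumption, not the kubelet's own
# limiter) showing the qps=100 / burstTokens=10 semantics logged above.
import time

class TokenBucket:
    def __init__(self, qps: float = 100.0, burst: int = 10):
        self.rate, self.capacity = qps, burst
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill at `qps` tokens per second, never above the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```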
May 16 03:44:22.567607 kubelet[2681]: I0516 03:44:22.566120 2681 kubelet.go:2436] "Starting kubelet main sync loop" May 16 03:44:22.567607 kubelet[2681]: E0516 03:44:22.566171 2681 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 03:44:22.616440 kubelet[2681]: I0516 03:44:22.616250 2681 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 03:44:22.618990 kubelet[2681]: I0516 03:44:22.617736 2681 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 03:44:22.618990 kubelet[2681]: I0516 03:44:22.617783 2681 state_mem.go:36] "Initialized new in-memory state store" May 16 03:44:22.618990 kubelet[2681]: I0516 03:44:22.617954 2681 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 16 03:44:22.618990 kubelet[2681]: I0516 03:44:22.617967 2681 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 16 03:44:22.618990 kubelet[2681]: I0516 03:44:22.617987 2681 policy_none.go:49] "None policy: Start" May 16 03:44:22.618990 kubelet[2681]: I0516 03:44:22.618007 2681 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 03:44:22.618990 kubelet[2681]: I0516 03:44:22.618021 2681 state_mem.go:35] "Initializing new in-memory state store" May 16 03:44:22.618990 kubelet[2681]: I0516 03:44:22.618151 2681 state_mem.go:75] "Updated machine memory state" May 16 03:44:22.626697 kubelet[2681]: E0516 03:44:22.626652 2681 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 16 03:44:22.626858 kubelet[2681]: I0516 03:44:22.626835 2681 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 03:44:22.626906 kubelet[2681]: I0516 03:44:22.626856 2681 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 03:44:22.629710 kubelet[2681]: I0516 03:44:22.629675 2681 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 03:44:22.637414 kubelet[2681]: E0516 03:44:22.636276 2681 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 16 03:44:22.668149 kubelet[2681]: I0516 03:44:22.667523 2681 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:22.668672 kubelet[2681]: I0516 03:44:22.667971 2681 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:22.668952 kubelet[2681]: I0516 03:44:22.668110 2681 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:22.677074 kubelet[2681]: I0516 03:44:22.677042 2681 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 16 03:44:22.680395 kubelet[2681]: I0516 03:44:22.680367 2681 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 16 03:44:22.682362 kubelet[2681]: I0516 03:44:22.682311 2681 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 16 03:44:22.744681 kubelet[2681]: I0516 03:44:22.744646 2681 kubelet_node_status.go:75] "Attempting to register node" node="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:22.760866 kubelet[2681]: I0516 03:44:22.760815 2681 kubelet_node_status.go:124] "Node was previously registered" node="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:22.761058 kubelet[2681]: I0516 03:44:22.760912 2681 kubelet_node_status.go:78] "Successfully registered node" node="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:22.804099 kubelet[2681]: I0516 03:44:22.803835 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b7c564d0c959bab54c9b805b11aed82b-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-n-184e873f92.novalocal\" (UID: \"b7c564d0c959bab54c9b805b11aed82b\") " pod="kube-system/kube-scheduler-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:22.804099 kubelet[2681]: I0516 03:44:22.803880 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abb72c4ce5e8480e07d34b1d73547807-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal\" (UID: \"abb72c4ce5e8480e07d34b1d73547807\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:22.804099 kubelet[2681]: I0516 03:44:22.803910 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/478890543362d602d15e41667df094c0-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-n-184e873f92.novalocal\" (UID: \"478890543362d602d15e41667df094c0\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:22.804099 kubelet[2681]: I0516 03:44:22.803932 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/478890543362d602d15e41667df094c0-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-n-184e873f92.novalocal\" (UID: \"478890543362d602d15e41667df094c0\") " 
pod="kube-system/kube-apiserver-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:22.804392 kubelet[2681]: I0516 03:44:22.803953 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/478890543362d602d15e41667df094c0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-n-184e873f92.novalocal\" (UID: \"478890543362d602d15e41667df094c0\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:22.804392 kubelet[2681]: I0516 03:44:22.803971 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abb72c4ce5e8480e07d34b1d73547807-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal\" (UID: \"abb72c4ce5e8480e07d34b1d73547807\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:22.804392 kubelet[2681]: I0516 03:44:22.803989 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/abb72c4ce5e8480e07d34b1d73547807-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal\" (UID: \"abb72c4ce5e8480e07d34b1d73547807\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:22.804392 kubelet[2681]: I0516 03:44:22.804007 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abb72c4ce5e8480e07d34b1d73547807-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal\" (UID: \"abb72c4ce5e8480e07d34b1d73547807\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:22.805824 kubelet[2681]: I0516 03:44:22.805043 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/abb72c4ce5e8480e07d34b1d73547807-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal\" (UID: \"abb72c4ce5e8480e07d34b1d73547807\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:23.462603 kubelet[2681]: I0516 03:44:23.462527 2681 apiserver.go:52] "Watching apiserver" May 16 03:44:23.502552 kubelet[2681]: I0516 03:44:23.502449 2681 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 03:44:23.603537 kubelet[2681]: I0516 03:44:23.602518 2681 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:23.603537 kubelet[2681]: I0516 03:44:23.603236 2681 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:23.622937 kubelet[2681]: I0516 03:44:23.622742 2681 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 16 03:44:23.623122 kubelet[2681]: E0516 03:44:23.623077 2681 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284-0-0-n-184e873f92.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:23.628444 kubelet[2681]: I0516 03:44:23.628397 2681 
warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 16 03:44:23.628536 kubelet[2681]: E0516 03:44:23.628512 2681 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4284-0-0-n-184e873f92.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4284-0-0-n-184e873f92.novalocal" May 16 03:44:23.671351 kubelet[2681]: I0516 03:44:23.671181 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284-0-0-n-184e873f92.novalocal" podStartSLOduration=1.671164463 podStartE2EDuration="1.671164463s" podCreationTimestamp="2025-05-16 03:44:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 03:44:23.670889853 +0000 UTC m=+1.314388721" watchObservedRunningTime="2025-05-16 03:44:23.671164463 +0000 UTC m=+1.314663331" May 16 03:44:23.698116 kubelet[2681]: I0516 03:44:23.697973 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284-0-0-n-184e873f92.novalocal" podStartSLOduration=1.697956342 podStartE2EDuration="1.697956342s" podCreationTimestamp="2025-05-16 03:44:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 03:44:23.696853101 +0000 UTC m=+1.340351989" watchObservedRunningTime="2025-05-16 03:44:23.697956342 +0000 UTC m=+1.341455200" May 16 03:44:23.699238 kubelet[2681]: I0516 03:44:23.698576 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-184e873f92.novalocal" podStartSLOduration=1.6985675869999999 podStartE2EDuration="1.698567587s" podCreationTimestamp="2025-05-16 03:44:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 03:44:23.684777804 +0000 UTC m=+1.328276682" watchObservedRunningTime="2025-05-16 03:44:23.698567587 +0000 UTC m=+1.342066445" May 16 03:44:27.353725 kubelet[2681]: I0516 03:44:27.351373 2681 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 03:44:27.362021 kubelet[2681]: I0516 03:44:27.354700 2681 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 16 03:44:27.362240 containerd[1487]: time="2025-05-16T03:44:27.352992484Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 16 03:44:28.332641 systemd[1]: Created slice kubepods-besteffort-pod53bbc41e_bd75_49bd_bb20_acd75baeaa8e.slice - libcontainer container kubepods-besteffort-pod53bbc41e_bd75_49bd_bb20_acd75baeaa8e.slice. 
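The pod_startup_latency_tracker entries above report podStartSLOduration as the gap between podCreationTimestamp and watchObservedRunningTime; the pulling timestamps are zero here because no image pull was needed for the static pods. Checking the kube-scheduler numbers with a quick arithmetic sketch (values copied from the log, truncated to microseconds):

```python
# Arithmetic check of the podStartSLOduration reported above for the
# kube-scheduler pod: watchObservedRunningTime minus podCreationTimestamp.
from datetime import datetime, timezone

created  = datetime(2025, 5, 16, 3, 44, 22, 0,      tzinfo=timezone.utc)
observed = datetime(2025, 5, 16, 3, 44, 23, 671164, tzinfo=timezone.utc)
print((observed - created).total_seconds())  # ~1.671164 s, matching podStartSLOduration
```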
May 16 03:44:28.346855 kubelet[2681]: I0516 03:44:28.346622 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/53bbc41e-bd75-49bd-bb20-acd75baeaa8e-kube-proxy\") pod \"kube-proxy-qn52c\" (UID: \"53bbc41e-bd75-49bd-bb20-acd75baeaa8e\") " pod="kube-system/kube-proxy-qn52c" May 16 03:44:28.346855 kubelet[2681]: I0516 03:44:28.346711 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53bbc41e-bd75-49bd-bb20-acd75baeaa8e-xtables-lock\") pod \"kube-proxy-qn52c\" (UID: \"53bbc41e-bd75-49bd-bb20-acd75baeaa8e\") " pod="kube-system/kube-proxy-qn52c" May 16 03:44:28.346855 kubelet[2681]: I0516 03:44:28.346746 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53bbc41e-bd75-49bd-bb20-acd75baeaa8e-lib-modules\") pod \"kube-proxy-qn52c\" (UID: \"53bbc41e-bd75-49bd-bb20-acd75baeaa8e\") " pod="kube-system/kube-proxy-qn52c" May 16 03:44:28.346855 kubelet[2681]: I0516 03:44:28.346769 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz2cg\" (UniqueName: \"kubernetes.io/projected/53bbc41e-bd75-49bd-bb20-acd75baeaa8e-kube-api-access-rz2cg\") pod \"kube-proxy-qn52c\" (UID: \"53bbc41e-bd75-49bd-bb20-acd75baeaa8e\") " pod="kube-system/kube-proxy-qn52c" May 16 03:44:28.526868 systemd[1]: Created slice kubepods-besteffort-pod22a7aca1_b55b_4f4d_927f_2bd3c8019d92.slice - libcontainer container kubepods-besteffort-pod22a7aca1_b55b_4f4d_927f_2bd3c8019d92.slice. May 16 03:44:28.547678 kubelet[2681]: I0516 03:44:28.547632 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l4pw\" (UniqueName: \"kubernetes.io/projected/22a7aca1-b55b-4f4d-927f-2bd3c8019d92-kube-api-access-9l4pw\") pod \"tigera-operator-844669ff44-5ktlh\" (UID: \"22a7aca1-b55b-4f4d-927f-2bd3c8019d92\") " pod="tigera-operator/tigera-operator-844669ff44-5ktlh" May 16 03:44:28.548180 kubelet[2681]: I0516 03:44:28.548121 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/22a7aca1-b55b-4f4d-927f-2bd3c8019d92-var-lib-calico\") pod \"tigera-operator-844669ff44-5ktlh\" (UID: \"22a7aca1-b55b-4f4d-927f-2bd3c8019d92\") " pod="tigera-operator/tigera-operator-844669ff44-5ktlh" May 16 03:44:28.643810 containerd[1487]: time="2025-05-16T03:44:28.642518768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qn52c,Uid:53bbc41e-bd75-49bd-bb20-acd75baeaa8e,Namespace:kube-system,Attempt:0,}" May 16 03:44:28.712874 containerd[1487]: time="2025-05-16T03:44:28.712781730Z" level=info msg="connecting to shim 5d216c2f3da8553b3f7d67c6db2d0801d2ed9a1de35f750eb119bba6110cf08b" address="unix:///run/containerd/s/c0c931a25a582fa1c8c6bef74080c8a160e5fbd91ea0f2ee11e24004f0cd02b6" namespace=k8s.io protocol=ttrpc version=3 May 16 03:44:28.747218 systemd[1]: Started cri-containerd-5d216c2f3da8553b3f7d67c6db2d0801d2ed9a1de35f750eb119bba6110cf08b.scope - libcontainer container 5d216c2f3da8553b3f7d67c6db2d0801d2ed9a1de35f750eb119bba6110cf08b. 
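The two "Created slice kubepods-besteffort-pod….slice" entries (for the kube-proxy and tigera-operator pods) show how the kubelet's systemd cgroup driver names a BestEffort pod's slice: the QoS class prefix plus the pod UID with its dashes escaped to underscores for systemd unit naming. A small Go sketch reproduces the mapping; the helper name is illustrative, not kubelet code.

    package main

    import (
        "fmt"
        "strings"
    )

    // bestEffortPodSlice is an illustrative helper, not a kubelet function: it derives
    // the kubepods-besteffort-pod<uid>.slice unit name seen in the systemd entries above.
    func bestEffortPodSlice(podUID string) string {
        return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
    }

    func main() {
        // Matches "Created slice kubepods-besteffort-pod53bbc41e_bd75_49bd_bb20_acd75baeaa8e.slice"
        fmt.Println(bestEffortPodSlice("53bbc41e-bd75-49bd-bb20-acd75baeaa8e"))
        // Matches "Created slice kubepods-besteffort-pod22a7aca1_b55b_4f4d_927f_2bd3c8019d92.slice"
        fmt.Println(bestEffortPodSlice("22a7aca1-b55b-4f4d-927f-2bd3c8019d92"))
    }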
May 16 03:44:28.774183 containerd[1487]: time="2025-05-16T03:44:28.774145544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qn52c,Uid:53bbc41e-bd75-49bd-bb20-acd75baeaa8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d216c2f3da8553b3f7d67c6db2d0801d2ed9a1de35f750eb119bba6110cf08b\"" May 16 03:44:28.785329 containerd[1487]: time="2025-05-16T03:44:28.785102042Z" level=info msg="CreateContainer within sandbox \"5d216c2f3da8553b3f7d67c6db2d0801d2ed9a1de35f750eb119bba6110cf08b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 03:44:28.800752 containerd[1487]: time="2025-05-16T03:44:28.798891185Z" level=info msg="Container 2258417bca2adab894bd83902794bd57ccbfdc4e088bff42ecc8487e4c429066: CDI devices from CRI Config.CDIDevices: []" May 16 03:44:28.814196 containerd[1487]: time="2025-05-16T03:44:28.814094202Z" level=info msg="CreateContainer within sandbox \"5d216c2f3da8553b3f7d67c6db2d0801d2ed9a1de35f750eb119bba6110cf08b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2258417bca2adab894bd83902794bd57ccbfdc4e088bff42ecc8487e4c429066\"" May 16 03:44:28.815924 containerd[1487]: time="2025-05-16T03:44:28.815722849Z" level=info msg="StartContainer for \"2258417bca2adab894bd83902794bd57ccbfdc4e088bff42ecc8487e4c429066\"" May 16 03:44:28.817821 containerd[1487]: time="2025-05-16T03:44:28.817795589Z" level=info msg="connecting to shim 2258417bca2adab894bd83902794bd57ccbfdc4e088bff42ecc8487e4c429066" address="unix:///run/containerd/s/c0c931a25a582fa1c8c6bef74080c8a160e5fbd91ea0f2ee11e24004f0cd02b6" protocol=ttrpc version=3 May 16 03:44:28.830713 containerd[1487]: time="2025-05-16T03:44:28.830648699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-5ktlh,Uid:22a7aca1-b55b-4f4d-927f-2bd3c8019d92,Namespace:tigera-operator,Attempt:0,}" May 16 03:44:28.838580 systemd[1]: Started cri-containerd-2258417bca2adab894bd83902794bd57ccbfdc4e088bff42ecc8487e4c429066.scope - libcontainer container 2258417bca2adab894bd83902794bd57ccbfdc4e088bff42ecc8487e4c429066. May 16 03:44:28.869141 containerd[1487]: time="2025-05-16T03:44:28.869094861Z" level=info msg="connecting to shim ec7d3fabb546021eb28f2f1fc106aa6a044e40029e7cf4076eff11f94c182740" address="unix:///run/containerd/s/b98a618e1a1dca7d57ca85e540e7bbad8ce2958a4c4e7d0d91dfae9bbab58415" namespace=k8s.io protocol=ttrpc version=3 May 16 03:44:28.910854 systemd[1]: Started cri-containerd-ec7d3fabb546021eb28f2f1fc106aa6a044e40029e7cf4076eff11f94c182740.scope - libcontainer container ec7d3fabb546021eb28f2f1fc106aa6a044e40029e7cf4076eff11f94c182740. 
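The containerd entries above trace the CRI lifecycle the kubelet drives for kube-proxy-qn52c: RunPodSandbox returns a sandbox ID, CreateContainer inside that sandbox returns a container ID, and StartContainer launches it, with containerd connecting to the shim over its ttrpc socket at each step. A hedged Go sketch of those three calls follows; the socket path and image reference are assumptions, while the pod name, namespace, and UID come from the log.

    // Sketch of the CRI calls behind the containerd entries above, not kubelet source.
    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-proxy-qn52c",
                Namespace: "kube-system",
                Uid:       "53bbc41e-bd75-49bd-bb20-acd75baeaa8e",
            },
        }

        // "RunPodSandbox ... returns sandbox id ..."
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }

        // "CreateContainer within sandbox ... returns container id ..."
        ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId:  sb.PodSandboxId,
            SandboxConfig: sandboxCfg,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
                // Image reference is an assumption; the log does not record the tag.
                Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.32.0"},
            },
        })
        if err != nil {
            log.Fatal(err)
        }

        // "StartContainer for ... returns successfully"
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
            log.Fatal(err)
        }
    }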
May 16 03:44:28.916753 containerd[1487]: time="2025-05-16T03:44:28.916404900Z" level=info msg="StartContainer for \"2258417bca2adab894bd83902794bd57ccbfdc4e088bff42ecc8487e4c429066\" returns successfully" May 16 03:44:28.981106 containerd[1487]: time="2025-05-16T03:44:28.980804691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-5ktlh,Uid:22a7aca1-b55b-4f4d-927f-2bd3c8019d92,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ec7d3fabb546021eb28f2f1fc106aa6a044e40029e7cf4076eff11f94c182740\"" May 16 03:44:28.983613 containerd[1487]: time="2025-05-16T03:44:28.983569033Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 16 03:44:29.648053 kubelet[2681]: I0516 03:44:29.647804 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qn52c" podStartSLOduration=1.647767596 podStartE2EDuration="1.647767596s" podCreationTimestamp="2025-05-16 03:44:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 03:44:29.647169392 +0000 UTC m=+7.290668300" watchObservedRunningTime="2025-05-16 03:44:29.647767596 +0000 UTC m=+7.291266504" May 16 03:44:30.922525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2135172126.mount: Deactivated successfully. May 16 03:44:31.651825 containerd[1487]: time="2025-05-16T03:44:31.651777736Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:44:31.653367 containerd[1487]: time="2025-05-16T03:44:31.653298195Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 16 03:44:31.654821 containerd[1487]: time="2025-05-16T03:44:31.654772793Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:44:31.657606 containerd[1487]: time="2025-05-16T03:44:31.657542981Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:44:31.659013 containerd[1487]: time="2025-05-16T03:44:31.658967281Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 2.675356537s" May 16 03:44:31.659013 containerd[1487]: time="2025-05-16T03:44:31.659003653Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 16 03:44:31.666036 containerd[1487]: time="2025-05-16T03:44:31.665988948Z" level=info msg="CreateContainer within sandbox \"ec7d3fabb546021eb28f2f1fc106aa6a044e40029e7cf4076eff11f94c182740\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 16 03:44:31.679397 containerd[1487]: time="2025-05-16T03:44:31.678779968Z" level=info msg="Container fcf9dd27accd17bb045bbdce2a0e291c5576ce8f83fad4f452a19a9551bda4a6: CDI devices from CRI Config.CDIDevices: []" May 16 03:44:31.689290 containerd[1487]: time="2025-05-16T03:44:31.689253474Z" level=info 
msg="CreateContainer within sandbox \"ec7d3fabb546021eb28f2f1fc106aa6a044e40029e7cf4076eff11f94c182740\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fcf9dd27accd17bb045bbdce2a0e291c5576ce8f83fad4f452a19a9551bda4a6\"" May 16 03:44:31.690167 containerd[1487]: time="2025-05-16T03:44:31.690144183Z" level=info msg="StartContainer for \"fcf9dd27accd17bb045bbdce2a0e291c5576ce8f83fad4f452a19a9551bda4a6\"" May 16 03:44:31.691192 containerd[1487]: time="2025-05-16T03:44:31.691152902Z" level=info msg="connecting to shim fcf9dd27accd17bb045bbdce2a0e291c5576ce8f83fad4f452a19a9551bda4a6" address="unix:///run/containerd/s/b98a618e1a1dca7d57ca85e540e7bbad8ce2958a4c4e7d0d91dfae9bbab58415" protocol=ttrpc version=3 May 16 03:44:31.718481 systemd[1]: Started cri-containerd-fcf9dd27accd17bb045bbdce2a0e291c5576ce8f83fad4f452a19a9551bda4a6.scope - libcontainer container fcf9dd27accd17bb045bbdce2a0e291c5576ce8f83fad4f452a19a9551bda4a6. May 16 03:44:31.762818 containerd[1487]: time="2025-05-16T03:44:31.762765108Z" level=info msg="StartContainer for \"fcf9dd27accd17bb045bbdce2a0e291c5576ce8f83fad4f452a19a9551bda4a6\" returns successfully" May 16 03:44:33.328169 kubelet[2681]: I0516 03:44:33.327765 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-5ktlh" podStartSLOduration=2.650026557 podStartE2EDuration="5.327747443s" podCreationTimestamp="2025-05-16 03:44:28 +0000 UTC" firstStartedPulling="2025-05-16 03:44:28.982169247 +0000 UTC m=+6.625668105" lastFinishedPulling="2025-05-16 03:44:31.659890133 +0000 UTC m=+9.303388991" observedRunningTime="2025-05-16 03:44:32.841747713 +0000 UTC m=+10.485246621" watchObservedRunningTime="2025-05-16 03:44:33.327747443 +0000 UTC m=+10.971246301" May 16 03:44:38.909817 sudo[1761]: pam_unix(sudo:session): session closed for user root May 16 03:44:39.185056 sshd[1760]: Connection closed by 172.24.4.1 port 50270 May 16 03:44:39.184860 sshd-session[1757]: pam_unix(sshd:session): session closed for user core May 16 03:44:39.193376 systemd-logind[1465]: Session 11 logged out. Waiting for processes to exit. May 16 03:44:39.193972 systemd[1]: sshd@8-172.24.4.212:22-172.24.4.1:50270.service: Deactivated successfully. May 16 03:44:39.198273 systemd[1]: session-11.scope: Deactivated successfully. May 16 03:44:39.200534 systemd[1]: session-11.scope: Consumed 10.804s CPU time, 230M memory peak. May 16 03:44:39.203043 systemd-logind[1465]: Removed session 11. May 16 03:44:43.091100 systemd[1]: Created slice kubepods-besteffort-pod3eaa1d31_68a8_4ada_af76_19a3cc933c70.slice - libcontainer container kubepods-besteffort-pod3eaa1d31_68a8_4ada_af76_19a3cc933c70.slice. 
May 16 03:44:43.150992 kubelet[2681]: I0516 03:44:43.150863 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3eaa1d31-68a8-4ada-af76-19a3cc933c70-typha-certs\") pod \"calico-typha-66c6448fc4-rxzgn\" (UID: \"3eaa1d31-68a8-4ada-af76-19a3cc933c70\") " pod="calico-system/calico-typha-66c6448fc4-rxzgn" May 16 03:44:43.150992 kubelet[2681]: I0516 03:44:43.150970 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv9sw\" (UniqueName: \"kubernetes.io/projected/3eaa1d31-68a8-4ada-af76-19a3cc933c70-kube-api-access-dv9sw\") pod \"calico-typha-66c6448fc4-rxzgn\" (UID: \"3eaa1d31-68a8-4ada-af76-19a3cc933c70\") " pod="calico-system/calico-typha-66c6448fc4-rxzgn" May 16 03:44:43.150992 kubelet[2681]: I0516 03:44:43.151012 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3eaa1d31-68a8-4ada-af76-19a3cc933c70-tigera-ca-bundle\") pod \"calico-typha-66c6448fc4-rxzgn\" (UID: \"3eaa1d31-68a8-4ada-af76-19a3cc933c70\") " pod="calico-system/calico-typha-66c6448fc4-rxzgn" May 16 03:44:43.410184 containerd[1487]: time="2025-05-16T03:44:43.409859691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66c6448fc4-rxzgn,Uid:3eaa1d31-68a8-4ada-af76-19a3cc933c70,Namespace:calico-system,Attempt:0,}" May 16 03:44:43.481771 containerd[1487]: time="2025-05-16T03:44:43.481564146Z" level=info msg="connecting to shim ebdce10fa1eb10c3b73b14f12d7accc81fe177196be1fc88d83590e80e34383f" address="unix:///run/containerd/s/49ed2403e64efba8ac230594c2b0739e8f847480ff44468f97f80a53c005978a" namespace=k8s.io protocol=ttrpc version=3 May 16 03:44:43.526934 systemd[1]: Created slice kubepods-besteffort-pod0a9d5c0d_7800_4184_b570_09ae6df793f8.slice - libcontainer container kubepods-besteffort-pod0a9d5c0d_7800_4184_b570_09ae6df793f8.slice. 
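A short distance below, interleaved with the calico-node and csi-node-driver volume reconciliation, the journal fills with repeated FlexVolume probe failures ("driver call failed … args: [init] … unexpected end of JSON input"): the kubelet re-probes its FlexVolume plugin directory, execs each driver with init, and expects a JSON status object on stdout, but /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is not present, so every probe comes back with empty output. A simplified Go sketch of that handshake follows; the DriverStatus struct and helper are illustrative stand-ins for the kubelet's own driver-call code, not its actual types.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // DriverStatus is a simplified stand-in for the JSON a FlexVolume driver must print.
    type DriverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    // probeDriver runs "<driver> init" and decodes its stdout, roughly mirroring the
    // handshake whose failures are logged below.
    func probeDriver(path string) (*DriverStatus, error) {
        out, err := exec.Command(path, "init").CombinedOutput()
        if err != nil {
            // Matches the logged case: the nodeagent~uds/uds binary does not exist,
            // so the call fails and the output stays empty.
            return nil, fmt.Errorf("driver call failed: %w, output: %q", err, out)
        }
        var st DriverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            // Empty or malformed output produces "unexpected end of JSON input".
            return nil, fmt.Errorf("failed to unmarshal output for command init: %w", err)
        }
        return &st, nil
    }

    func main() {
        st, err := probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
        fmt.Println(st, err)
    }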
May 16 03:44:43.554257 kubelet[2681]: I0516 03:44:43.554159 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvcgt\" (UniqueName: \"kubernetes.io/projected/0a9d5c0d-7800-4184-b570-09ae6df793f8-kube-api-access-vvcgt\") pod \"calico-node-dwsmq\" (UID: \"0a9d5c0d-7800-4184-b570-09ae6df793f8\") " pod="calico-system/calico-node-dwsmq" May 16 03:44:43.555718 kubelet[2681]: I0516 03:44:43.555664 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0a9d5c0d-7800-4184-b570-09ae6df793f8-flexvol-driver-host\") pod \"calico-node-dwsmq\" (UID: \"0a9d5c0d-7800-4184-b570-09ae6df793f8\") " pod="calico-system/calico-node-dwsmq" May 16 03:44:43.556071 kubelet[2681]: I0516 03:44:43.556019 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a9d5c0d-7800-4184-b570-09ae6df793f8-xtables-lock\") pod \"calico-node-dwsmq\" (UID: \"0a9d5c0d-7800-4184-b570-09ae6df793f8\") " pod="calico-system/calico-node-dwsmq" May 16 03:44:43.556357 kubelet[2681]: I0516 03:44:43.556294 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0a9d5c0d-7800-4184-b570-09ae6df793f8-cni-net-dir\") pod \"calico-node-dwsmq\" (UID: \"0a9d5c0d-7800-4184-b570-09ae6df793f8\") " pod="calico-system/calico-node-dwsmq" May 16 03:44:43.556586 kubelet[2681]: I0516 03:44:43.556541 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0a9d5c0d-7800-4184-b570-09ae6df793f8-node-certs\") pod \"calico-node-dwsmq\" (UID: \"0a9d5c0d-7800-4184-b570-09ae6df793f8\") " pod="calico-system/calico-node-dwsmq" May 16 03:44:43.556790 kubelet[2681]: I0516 03:44:43.556739 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a9d5c0d-7800-4184-b570-09ae6df793f8-tigera-ca-bundle\") pod \"calico-node-dwsmq\" (UID: \"0a9d5c0d-7800-4184-b570-09ae6df793f8\") " pod="calico-system/calico-node-dwsmq" May 16 03:44:43.557154 kubelet[2681]: I0516 03:44:43.556893 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0a9d5c0d-7800-4184-b570-09ae6df793f8-var-run-calico\") pod \"calico-node-dwsmq\" (UID: \"0a9d5c0d-7800-4184-b570-09ae6df793f8\") " pod="calico-system/calico-node-dwsmq" May 16 03:44:43.557154 kubelet[2681]: I0516 03:44:43.556935 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0a9d5c0d-7800-4184-b570-09ae6df793f8-policysync\") pod \"calico-node-dwsmq\" (UID: \"0a9d5c0d-7800-4184-b570-09ae6df793f8\") " pod="calico-system/calico-node-dwsmq" May 16 03:44:43.557154 kubelet[2681]: I0516 03:44:43.556987 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0a9d5c0d-7800-4184-b570-09ae6df793f8-cni-bin-dir\") pod \"calico-node-dwsmq\" (UID: \"0a9d5c0d-7800-4184-b570-09ae6df793f8\") " pod="calico-system/calico-node-dwsmq" May 16 03:44:43.557154 kubelet[2681]: I0516 03:44:43.557012 2681 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a9d5c0d-7800-4184-b570-09ae6df793f8-lib-modules\") pod \"calico-node-dwsmq\" (UID: \"0a9d5c0d-7800-4184-b570-09ae6df793f8\") " pod="calico-system/calico-node-dwsmq" May 16 03:44:43.557154 kubelet[2681]: I0516 03:44:43.557036 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0a9d5c0d-7800-4184-b570-09ae6df793f8-var-lib-calico\") pod \"calico-node-dwsmq\" (UID: \"0a9d5c0d-7800-4184-b570-09ae6df793f8\") " pod="calico-system/calico-node-dwsmq" May 16 03:44:43.557443 kubelet[2681]: I0516 03:44:43.557060 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0a9d5c0d-7800-4184-b570-09ae6df793f8-cni-log-dir\") pod \"calico-node-dwsmq\" (UID: \"0a9d5c0d-7800-4184-b570-09ae6df793f8\") " pod="calico-system/calico-node-dwsmq" May 16 03:44:43.563124 systemd[1]: Started cri-containerd-ebdce10fa1eb10c3b73b14f12d7accc81fe177196be1fc88d83590e80e34383f.scope - libcontainer container ebdce10fa1eb10c3b73b14f12d7accc81fe177196be1fc88d83590e80e34383f. May 16 03:44:43.662577 kubelet[2681]: E0516 03:44:43.660645 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.662577 kubelet[2681]: W0516 03:44:43.660691 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.662577 kubelet[2681]: E0516 03:44:43.660718 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.663365 kubelet[2681]: E0516 03:44:43.663112 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.663365 kubelet[2681]: W0516 03:44:43.663144 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.663365 kubelet[2681]: E0516 03:44:43.663163 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.663760 kubelet[2681]: E0516 03:44:43.663631 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.664287 kubelet[2681]: W0516 03:44:43.663868 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.664287 kubelet[2681]: E0516 03:44:43.663894 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:43.671652 kubelet[2681]: E0516 03:44:43.671600 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.671652 kubelet[2681]: W0516 03:44:43.671630 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.671652 kubelet[2681]: E0516 03:44:43.671651 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.673982 kubelet[2681]: E0516 03:44:43.672278 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.673982 kubelet[2681]: W0516 03:44:43.672295 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.673982 kubelet[2681]: E0516 03:44:43.672307 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.674145 kubelet[2681]: E0516 03:44:43.673988 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.674145 kubelet[2681]: W0516 03:44:43.674002 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.674145 kubelet[2681]: E0516 03:44:43.674015 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.688013 kubelet[2681]: E0516 03:44:43.687953 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.688013 kubelet[2681]: W0516 03:44:43.687982 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.688013 kubelet[2681]: E0516 03:44:43.688002 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:43.741704 containerd[1487]: time="2025-05-16T03:44:43.741632740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66c6448fc4-rxzgn,Uid:3eaa1d31-68a8-4ada-af76-19a3cc933c70,Namespace:calico-system,Attempt:0,} returns sandbox id \"ebdce10fa1eb10c3b73b14f12d7accc81fe177196be1fc88d83590e80e34383f\"" May 16 03:44:43.745914 containerd[1487]: time="2025-05-16T03:44:43.745563902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 16 03:44:43.780461 kubelet[2681]: E0516 03:44:43.780322 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nkgqf" podUID="2d89f57b-12d9-441c-854f-90be519acbd7" May 16 03:44:43.832145 containerd[1487]: time="2025-05-16T03:44:43.832091751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dwsmq,Uid:0a9d5c0d-7800-4184-b570-09ae6df793f8,Namespace:calico-system,Attempt:0,}" May 16 03:44:43.838055 kubelet[2681]: E0516 03:44:43.837830 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.838055 kubelet[2681]: W0516 03:44:43.837857 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.838055 kubelet[2681]: E0516 03:44:43.837881 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.838484 kubelet[2681]: E0516 03:44:43.838362 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.838484 kubelet[2681]: W0516 03:44:43.838375 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.838484 kubelet[2681]: E0516 03:44:43.838386 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.838858 kubelet[2681]: E0516 03:44:43.838605 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.838858 kubelet[2681]: W0516 03:44:43.838615 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.838858 kubelet[2681]: E0516 03:44:43.838624 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:43.839089 kubelet[2681]: E0516 03:44:43.839075 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.839302 kubelet[2681]: W0516 03:44:43.839174 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.839302 kubelet[2681]: E0516 03:44:43.839191 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.839615 kubelet[2681]: E0516 03:44:43.839601 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.839820 kubelet[2681]: W0516 03:44:43.839720 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.839820 kubelet[2681]: E0516 03:44:43.839736 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.840393 kubelet[2681]: E0516 03:44:43.840378 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.840559 kubelet[2681]: W0516 03:44:43.840461 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.840559 kubelet[2681]: E0516 03:44:43.840477 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.841160 kubelet[2681]: E0516 03:44:43.840828 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.841373 kubelet[2681]: W0516 03:44:43.840840 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.841373 kubelet[2681]: E0516 03:44:43.841228 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.841960 kubelet[2681]: E0516 03:44:43.841580 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.841960 kubelet[2681]: W0516 03:44:43.841592 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.841960 kubelet[2681]: E0516 03:44:43.841602 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:43.842403 kubelet[2681]: E0516 03:44:43.842389 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.842468 kubelet[2681]: W0516 03:44:43.842457 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.843189 kubelet[2681]: E0516 03:44:43.842779 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.843189 kubelet[2681]: E0516 03:44:43.843047 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.843189 kubelet[2681]: W0516 03:44:43.843057 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.843189 kubelet[2681]: E0516 03:44:43.843067 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.843798 kubelet[2681]: E0516 03:44:43.843675 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.843798 kubelet[2681]: W0516 03:44:43.843689 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.843798 kubelet[2681]: E0516 03:44:43.843698 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.844028 kubelet[2681]: E0516 03:44:43.844010 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.844424 kubelet[2681]: W0516 03:44:43.844083 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.844424 kubelet[2681]: E0516 03:44:43.844102 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.844690 kubelet[2681]: E0516 03:44:43.844673 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.844846 kubelet[2681]: W0516 03:44:43.844799 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.844926 kubelet[2681]: E0516 03:44:43.844909 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:43.845432 kubelet[2681]: E0516 03:44:43.845415 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.845957 kubelet[2681]: W0516 03:44:43.845510 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.845957 kubelet[2681]: E0516 03:44:43.845528 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.847583 kubelet[2681]: E0516 03:44:43.846390 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.847583 kubelet[2681]: W0516 03:44:43.846409 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.847583 kubelet[2681]: E0516 03:44:43.846420 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.848263 kubelet[2681]: E0516 03:44:43.848099 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.848263 kubelet[2681]: W0516 03:44:43.848119 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.848263 kubelet[2681]: E0516 03:44:43.848132 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.852614 kubelet[2681]: E0516 03:44:43.848527 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.852614 kubelet[2681]: W0516 03:44:43.848537 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.852614 kubelet[2681]: E0516 03:44:43.848552 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.852614 kubelet[2681]: E0516 03:44:43.848809 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.852614 kubelet[2681]: W0516 03:44:43.848823 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.852614 kubelet[2681]: E0516 03:44:43.848834 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:43.852614 kubelet[2681]: E0516 03:44:43.849095 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.852614 kubelet[2681]: W0516 03:44:43.849106 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.852614 kubelet[2681]: E0516 03:44:43.849116 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.852614 kubelet[2681]: E0516 03:44:43.849568 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.853109 kubelet[2681]: W0516 03:44:43.849581 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.853109 kubelet[2681]: E0516 03:44:43.849595 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.859751 kubelet[2681]: E0516 03:44:43.859509 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.859751 kubelet[2681]: W0516 03:44:43.859540 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.859751 kubelet[2681]: E0516 03:44:43.859565 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.859751 kubelet[2681]: I0516 03:44:43.859601 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2d89f57b-12d9-441c-854f-90be519acbd7-varrun\") pod \"csi-node-driver-nkgqf\" (UID: \"2d89f57b-12d9-441c-854f-90be519acbd7\") " pod="calico-system/csi-node-driver-nkgqf" May 16 03:44:43.861886 kubelet[2681]: E0516 03:44:43.859909 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.861886 kubelet[2681]: W0516 03:44:43.859925 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.861886 kubelet[2681]: E0516 03:44:43.859940 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:43.861886 kubelet[2681]: I0516 03:44:43.859972 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xcbx\" (UniqueName: \"kubernetes.io/projected/2d89f57b-12d9-441c-854f-90be519acbd7-kube-api-access-6xcbx\") pod \"csi-node-driver-nkgqf\" (UID: \"2d89f57b-12d9-441c-854f-90be519acbd7\") " pod="calico-system/csi-node-driver-nkgqf" May 16 03:44:43.861886 kubelet[2681]: E0516 03:44:43.860377 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.861886 kubelet[2681]: W0516 03:44:43.860406 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.861886 kubelet[2681]: E0516 03:44:43.860429 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.861886 kubelet[2681]: E0516 03:44:43.860686 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.861886 kubelet[2681]: W0516 03:44:43.860700 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.862281 kubelet[2681]: E0516 03:44:43.860711 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.862281 kubelet[2681]: E0516 03:44:43.860955 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.862281 kubelet[2681]: W0516 03:44:43.860968 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.862281 kubelet[2681]: E0516 03:44:43.860981 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.863248 kubelet[2681]: I0516 03:44:43.862757 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d89f57b-12d9-441c-854f-90be519acbd7-kubelet-dir\") pod \"csi-node-driver-nkgqf\" (UID: \"2d89f57b-12d9-441c-854f-90be519acbd7\") " pod="calico-system/csi-node-driver-nkgqf" May 16 03:44:43.863769 kubelet[2681]: E0516 03:44:43.863741 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.864529 kubelet[2681]: W0516 03:44:43.864210 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.864529 kubelet[2681]: E0516 03:44:43.864232 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:43.866619 kubelet[2681]: E0516 03:44:43.865599 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.866619 kubelet[2681]: W0516 03:44:43.865625 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.866619 kubelet[2681]: E0516 03:44:43.865636 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.868829 kubelet[2681]: E0516 03:44:43.867652 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.868829 kubelet[2681]: W0516 03:44:43.867684 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.868829 kubelet[2681]: E0516 03:44:43.867706 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.868829 kubelet[2681]: I0516 03:44:43.867974 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2d89f57b-12d9-441c-854f-90be519acbd7-registration-dir\") pod \"csi-node-driver-nkgqf\" (UID: \"2d89f57b-12d9-441c-854f-90be519acbd7\") " pod="calico-system/csi-node-driver-nkgqf" May 16 03:44:43.868829 kubelet[2681]: E0516 03:44:43.868498 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.868829 kubelet[2681]: W0516 03:44:43.868526 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.868829 kubelet[2681]: E0516 03:44:43.868550 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.869517 kubelet[2681]: E0516 03:44:43.869493 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.869517 kubelet[2681]: W0516 03:44:43.869508 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.869517 kubelet[2681]: E0516 03:44:43.869519 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:43.869969 kubelet[2681]: E0516 03:44:43.869939 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.869969 kubelet[2681]: W0516 03:44:43.869954 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.869969 kubelet[2681]: E0516 03:44:43.869967 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.870219 kubelet[2681]: I0516 03:44:43.870194 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2d89f57b-12d9-441c-854f-90be519acbd7-socket-dir\") pod \"csi-node-driver-nkgqf\" (UID: \"2d89f57b-12d9-441c-854f-90be519acbd7\") " pod="calico-system/csi-node-driver-nkgqf" May 16 03:44:43.871093 kubelet[2681]: E0516 03:44:43.871067 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.871093 kubelet[2681]: W0516 03:44:43.871087 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.871291 kubelet[2681]: E0516 03:44:43.871099 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.872442 kubelet[2681]: E0516 03:44:43.872190 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.872442 kubelet[2681]: W0516 03:44:43.872209 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.872442 kubelet[2681]: E0516 03:44:43.872221 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.873302 kubelet[2681]: E0516 03:44:43.873195 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.873302 kubelet[2681]: W0516 03:44:43.873213 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.873302 kubelet[2681]: E0516 03:44:43.873227 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:43.873978 kubelet[2681]: E0516 03:44:43.873950 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.873978 kubelet[2681]: W0516 03:44:43.873970 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.874274 kubelet[2681]: E0516 03:44:43.873982 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.884047 containerd[1487]: time="2025-05-16T03:44:43.882269252Z" level=info msg="connecting to shim cc0aa20a8a3b98d1d99075009a985ea612a8309ff328392630339074d7980579" address="unix:///run/containerd/s/669fd5911b464316174b07f75c0a3f868332dc685d6d5478a33277c771302323" namespace=k8s.io protocol=ttrpc version=3 May 16 03:44:43.934692 systemd[1]: Started cri-containerd-cc0aa20a8a3b98d1d99075009a985ea612a8309ff328392630339074d7980579.scope - libcontainer container cc0aa20a8a3b98d1d99075009a985ea612a8309ff328392630339074d7980579. May 16 03:44:43.972417 kubelet[2681]: E0516 03:44:43.972380 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.972672 kubelet[2681]: W0516 03:44:43.972633 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.972672 kubelet[2681]: E0516 03:44:43.972665 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.974375 kubelet[2681]: E0516 03:44:43.974270 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.974375 kubelet[2681]: W0516 03:44:43.974324 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.974803 kubelet[2681]: E0516 03:44:43.974381 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.974803 kubelet[2681]: E0516 03:44:43.974705 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.974803 kubelet[2681]: W0516 03:44:43.974716 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.974803 kubelet[2681]: E0516 03:44:43.974726 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:43.975534 kubelet[2681]: E0516 03:44:43.974994 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.975534 kubelet[2681]: W0516 03:44:43.975009 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.975534 kubelet[2681]: E0516 03:44:43.975019 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.975917 kubelet[2681]: E0516 03:44:43.975753 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.975917 kubelet[2681]: W0516 03:44:43.975780 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.975917 kubelet[2681]: E0516 03:44:43.975801 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.976396 kubelet[2681]: E0516 03:44:43.976327 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.976564 kubelet[2681]: W0516 03:44:43.976494 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.976855 kubelet[2681]: E0516 03:44:43.976637 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.977024 kubelet[2681]: E0516 03:44:43.976996 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.977127 kubelet[2681]: W0516 03:44:43.977114 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.977422 kubelet[2681]: E0516 03:44:43.977286 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.977749 kubelet[2681]: E0516 03:44:43.977643 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.977826 kubelet[2681]: W0516 03:44:43.977810 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.977892 kubelet[2681]: E0516 03:44:43.977880 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:43.978373 kubelet[2681]: E0516 03:44:43.978323 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.979130 kubelet[2681]: W0516 03:44:43.978959 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.979130 kubelet[2681]: E0516 03:44:43.978980 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.979325 kubelet[2681]: E0516 03:44:43.979309 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.979519 kubelet[2681]: W0516 03:44:43.979503 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.979641 kubelet[2681]: E0516 03:44:43.979624 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.980328 kubelet[2681]: E0516 03:44:43.980196 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.980328 kubelet[2681]: W0516 03:44:43.980214 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.980328 kubelet[2681]: E0516 03:44:43.980225 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.981078 kubelet[2681]: E0516 03:44:43.980912 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.981078 kubelet[2681]: W0516 03:44:43.980930 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.981078 kubelet[2681]: E0516 03:44:43.980940 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.981533 kubelet[2681]: E0516 03:44:43.981447 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.981642 kubelet[2681]: W0516 03:44:43.981618 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.982650 kubelet[2681]: E0516 03:44:43.981753 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:43.983145 kubelet[2681]: E0516 03:44:43.983024 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.983145 kubelet[2681]: W0516 03:44:43.983038 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.983145 kubelet[2681]: E0516 03:44:43.983053 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.983582 kubelet[2681]: E0516 03:44:43.983416 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.983582 kubelet[2681]: W0516 03:44:43.983429 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.983582 kubelet[2681]: E0516 03:44:43.983442 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.983894 kubelet[2681]: E0516 03:44:43.983880 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.984139 kubelet[2681]: W0516 03:44:43.984010 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.984139 kubelet[2681]: E0516 03:44:43.984027 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.984630 kubelet[2681]: E0516 03:44:43.984457 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.984630 kubelet[2681]: W0516 03:44:43.984470 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.984630 kubelet[2681]: E0516 03:44:43.984480 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.985091 kubelet[2681]: E0516 03:44:43.984942 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.985091 kubelet[2681]: W0516 03:44:43.984958 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.985091 kubelet[2681]: E0516 03:44:43.984967 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:43.985702 kubelet[2681]: E0516 03:44:43.985491 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.985702 kubelet[2681]: W0516 03:44:43.985503 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.985702 kubelet[2681]: E0516 03:44:43.985516 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.986510 kubelet[2681]: E0516 03:44:43.986116 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.986510 kubelet[2681]: W0516 03:44:43.986129 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.986510 kubelet[2681]: E0516 03:44:43.986142 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.987025 kubelet[2681]: E0516 03:44:43.986796 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.987025 kubelet[2681]: W0516 03:44:43.986825 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.987025 kubelet[2681]: E0516 03:44:43.986835 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.987701 kubelet[2681]: E0516 03:44:43.987506 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.987701 kubelet[2681]: W0516 03:44:43.987519 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.987701 kubelet[2681]: E0516 03:44:43.987535 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.988483 kubelet[2681]: E0516 03:44:43.988131 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.988483 kubelet[2681]: W0516 03:44:43.988141 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.988483 kubelet[2681]: E0516 03:44:43.988150 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:43.989192 kubelet[2681]: E0516 03:44:43.989155 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.989482 kubelet[2681]: W0516 03:44:43.989291 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.989482 kubelet[2681]: E0516 03:44:43.989321 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:43.991461 kubelet[2681]: E0516 03:44:43.990840 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:43.991461 kubelet[2681]: W0516 03:44:43.990858 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:43.991461 kubelet[2681]: E0516 03:44:43.990874 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:44.012409 kubelet[2681]: E0516 03:44:44.012365 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:44.012409 kubelet[2681]: W0516 03:44:44.012391 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:44.012409 kubelet[2681]: E0516 03:44:44.012411 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:44.021729 containerd[1487]: time="2025-05-16T03:44:44.021633036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dwsmq,Uid:0a9d5c0d-7800-4184-b570-09ae6df793f8,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc0aa20a8a3b98d1d99075009a985ea612a8309ff328392630339074d7980579\"" May 16 03:44:45.566939 kubelet[2681]: E0516 03:44:45.566744 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nkgqf" podUID="2d89f57b-12d9-441c-854f-90be519acbd7" May 16 03:44:45.919529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4077239182.mount: Deactivated successfully. 
May 16 03:44:47.567127 kubelet[2681]: E0516 03:44:47.567029 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nkgqf" podUID="2d89f57b-12d9-441c-854f-90be519acbd7" May 16 03:44:47.570480 containerd[1487]: time="2025-05-16T03:44:47.570422911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:44:47.571871 containerd[1487]: time="2025-05-16T03:44:47.571733055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 16 03:44:47.572891 containerd[1487]: time="2025-05-16T03:44:47.572855450Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:44:47.576355 containerd[1487]: time="2025-05-16T03:44:47.576271320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:44:47.577395 containerd[1487]: time="2025-05-16T03:44:47.576996500Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 3.831384996s" May 16 03:44:47.577395 containerd[1487]: time="2025-05-16T03:44:47.577048188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 16 03:44:47.578599 containerd[1487]: time="2025-05-16T03:44:47.578570635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 16 03:44:47.605788 containerd[1487]: time="2025-05-16T03:44:47.605706368Z" level=info msg="CreateContainer within sandbox \"ebdce10fa1eb10c3b73b14f12d7accc81fe177196be1fc88d83590e80e34383f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 16 03:44:47.622805 containerd[1487]: time="2025-05-16T03:44:47.622657563Z" level=info msg="Container 3cdf202b44c55c8abef0d569d106d36910d28689d4d19facff59b8f75fc69079: CDI devices from CRI Config.CDIDevices: []" May 16 03:44:47.629221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount930459412.mount: Deactivated successfully. 
May 16 03:44:47.639628 containerd[1487]: time="2025-05-16T03:44:47.639564543Z" level=info msg="CreateContainer within sandbox \"ebdce10fa1eb10c3b73b14f12d7accc81fe177196be1fc88d83590e80e34383f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3cdf202b44c55c8abef0d569d106d36910d28689d4d19facff59b8f75fc69079\"" May 16 03:44:47.640748 containerd[1487]: time="2025-05-16T03:44:47.640691768Z" level=info msg="StartContainer for \"3cdf202b44c55c8abef0d569d106d36910d28689d4d19facff59b8f75fc69079\"" May 16 03:44:47.643166 containerd[1487]: time="2025-05-16T03:44:47.643071767Z" level=info msg="connecting to shim 3cdf202b44c55c8abef0d569d106d36910d28689d4d19facff59b8f75fc69079" address="unix:///run/containerd/s/49ed2403e64efba8ac230594c2b0739e8f847480ff44468f97f80a53c005978a" protocol=ttrpc version=3 May 16 03:44:47.682624 systemd[1]: Started cri-containerd-3cdf202b44c55c8abef0d569d106d36910d28689d4d19facff59b8f75fc69079.scope - libcontainer container 3cdf202b44c55c8abef0d569d106d36910d28689d4d19facff59b8f75fc69079. May 16 03:44:47.760084 containerd[1487]: time="2025-05-16T03:44:47.759966642Z" level=info msg="StartContainer for \"3cdf202b44c55c8abef0d569d106d36910d28689d4d19facff59b8f75fc69079\" returns successfully" May 16 03:44:48.787903 kubelet[2681]: E0516 03:44:48.787824 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.787903 kubelet[2681]: W0516 03:44:48.787874 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.787903 kubelet[2681]: E0516 03:44:48.787915 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.789266 kubelet[2681]: E0516 03:44:48.788523 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.789266 kubelet[2681]: W0516 03:44:48.788549 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.789266 kubelet[2681]: E0516 03:44:48.788573 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.789266 kubelet[2681]: E0516 03:44:48.789161 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.789266 kubelet[2681]: W0516 03:44:48.789236 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.789266 kubelet[2681]: E0516 03:44:48.789263 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:48.791723 kubelet[2681]: E0516 03:44:48.789966 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.791723 kubelet[2681]: W0516 03:44:48.789990 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.791723 kubelet[2681]: E0516 03:44:48.790042 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.791723 kubelet[2681]: E0516 03:44:48.790665 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.791723 kubelet[2681]: W0516 03:44:48.790716 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.791723 kubelet[2681]: E0516 03:44:48.790762 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.791723 kubelet[2681]: E0516 03:44:48.791327 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.791723 kubelet[2681]: W0516 03:44:48.791506 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.791723 kubelet[2681]: E0516 03:44:48.791536 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.793974 kubelet[2681]: E0516 03:44:48.792036 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.793974 kubelet[2681]: W0516 03:44:48.792061 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.793974 kubelet[2681]: E0516 03:44:48.792142 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.793974 kubelet[2681]: E0516 03:44:48.792612 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.793974 kubelet[2681]: W0516 03:44:48.792635 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.793974 kubelet[2681]: E0516 03:44:48.792659 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:48.793974 kubelet[2681]: E0516 03:44:48.793113 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.793974 kubelet[2681]: W0516 03:44:48.793137 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.793974 kubelet[2681]: E0516 03:44:48.793160 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.793974 kubelet[2681]: E0516 03:44:48.793616 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.795746 kubelet[2681]: W0516 03:44:48.793639 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.795746 kubelet[2681]: E0516 03:44:48.793660 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.795746 kubelet[2681]: E0516 03:44:48.794040 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.795746 kubelet[2681]: W0516 03:44:48.794064 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.795746 kubelet[2681]: E0516 03:44:48.794087 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.795746 kubelet[2681]: E0516 03:44:48.794511 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.795746 kubelet[2681]: W0516 03:44:48.794534 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.795746 kubelet[2681]: E0516 03:44:48.794556 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.795746 kubelet[2681]: E0516 03:44:48.794915 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.795746 kubelet[2681]: W0516 03:44:48.794936 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.796712 kubelet[2681]: E0516 03:44:48.794958 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:48.796712 kubelet[2681]: E0516 03:44:48.795503 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.796712 kubelet[2681]: W0516 03:44:48.795527 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.796712 kubelet[2681]: E0516 03:44:48.795549 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.796712 kubelet[2681]: E0516 03:44:48.795929 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.796712 kubelet[2681]: W0516 03:44:48.795952 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.796712 kubelet[2681]: E0516 03:44:48.795973 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.820767 kubelet[2681]: E0516 03:44:48.820704 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.820767 kubelet[2681]: W0516 03:44:48.820747 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.821811 kubelet[2681]: E0516 03:44:48.820778 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.821811 kubelet[2681]: E0516 03:44:48.821219 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.821811 kubelet[2681]: W0516 03:44:48.821241 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.821811 kubelet[2681]: E0516 03:44:48.821265 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.822420 kubelet[2681]: E0516 03:44:48.822381 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.822420 kubelet[2681]: W0516 03:44:48.822412 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.822926 kubelet[2681]: E0516 03:44:48.822439 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:48.823269 kubelet[2681]: E0516 03:44:48.823201 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.823269 kubelet[2681]: W0516 03:44:48.823236 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.823691 kubelet[2681]: E0516 03:44:48.823294 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.824107 kubelet[2681]: E0516 03:44:48.824031 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.824248 kubelet[2681]: W0516 03:44:48.824111 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.824248 kubelet[2681]: E0516 03:44:48.824138 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.825383 kubelet[2681]: E0516 03:44:48.825278 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.825746 kubelet[2681]: W0516 03:44:48.825504 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.825746 kubelet[2681]: E0516 03:44:48.825729 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.827171 kubelet[2681]: E0516 03:44:48.826487 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.827171 kubelet[2681]: W0516 03:44:48.826513 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.827171 kubelet[2681]: E0516 03:44:48.826537 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.827171 kubelet[2681]: E0516 03:44:48.827101 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.827171 kubelet[2681]: W0516 03:44:48.827125 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.827171 kubelet[2681]: E0516 03:44:48.827147 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:48.827843 kubelet[2681]: E0516 03:44:48.827796 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.827843 kubelet[2681]: W0516 03:44:48.827821 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.828007 kubelet[2681]: E0516 03:44:48.827890 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.828651 kubelet[2681]: E0516 03:44:48.828609 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.828828 kubelet[2681]: W0516 03:44:48.828644 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.828828 kubelet[2681]: E0516 03:44:48.828818 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.831307 kubelet[2681]: E0516 03:44:48.831257 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.831307 kubelet[2681]: W0516 03:44:48.831298 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.833143 kubelet[2681]: E0516 03:44:48.831326 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.833143 kubelet[2681]: E0516 03:44:48.831991 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.833143 kubelet[2681]: W0516 03:44:48.832016 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.833143 kubelet[2681]: E0516 03:44:48.832039 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.833143 kubelet[2681]: E0516 03:44:48.832653 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.833143 kubelet[2681]: W0516 03:44:48.832677 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.833143 kubelet[2681]: E0516 03:44:48.832699 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:48.833913 kubelet[2681]: E0516 03:44:48.833532 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.833913 kubelet[2681]: W0516 03:44:48.833557 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.833913 kubelet[2681]: E0516 03:44:48.833579 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.834243 kubelet[2681]: E0516 03:44:48.834179 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.834243 kubelet[2681]: W0516 03:44:48.834222 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.834601 kubelet[2681]: E0516 03:44:48.834246 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.835174 kubelet[2681]: E0516 03:44:48.835128 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.835174 kubelet[2681]: W0516 03:44:48.835166 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.835516 kubelet[2681]: E0516 03:44:48.835471 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.836570 kubelet[2681]: E0516 03:44:48.836506 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.836570 kubelet[2681]: W0516 03:44:48.836553 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.839636 kubelet[2681]: E0516 03:44:48.836578 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:48.839636 kubelet[2681]: E0516 03:44:48.837452 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:48.839636 kubelet[2681]: W0516 03:44:48.837477 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:48.839636 kubelet[2681]: E0516 03:44:48.837500 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:49.566983 kubelet[2681]: E0516 03:44:49.566915 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nkgqf" podUID="2d89f57b-12d9-441c-854f-90be519acbd7" May 16 03:44:49.692569 containerd[1487]: time="2025-05-16T03:44:49.692261096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:44:49.694193 containerd[1487]: time="2025-05-16T03:44:49.694125548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 16 03:44:49.695814 containerd[1487]: time="2025-05-16T03:44:49.695751065Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:44:49.698291 containerd[1487]: time="2025-05-16T03:44:49.698237118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:44:49.699197 containerd[1487]: time="2025-05-16T03:44:49.699150543Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 2.120526336s" May 16 03:44:49.699269 containerd[1487]: time="2025-05-16T03:44:49.699196490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 16 03:44:49.708108 containerd[1487]: time="2025-05-16T03:44:49.707996666Z" level=info msg="CreateContainer within sandbox \"cc0aa20a8a3b98d1d99075009a985ea612a8309ff328392630339074d7980579\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 16 03:44:49.729378 containerd[1487]: time="2025-05-16T03:44:49.728559786Z" level=info msg="Container b60fe89180f683924502a3994efa8e22436472ff7c316266e1e4aa78118c8dbc: CDI devices from CRI Config.CDIDevices: []" May 16 03:44:49.738128 kubelet[2681]: I0516 03:44:49.738095 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 03:44:49.741697 containerd[1487]: time="2025-05-16T03:44:49.741658175Z" level=info msg="CreateContainer within sandbox \"cc0aa20a8a3b98d1d99075009a985ea612a8309ff328392630339074d7980579\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b60fe89180f683924502a3994efa8e22436472ff7c316266e1e4aa78118c8dbc\"" May 16 03:44:49.743591 containerd[1487]: time="2025-05-16T03:44:49.742059468Z" level=info msg="StartContainer for \"b60fe89180f683924502a3994efa8e22436472ff7c316266e1e4aa78118c8dbc\"" May 16 03:44:49.743816 containerd[1487]: time="2025-05-16T03:44:49.743784324Z" level=info msg="connecting to shim b60fe89180f683924502a3994efa8e22436472ff7c316266e1e4aa78118c8dbc" 
address="unix:///run/containerd/s/669fd5911b464316174b07f75c0a3f868332dc685d6d5478a33277c771302323" protocol=ttrpc version=3 May 16 03:44:49.786629 systemd[1]: Started cri-containerd-b60fe89180f683924502a3994efa8e22436472ff7c316266e1e4aa78118c8dbc.scope - libcontainer container b60fe89180f683924502a3994efa8e22436472ff7c316266e1e4aa78118c8dbc. May 16 03:44:49.801303 kubelet[2681]: E0516 03:44:49.801134 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.801303 kubelet[2681]: W0516 03:44:49.801158 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.801303 kubelet[2681]: E0516 03:44:49.801197 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.802242 kubelet[2681]: E0516 03:44:49.801512 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.802242 kubelet[2681]: W0516 03:44:49.801523 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.802242 kubelet[2681]: E0516 03:44:49.801546 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.802640 kubelet[2681]: E0516 03:44:49.802566 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.802640 kubelet[2681]: W0516 03:44:49.802579 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.802640 kubelet[2681]: E0516 03:44:49.802589 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.803118 kubelet[2681]: E0516 03:44:49.803043 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.803118 kubelet[2681]: W0516 03:44:49.803056 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.803118 kubelet[2681]: E0516 03:44:49.803066 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:49.803558 kubelet[2681]: E0516 03:44:49.803480 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.803558 kubelet[2681]: W0516 03:44:49.803493 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.803558 kubelet[2681]: E0516 03:44:49.803502 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.804054 kubelet[2681]: E0516 03:44:49.803907 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.804054 kubelet[2681]: W0516 03:44:49.803919 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.804054 kubelet[2681]: E0516 03:44:49.803929 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.804468 kubelet[2681]: E0516 03:44:49.804292 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.804468 kubelet[2681]: W0516 03:44:49.804301 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.804468 kubelet[2681]: E0516 03:44:49.804311 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.804732 kubelet[2681]: E0516 03:44:49.804720 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.804884 kubelet[2681]: W0516 03:44:49.804764 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.804884 kubelet[2681]: E0516 03:44:49.804776 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.805219 kubelet[2681]: E0516 03:44:49.805140 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.805219 kubelet[2681]: W0516 03:44:49.805152 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.805219 kubelet[2681]: E0516 03:44:49.805162 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:49.805768 kubelet[2681]: E0516 03:44:49.805634 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.805768 kubelet[2681]: W0516 03:44:49.805646 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.805768 kubelet[2681]: E0516 03:44:49.805655 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.806040 kubelet[2681]: E0516 03:44:49.805970 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.806040 kubelet[2681]: W0516 03:44:49.805983 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.806040 kubelet[2681]: E0516 03:44:49.805993 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.806485 kubelet[2681]: E0516 03:44:49.806390 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.806485 kubelet[2681]: W0516 03:44:49.806402 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.806485 kubelet[2681]: E0516 03:44:49.806412 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.807101 kubelet[2681]: E0516 03:44:49.806953 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.807101 kubelet[2681]: W0516 03:44:49.806965 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.807101 kubelet[2681]: E0516 03:44:49.806975 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.807481 kubelet[2681]: E0516 03:44:49.807387 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.807481 kubelet[2681]: W0516 03:44:49.807400 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.807481 kubelet[2681]: E0516 03:44:49.807411 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:49.807855 kubelet[2681]: E0516 03:44:49.807768 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.807855 kubelet[2681]: W0516 03:44:49.807780 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.807855 kubelet[2681]: E0516 03:44:49.807789 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.838445 kubelet[2681]: E0516 03:44:49.835386 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.838445 kubelet[2681]: W0516 03:44:49.835461 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.838445 kubelet[2681]: E0516 03:44:49.835519 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.838445 kubelet[2681]: E0516 03:44:49.835899 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.838445 kubelet[2681]: W0516 03:44:49.835941 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.838445 kubelet[2681]: E0516 03:44:49.835953 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.838445 kubelet[2681]: E0516 03:44:49.836414 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.838445 kubelet[2681]: W0516 03:44:49.836445 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.838445 kubelet[2681]: E0516 03:44:49.836471 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.838445 kubelet[2681]: E0516 03:44:49.836779 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.838837 kubelet[2681]: W0516 03:44:49.836790 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.838837 kubelet[2681]: E0516 03:44:49.836801 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:49.838837 kubelet[2681]: E0516 03:44:49.837330 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.838837 kubelet[2681]: W0516 03:44:49.837418 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.838837 kubelet[2681]: E0516 03:44:49.837431 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.838837 kubelet[2681]: E0516 03:44:49.837657 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.838837 kubelet[2681]: W0516 03:44:49.837672 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.838837 kubelet[2681]: E0516 03:44:49.837684 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.841962 kubelet[2681]: E0516 03:44:49.841250 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.841962 kubelet[2681]: W0516 03:44:49.841267 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.841962 kubelet[2681]: E0516 03:44:49.841406 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.843002 kubelet[2681]: E0516 03:44:49.842970 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.843190 kubelet[2681]: W0516 03:44:49.843086 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.843190 kubelet[2681]: E0516 03:44:49.843106 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.843706 kubelet[2681]: E0516 03:44:49.843579 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.843706 kubelet[2681]: W0516 03:44:49.843592 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.843706 kubelet[2681]: E0516 03:44:49.843602 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:49.844085 kubelet[2681]: E0516 03:44:49.843959 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.844085 kubelet[2681]: W0516 03:44:49.843981 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.844085 kubelet[2681]: E0516 03:44:49.843992 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.844430 kubelet[2681]: E0516 03:44:49.844309 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.844430 kubelet[2681]: W0516 03:44:49.844322 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.844430 kubelet[2681]: E0516 03:44:49.844332 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.844921 kubelet[2681]: E0516 03:44:49.844854 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.844921 kubelet[2681]: W0516 03:44:49.844867 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.844921 kubelet[2681]: E0516 03:44:49.844876 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.845287 kubelet[2681]: E0516 03:44:49.845176 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.845287 kubelet[2681]: W0516 03:44:49.845188 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.845287 kubelet[2681]: E0516 03:44:49.845197 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.845768 kubelet[2681]: E0516 03:44:49.845645 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.846023 kubelet[2681]: W0516 03:44:49.845719 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.846271 kubelet[2681]: E0516 03:44:49.846180 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 03:44:49.847014 kubelet[2681]: E0516 03:44:49.846793 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.847014 kubelet[2681]: W0516 03:44:49.846810 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.847014 kubelet[2681]: E0516 03:44:49.846820 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.847181 containerd[1487]: time="2025-05-16T03:44:49.846790562Z" level=info msg="StartContainer for \"b60fe89180f683924502a3994efa8e22436472ff7c316266e1e4aa78118c8dbc\" returns successfully" May 16 03:44:49.848777 kubelet[2681]: E0516 03:44:49.848759 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.849203 kubelet[2681]: W0516 03:44:49.849056 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.849203 kubelet[2681]: E0516 03:44:49.849133 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.850966 kubelet[2681]: E0516 03:44:49.850518 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.850966 kubelet[2681]: W0516 03:44:49.850547 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.850966 kubelet[2681]: E0516 03:44:49.850571 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.850966 kubelet[2681]: E0516 03:44:49.850761 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 03:44:49.850966 kubelet[2681]: W0516 03:44:49.850771 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 03:44:49.850966 kubelet[2681]: E0516 03:44:49.850781 2681 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 03:44:49.855661 systemd[1]: cri-containerd-b60fe89180f683924502a3994efa8e22436472ff7c316266e1e4aa78118c8dbc.scope: Deactivated successfully. 
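[Editor's note] The repeated kubelet errors above come from dynamic plugin probing: the kubelet scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the directory nodeagent~uds, but the driver executable `uds` inside it is missing, so every `init` call returns empty output and the JSON unmarshal fails with "unexpected end of JSON input". As a rough illustration only (not the real driver), a minimal FlexVolume driver just has to answer `init` with a JSON status object on stdout; the sketch below assumes the standard FlexVolume call convention.

```go
// Hypothetical sketch of a minimal FlexVolume driver binary (the missing
// "uds" executable above). The kubelet invokes the driver as
//   <driver> init | attach | mount | unmount | ...
// and parses a JSON status object from stdout; an empty reply is what
// produces "unexpected end of JSON input" in the log.
package main

import (
	"encoding/json"
	"os"
)

// driverStatus mirrors the FlexVolume status object the kubelet expects.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus) {
	_ = json.NewEncoder(os.Stdout).Encode(s)
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Declare success and that no separate attach/detach phase is needed.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		return
	}
	// Calls this sketch does not implement are reported as unsupported.
	reply(driverStatus{Status: "Not supported", Message: "call not implemented in this sketch"})
}
```

Once a working executable is present at that path the probing errors stop; until then they are noisy but generally harmless to pods that do not use this FlexVolume driver.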
May 16 03:44:49.861509 containerd[1487]: time="2025-05-16T03:44:49.861396235Z" level=info msg="received exit event container_id:\"b60fe89180f683924502a3994efa8e22436472ff7c316266e1e4aa78118c8dbc\" id:\"b60fe89180f683924502a3994efa8e22436472ff7c316266e1e4aa78118c8dbc\" pid:3357 exited_at:{seconds:1747367089 nanos:860572021}" May 16 03:44:49.861729 containerd[1487]: time="2025-05-16T03:44:49.861465217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b60fe89180f683924502a3994efa8e22436472ff7c316266e1e4aa78118c8dbc\" id:\"b60fe89180f683924502a3994efa8e22436472ff7c316266e1e4aa78118c8dbc\" pid:3357 exited_at:{seconds:1747367089 nanos:860572021}" May 16 03:44:49.900441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b60fe89180f683924502a3994efa8e22436472ff7c316266e1e4aa78118c8dbc-rootfs.mount: Deactivated successfully. May 16 03:44:50.814197 kubelet[2681]: I0516 03:44:50.814068 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66c6448fc4-rxzgn" podStartSLOduration=3.979904446 podStartE2EDuration="7.814038614s" podCreationTimestamp="2025-05-16 03:44:43 +0000 UTC" firstStartedPulling="2025-05-16 03:44:43.744310637 +0000 UTC m=+21.387809505" lastFinishedPulling="2025-05-16 03:44:47.578444795 +0000 UTC m=+25.221943673" observedRunningTime="2025-05-16 03:44:48.763112485 +0000 UTC m=+26.406611433" watchObservedRunningTime="2025-05-16 03:44:50.814038614 +0000 UTC m=+28.457537532" May 16 03:44:51.567928 kubelet[2681]: E0516 03:44:51.567780 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nkgqf" podUID="2d89f57b-12d9-441c-854f-90be519acbd7" May 16 03:44:51.769680 containerd[1487]: time="2025-05-16T03:44:51.769582082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 16 03:44:52.654784 kubelet[2681]: I0516 03:44:52.654081 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 03:44:53.566736 kubelet[2681]: E0516 03:44:53.566666 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nkgqf" podUID="2d89f57b-12d9-441c-854f-90be519acbd7" May 16 03:44:55.566706 kubelet[2681]: E0516 03:44:55.566623 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nkgqf" podUID="2d89f57b-12d9-441c-854f-90be519acbd7" May 16 03:44:56.968671 containerd[1487]: time="2025-05-16T03:44:56.968598371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:44:56.970267 containerd[1487]: time="2025-05-16T03:44:56.970102746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 16 03:44:56.971637 containerd[1487]: time="2025-05-16T03:44:56.971600327Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" May 16 03:44:56.975511 containerd[1487]: time="2025-05-16T03:44:56.975183793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:44:56.976067 containerd[1487]: time="2025-05-16T03:44:56.976033880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 5.206367658s" May 16 03:44:56.976153 containerd[1487]: time="2025-05-16T03:44:56.976071722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 16 03:44:56.986491 containerd[1487]: time="2025-05-16T03:44:56.986438272Z" level=info msg="CreateContainer within sandbox \"cc0aa20a8a3b98d1d99075009a985ea612a8309ff328392630339074d7980579\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 16 03:44:57.001360 containerd[1487]: time="2025-05-16T03:44:56.999657867Z" level=info msg="Container 36e8ceebee4a935e4e912c841bdd8530996e1ba301241d5a3ee66615a4b57946: CDI devices from CRI Config.CDIDevices: []" May 16 03:44:57.024148 containerd[1487]: time="2025-05-16T03:44:57.023998519Z" level=info msg="CreateContainer within sandbox \"cc0aa20a8a3b98d1d99075009a985ea612a8309ff328392630339074d7980579\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"36e8ceebee4a935e4e912c841bdd8530996e1ba301241d5a3ee66615a4b57946\"" May 16 03:44:57.026898 containerd[1487]: time="2025-05-16T03:44:57.026374879Z" level=info msg="StartContainer for \"36e8ceebee4a935e4e912c841bdd8530996e1ba301241d5a3ee66615a4b57946\"" May 16 03:44:57.029365 containerd[1487]: time="2025-05-16T03:44:57.029290349Z" level=info msg="connecting to shim 36e8ceebee4a935e4e912c841bdd8530996e1ba301241d5a3ee66615a4b57946" address="unix:///run/containerd/s/669fd5911b464316174b07f75c0a3f868332dc685d6d5478a33277c771302323" protocol=ttrpc version=3 May 16 03:44:57.066618 systemd[1]: Started cri-containerd-36e8ceebee4a935e4e912c841bdd8530996e1ba301241d5a3ee66615a4b57946.scope - libcontainer container 36e8ceebee4a935e4e912c841bdd8530996e1ba301241d5a3ee66615a4b57946. 
May 16 03:44:57.127664 containerd[1487]: time="2025-05-16T03:44:57.127071848Z" level=info msg="StartContainer for \"36e8ceebee4a935e4e912c841bdd8530996e1ba301241d5a3ee66615a4b57946\" returns successfully" May 16 03:44:57.570433 kubelet[2681]: E0516 03:44:57.567902 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nkgqf" podUID="2d89f57b-12d9-441c-854f-90be519acbd7" May 16 03:44:58.618918 containerd[1487]: time="2025-05-16T03:44:58.618761525Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 03:44:58.625891 systemd[1]: cri-containerd-36e8ceebee4a935e4e912c841bdd8530996e1ba301241d5a3ee66615a4b57946.scope: Deactivated successfully. May 16 03:44:58.626645 systemd[1]: cri-containerd-36e8ceebee4a935e4e912c841bdd8530996e1ba301241d5a3ee66615a4b57946.scope: Consumed 868ms CPU time, 190.9M memory peak, 170.9M written to disk. May 16 03:44:58.636794 containerd[1487]: time="2025-05-16T03:44:58.636725679Z" level=info msg="received exit event container_id:\"36e8ceebee4a935e4e912c841bdd8530996e1ba301241d5a3ee66615a4b57946\" id:\"36e8ceebee4a935e4e912c841bdd8530996e1ba301241d5a3ee66615a4b57946\" pid:3449 exited_at:{seconds:1747367098 nanos:636260119}" May 16 03:44:58.639391 containerd[1487]: time="2025-05-16T03:44:58.637893505Z" level=info msg="TaskExit event in podsandbox handler container_id:\"36e8ceebee4a935e4e912c841bdd8530996e1ba301241d5a3ee66615a4b57946\" id:\"36e8ceebee4a935e4e912c841bdd8530996e1ba301241d5a3ee66615a4b57946\" pid:3449 exited_at:{seconds:1747367098 nanos:636260119}" May 16 03:44:58.669853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36e8ceebee4a935e4e912c841bdd8530996e1ba301241d5a3ee66615a4b57946-rootfs.mount: Deactivated successfully. May 16 03:44:58.737395 kubelet[2681]: I0516 03:44:58.737224 2681 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 16 03:44:59.467327 systemd[1]: Created slice kubepods-burstable-podd01d1078_b043_4276_a3e9_cede61f0e64b.slice - libcontainer container kubepods-burstable-podd01d1078_b043_4276_a3e9_cede61f0e64b.slice. May 16 03:44:59.496456 systemd[1]: Created slice kubepods-burstable-podede9dc66_0a0b_4839_a3fa_93460a933576.slice - libcontainer container kubepods-burstable-podede9dc66_0a0b_4839_a3fa_93460a933576.slice. May 16 03:44:59.508717 systemd[1]: Created slice kubepods-besteffort-pod67ef62ce_b02f_4a5b_bac2_8712ad130e81.slice - libcontainer container kubepods-besteffort-pod67ef62ce_b02f_4a5b_bac2_8712ad130e81.slice. 
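[Editor's note] The containerd error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") fires while install-cni is still copying files: containerd watches /etc/cni/net.d, and the write to calico-kubeconfig triggers a reload before any *.conf/*.conflist file exists. The stdlib sketch below is a toy re-implementation of that directory scan (not containerd's actual libcni code), just to make the failure mode concrete.

```go
// Toy re-implementation of the "no network config found" check: scan
// /etc/cni/net.d for CNI configuration files and report the same kind of
// error containerd logs while the directory is still empty.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func loadCNIConfigNames(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var configs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			configs = append(configs, filepath.Join(dir, e.Name()))
		}
	}
	if len(configs) == 0 {
		return nil, fmt.Errorf("no network config found in %s: cni plugin not initialized", dir)
	}
	return configs, nil
}

func main() {
	configs, err := loadCNIConfigNames("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, "cni config load failed:", err)
		os.Exit(1)
	}
	fmt.Println("CNI config candidates:", configs)
}
```

Once install-cni finishes writing its conflist (typically 10-calico.conflist), the next fs change event loads it and the NetworkPluginNotReady messages go away.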
May 16 03:44:59.517725 kubelet[2681]: I0516 03:44:59.517682 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c45bc411-b957-4b16-8a3d-04432d83d3b5-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-2l87g\" (UID: \"c45bc411-b957-4b16-8a3d-04432d83d3b5\") " pod="calico-system/goldmane-78d55f7ddc-2l87g" May 16 03:44:59.517725 kubelet[2681]: I0516 03:44:59.517729 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f88d5892-6c0a-4970-a316-70b9e1bedfbf-calico-apiserver-certs\") pod \"calico-apiserver-54bddd6c9b-n5m5g\" (UID: \"f88d5892-6c0a-4970-a316-70b9e1bedfbf\") " pod="calico-apiserver/calico-apiserver-54bddd6c9b-n5m5g" May 16 03:44:59.518697 kubelet[2681]: I0516 03:44:59.517753 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgnlm\" (UniqueName: \"kubernetes.io/projected/f88d5892-6c0a-4970-a316-70b9e1bedfbf-kube-api-access-cgnlm\") pod \"calico-apiserver-54bddd6c9b-n5m5g\" (UID: \"f88d5892-6c0a-4970-a316-70b9e1bedfbf\") " pod="calico-apiserver/calico-apiserver-54bddd6c9b-n5m5g" May 16 03:44:59.518697 kubelet[2681]: I0516 03:44:59.517772 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81d5818d-511b-4617-9b37-6cbbb63bf4a0-whisker-ca-bundle\") pod \"whisker-698669cd56-mg8lh\" (UID: \"81d5818d-511b-4617-9b37-6cbbb63bf4a0\") " pod="calico-system/whisker-698669cd56-mg8lh" May 16 03:44:59.518697 kubelet[2681]: I0516 03:44:59.517792 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d01d1078-b043-4276-a3e9-cede61f0e64b-config-volume\") pod \"coredns-674b8bbfcf-9nrc7\" (UID: \"d01d1078-b043-4276-a3e9-cede61f0e64b\") " pod="kube-system/coredns-674b8bbfcf-9nrc7" May 16 03:44:59.518697 kubelet[2681]: I0516 03:44:59.517811 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5gs2\" (UniqueName: \"kubernetes.io/projected/d01d1078-b043-4276-a3e9-cede61f0e64b-kube-api-access-l5gs2\") pod \"coredns-674b8bbfcf-9nrc7\" (UID: \"d01d1078-b043-4276-a3e9-cede61f0e64b\") " pod="kube-system/coredns-674b8bbfcf-9nrc7" May 16 03:44:59.518697 kubelet[2681]: I0516 03:44:59.517829 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c45bc411-b957-4b16-8a3d-04432d83d3b5-config\") pod \"goldmane-78d55f7ddc-2l87g\" (UID: \"c45bc411-b957-4b16-8a3d-04432d83d3b5\") " pod="calico-system/goldmane-78d55f7ddc-2l87g" May 16 03:44:59.518940 kubelet[2681]: I0516 03:44:59.517848 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c45bc411-b957-4b16-8a3d-04432d83d3b5-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-2l87g\" (UID: \"c45bc411-b957-4b16-8a3d-04432d83d3b5\") " pod="calico-system/goldmane-78d55f7ddc-2l87g" May 16 03:44:59.518940 kubelet[2681]: I0516 03:44:59.517868 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/f34dc699-7d69-4b18-87db-f8c213550070-calico-apiserver-certs\") pod \"calico-apiserver-54bddd6c9b-hrzzl\" (UID: \"f34dc699-7d69-4b18-87db-f8c213550070\") " pod="calico-apiserver/calico-apiserver-54bddd6c9b-hrzzl" May 16 03:44:59.518940 kubelet[2681]: I0516 03:44:59.517892 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ede9dc66-0a0b-4839-a3fa-93460a933576-config-volume\") pod \"coredns-674b8bbfcf-pd7wp\" (UID: \"ede9dc66-0a0b-4839-a3fa-93460a933576\") " pod="kube-system/coredns-674b8bbfcf-pd7wp" May 16 03:44:59.518940 kubelet[2681]: I0516 03:44:59.517910 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrk5x\" (UniqueName: \"kubernetes.io/projected/ede9dc66-0a0b-4839-a3fa-93460a933576-kube-api-access-xrk5x\") pod \"coredns-674b8bbfcf-pd7wp\" (UID: \"ede9dc66-0a0b-4839-a3fa-93460a933576\") " pod="kube-system/coredns-674b8bbfcf-pd7wp" May 16 03:44:59.518940 kubelet[2681]: I0516 03:44:59.517928 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67ef62ce-b02f-4a5b-bac2-8712ad130e81-tigera-ca-bundle\") pod \"calico-kube-controllers-84596d4c7-k2qxb\" (UID: \"67ef62ce-b02f-4a5b-bac2-8712ad130e81\") " pod="calico-system/calico-kube-controllers-84596d4c7-k2qxb" May 16 03:44:59.519360 kubelet[2681]: I0516 03:44:59.517961 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/81d5818d-511b-4617-9b37-6cbbb63bf4a0-whisker-backend-key-pair\") pod \"whisker-698669cd56-mg8lh\" (UID: \"81d5818d-511b-4617-9b37-6cbbb63bf4a0\") " pod="calico-system/whisker-698669cd56-mg8lh" May 16 03:44:59.519360 kubelet[2681]: I0516 03:44:59.517981 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsv2r\" (UniqueName: \"kubernetes.io/projected/f34dc699-7d69-4b18-87db-f8c213550070-kube-api-access-vsv2r\") pod \"calico-apiserver-54bddd6c9b-hrzzl\" (UID: \"f34dc699-7d69-4b18-87db-f8c213550070\") " pod="calico-apiserver/calico-apiserver-54bddd6c9b-hrzzl" May 16 03:44:59.519360 kubelet[2681]: I0516 03:44:59.518009 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gspgm\" (UniqueName: \"kubernetes.io/projected/81d5818d-511b-4617-9b37-6cbbb63bf4a0-kube-api-access-gspgm\") pod \"whisker-698669cd56-mg8lh\" (UID: \"81d5818d-511b-4617-9b37-6cbbb63bf4a0\") " pod="calico-system/whisker-698669cd56-mg8lh" May 16 03:44:59.519360 kubelet[2681]: I0516 03:44:59.518033 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr5hn\" (UniqueName: \"kubernetes.io/projected/c45bc411-b957-4b16-8a3d-04432d83d3b5-kube-api-access-wr5hn\") pod \"goldmane-78d55f7ddc-2l87g\" (UID: \"c45bc411-b957-4b16-8a3d-04432d83d3b5\") " pod="calico-system/goldmane-78d55f7ddc-2l87g" May 16 03:44:59.519360 kubelet[2681]: I0516 03:44:59.518082 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg298\" (UniqueName: \"kubernetes.io/projected/67ef62ce-b02f-4a5b-bac2-8712ad130e81-kube-api-access-xg298\") pod \"calico-kube-controllers-84596d4c7-k2qxb\" (UID: \"67ef62ce-b02f-4a5b-bac2-8712ad130e81\") " 
pod="calico-system/calico-kube-controllers-84596d4c7-k2qxb" May 16 03:44:59.521936 systemd[1]: Created slice kubepods-besteffort-podf88d5892_6c0a_4970_a316_70b9e1bedfbf.slice - libcontainer container kubepods-besteffort-podf88d5892_6c0a_4970_a316_70b9e1bedfbf.slice. May 16 03:44:59.534178 systemd[1]: Created slice kubepods-besteffort-podc45bc411_b957_4b16_8a3d_04432d83d3b5.slice - libcontainer container kubepods-besteffort-podc45bc411_b957_4b16_8a3d_04432d83d3b5.slice. May 16 03:44:59.544905 systemd[1]: Created slice kubepods-besteffort-pod81d5818d_511b_4617_9b37_6cbbb63bf4a0.slice - libcontainer container kubepods-besteffort-pod81d5818d_511b_4617_9b37_6cbbb63bf4a0.slice. May 16 03:44:59.551109 systemd[1]: Created slice kubepods-besteffort-podf34dc699_7d69_4b18_87db_f8c213550070.slice - libcontainer container kubepods-besteffort-podf34dc699_7d69_4b18_87db_f8c213550070.slice. May 16 03:44:59.578279 systemd[1]: Created slice kubepods-besteffort-pod2d89f57b_12d9_441c_854f_90be519acbd7.slice - libcontainer container kubepods-besteffort-pod2d89f57b_12d9_441c_854f_90be519acbd7.slice. May 16 03:44:59.581822 containerd[1487]: time="2025-05-16T03:44:59.581768118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nkgqf,Uid:2d89f57b-12d9-441c-854f-90be519acbd7,Namespace:calico-system,Attempt:0,}" May 16 03:44:59.736768 containerd[1487]: time="2025-05-16T03:44:59.736413550Z" level=error msg="Failed to destroy network for sandbox \"14f902387fb3cf27f9a3deb3fec3713e7af3e067fceee18040c109fdc4655702\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:44:59.740686 containerd[1487]: time="2025-05-16T03:44:59.740600616Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nkgqf,Uid:2d89f57b-12d9-441c-854f-90be519acbd7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"14f902387fb3cf27f9a3deb3fec3713e7af3e067fceee18040c109fdc4655702\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:44:59.741272 kubelet[2681]: E0516 03:44:59.741012 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14f902387fb3cf27f9a3deb3fec3713e7af3e067fceee18040c109fdc4655702\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:44:59.741272 kubelet[2681]: E0516 03:44:59.741107 2681 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14f902387fb3cf27f9a3deb3fec3713e7af3e067fceee18040c109fdc4655702\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nkgqf" May 16 03:44:59.741272 kubelet[2681]: E0516 03:44:59.741142 2681 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14f902387fb3cf27f9a3deb3fec3713e7af3e067fceee18040c109fdc4655702\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nkgqf" May 16 03:44:59.743450 kubelet[2681]: E0516 03:44:59.741212 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nkgqf_calico-system(2d89f57b-12d9-441c-854f-90be519acbd7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nkgqf_calico-system(2d89f57b-12d9-441c-854f-90be519acbd7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14f902387fb3cf27f9a3deb3fec3713e7af3e067fceee18040c109fdc4655702\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nkgqf" podUID="2d89f57b-12d9-441c-854f-90be519acbd7" May 16 03:44:59.781125 containerd[1487]: time="2025-05-16T03:44:59.781048895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9nrc7,Uid:d01d1078-b043-4276-a3e9-cede61f0e64b,Namespace:kube-system,Attempt:0,}" May 16 03:44:59.810754 containerd[1487]: time="2025-05-16T03:44:59.810110476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pd7wp,Uid:ede9dc66-0a0b-4839-a3fa-93460a933576,Namespace:kube-system,Attempt:0,}" May 16 03:44:59.813786 containerd[1487]: time="2025-05-16T03:44:59.813495497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84596d4c7-k2qxb,Uid:67ef62ce-b02f-4a5b-bac2-8712ad130e81,Namespace:calico-system,Attempt:0,}" May 16 03:44:59.830534 containerd[1487]: time="2025-05-16T03:44:59.830334411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 16 03:44:59.831965 containerd[1487]: time="2025-05-16T03:44:59.831418727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bddd6c9b-n5m5g,Uid:f88d5892-6c0a-4970-a316-70b9e1bedfbf,Namespace:calico-apiserver,Attempt:0,}" May 16 03:44:59.841973 containerd[1487]: time="2025-05-16T03:44:59.841913051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-2l87g,Uid:c45bc411-b957-4b16-8a3d-04432d83d3b5,Namespace:calico-system,Attempt:0,}" May 16 03:44:59.851731 containerd[1487]: time="2025-05-16T03:44:59.851504601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-698669cd56-mg8lh,Uid:81d5818d-511b-4617-9b37-6cbbb63bf4a0,Namespace:calico-system,Attempt:0,}" May 16 03:44:59.856157 containerd[1487]: time="2025-05-16T03:44:59.855891615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bddd6c9b-hrzzl,Uid:f34dc699-7d69-4b18-87db-f8c213550070,Namespace:calico-apiserver,Attempt:0,}" May 16 03:44:59.992073 containerd[1487]: time="2025-05-16T03:44:59.991599792Z" level=error msg="Failed to destroy network for sandbox \"42c7140453a15573b6dcb1c01b496f2e80e5e6dbbbf8866bf7b57e46c4650f2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:44:59.996711 containerd[1487]: time="2025-05-16T03:44:59.996513359Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pd7wp,Uid:ede9dc66-0a0b-4839-a3fa-93460a933576,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"42c7140453a15573b6dcb1c01b496f2e80e5e6dbbbf8866bf7b57e46c4650f2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:44:59.997006 kubelet[2681]: E0516 03:44:59.996794 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42c7140453a15573b6dcb1c01b496f2e80e5e6dbbbf8866bf7b57e46c4650f2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:44:59.997006 kubelet[2681]: E0516 03:44:59.996864 2681 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42c7140453a15573b6dcb1c01b496f2e80e5e6dbbbf8866bf7b57e46c4650f2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pd7wp" May 16 03:44:59.997006 kubelet[2681]: E0516 03:44:59.996893 2681 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42c7140453a15573b6dcb1c01b496f2e80e5e6dbbbf8866bf7b57e46c4650f2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pd7wp" May 16 03:44:59.998021 kubelet[2681]: E0516 03:44:59.996948 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pd7wp_kube-system(ede9dc66-0a0b-4839-a3fa-93460a933576)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pd7wp_kube-system(ede9dc66-0a0b-4839-a3fa-93460a933576)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42c7140453a15573b6dcb1c01b496f2e80e5e6dbbbf8866bf7b57e46c4650f2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pd7wp" podUID="ede9dc66-0a0b-4839-a3fa-93460a933576" May 16 03:45:00.046189 containerd[1487]: time="2025-05-16T03:45:00.045949393Z" level=error msg="Failed to destroy network for sandbox \"9a256b630569c30a009428a61ab6413e140037857ff9b7bd161159611d63242c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:00.051569 containerd[1487]: time="2025-05-16T03:45:00.051368301Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9nrc7,Uid:d01d1078-b043-4276-a3e9-cede61f0e64b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a256b630569c30a009428a61ab6413e140037857ff9b7bd161159611d63242c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:00.052161 kubelet[2681]: E0516 03:45:00.051714 2681 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a256b630569c30a009428a61ab6413e140037857ff9b7bd161159611d63242c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:00.052161 kubelet[2681]: E0516 03:45:00.051817 2681 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a256b630569c30a009428a61ab6413e140037857ff9b7bd161159611d63242c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-9nrc7" May 16 03:45:00.052161 kubelet[2681]: E0516 03:45:00.051856 2681 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a256b630569c30a009428a61ab6413e140037857ff9b7bd161159611d63242c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-9nrc7" May 16 03:45:00.054596 kubelet[2681]: E0516 03:45:00.051958 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-9nrc7_kube-system(d01d1078-b043-4276-a3e9-cede61f0e64b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-9nrc7_kube-system(d01d1078-b043-4276-a3e9-cede61f0e64b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a256b630569c30a009428a61ab6413e140037857ff9b7bd161159611d63242c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-9nrc7" podUID="d01d1078-b043-4276-a3e9-cede61f0e64b" May 16 03:45:00.066586 containerd[1487]: time="2025-05-16T03:45:00.066529945Z" level=error msg="Failed to destroy network for sandbox \"f618ca9b1268834ba0d660d2290d3f79cc13cebf0ee21639d397ab1f4f1f42ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:00.070507 containerd[1487]: time="2025-05-16T03:45:00.070444763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-698669cd56-mg8lh,Uid:81d5818d-511b-4617-9b37-6cbbb63bf4a0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f618ca9b1268834ba0d660d2290d3f79cc13cebf0ee21639d397ab1f4f1f42ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:00.070792 kubelet[2681]: E0516 03:45:00.070711 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f618ca9b1268834ba0d660d2290d3f79cc13cebf0ee21639d397ab1f4f1f42ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:00.070792 
kubelet[2681]: E0516 03:45:00.070770 2681 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f618ca9b1268834ba0d660d2290d3f79cc13cebf0ee21639d397ab1f4f1f42ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-698669cd56-mg8lh" May 16 03:45:00.070792 kubelet[2681]: E0516 03:45:00.070795 2681 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f618ca9b1268834ba0d660d2290d3f79cc13cebf0ee21639d397ab1f4f1f42ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-698669cd56-mg8lh" May 16 03:45:00.072720 kubelet[2681]: E0516 03:45:00.070851 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-698669cd56-mg8lh_calico-system(81d5818d-511b-4617-9b37-6cbbb63bf4a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-698669cd56-mg8lh_calico-system(81d5818d-511b-4617-9b37-6cbbb63bf4a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f618ca9b1268834ba0d660d2290d3f79cc13cebf0ee21639d397ab1f4f1f42ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-698669cd56-mg8lh" podUID="81d5818d-511b-4617-9b37-6cbbb63bf4a0" May 16 03:45:00.100202 containerd[1487]: time="2025-05-16T03:45:00.099803562Z" level=error msg="Failed to destroy network for sandbox \"a8ba9afc430f4a7160d8370f66d140e535c0cf7e46061ce71cd628f7b8778342\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:00.103933 containerd[1487]: time="2025-05-16T03:45:00.103858885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84596d4c7-k2qxb,Uid:67ef62ce-b02f-4a5b-bac2-8712ad130e81,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8ba9afc430f4a7160d8370f66d140e535c0cf7e46061ce71cd628f7b8778342\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:00.106223 kubelet[2681]: E0516 03:45:00.104091 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8ba9afc430f4a7160d8370f66d140e535c0cf7e46061ce71cd628f7b8778342\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:00.106223 kubelet[2681]: E0516 03:45:00.104148 2681 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8ba9afc430f4a7160d8370f66d140e535c0cf7e46061ce71cd628f7b8778342\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84596d4c7-k2qxb" May 16 03:45:00.106223 kubelet[2681]: E0516 03:45:00.104171 2681 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8ba9afc430f4a7160d8370f66d140e535c0cf7e46061ce71cd628f7b8778342\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84596d4c7-k2qxb" May 16 03:45:00.107737 kubelet[2681]: E0516 03:45:00.104226 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84596d4c7-k2qxb_calico-system(67ef62ce-b02f-4a5b-bac2-8712ad130e81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84596d4c7-k2qxb_calico-system(67ef62ce-b02f-4a5b-bac2-8712ad130e81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8ba9afc430f4a7160d8370f66d140e535c0cf7e46061ce71cd628f7b8778342\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84596d4c7-k2qxb" podUID="67ef62ce-b02f-4a5b-bac2-8712ad130e81" May 16 03:45:00.116285 containerd[1487]: time="2025-05-16T03:45:00.116225144Z" level=error msg="Failed to destroy network for sandbox \"6d6bd45ae22e0a0b8ba8a845aa23ac1cdb941fa8fb2ed4ed7765db27a3635bbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:00.116681 containerd[1487]: time="2025-05-16T03:45:00.116641600Z" level=error msg="Failed to destroy network for sandbox \"dd5dd5f7da427ab34e7b15e7ef8e71c0ba2f0565ea48d7299a04d9812ae75abd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:00.119827 containerd[1487]: time="2025-05-16T03:45:00.119771858Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bddd6c9b-n5m5g,Uid:f88d5892-6c0a-4970-a316-70b9e1bedfbf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d6bd45ae22e0a0b8ba8a845aa23ac1cdb941fa8fb2ed4ed7765db27a3635bbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:00.120493 kubelet[2681]: E0516 03:45:00.120428 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d6bd45ae22e0a0b8ba8a845aa23ac1cdb941fa8fb2ed4ed7765db27a3635bbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:00.120683 kubelet[2681]: E0516 03:45:00.120617 2681 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6d6bd45ae22e0a0b8ba8a845aa23ac1cdb941fa8fb2ed4ed7765db27a3635bbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bddd6c9b-n5m5g" May 16 03:45:00.120683 kubelet[2681]: E0516 03:45:00.120663 2681 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d6bd45ae22e0a0b8ba8a845aa23ac1cdb941fa8fb2ed4ed7765db27a3635bbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bddd6c9b-n5m5g" May 16 03:45:00.122062 kubelet[2681]: E0516 03:45:00.120732 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54bddd6c9b-n5m5g_calico-apiserver(f88d5892-6c0a-4970-a316-70b9e1bedfbf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54bddd6c9b-n5m5g_calico-apiserver(f88d5892-6c0a-4970-a316-70b9e1bedfbf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d6bd45ae22e0a0b8ba8a845aa23ac1cdb941fa8fb2ed4ed7765db27a3635bbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54bddd6c9b-n5m5g" podUID="f88d5892-6c0a-4970-a316-70b9e1bedfbf" May 16 03:45:00.122062 kubelet[2681]: E0516 03:45:00.121817 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd5dd5f7da427ab34e7b15e7ef8e71c0ba2f0565ea48d7299a04d9812ae75abd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:00.122062 kubelet[2681]: E0516 03:45:00.122003 2681 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd5dd5f7da427ab34e7b15e7ef8e71c0ba2f0565ea48d7299a04d9812ae75abd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-2l87g" May 16 03:45:00.122294 containerd[1487]: time="2025-05-16T03:45:00.121173733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-2l87g,Uid:c45bc411-b957-4b16-8a3d-04432d83d3b5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd5dd5f7da427ab34e7b15e7ef8e71c0ba2f0565ea48d7299a04d9812ae75abd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:00.122386 kubelet[2681]: E0516 03:45:00.122042 2681 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd5dd5f7da427ab34e7b15e7ef8e71c0ba2f0565ea48d7299a04d9812ae75abd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-2l87g" May 16 03:45:00.122386 kubelet[2681]: E0516 03:45:00.122234 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-2l87g_calico-system(c45bc411-b957-4b16-8a3d-04432d83d3b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-2l87g_calico-system(c45bc411-b957-4b16-8a3d-04432d83d3b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd5dd5f7da427ab34e7b15e7ef8e71c0ba2f0565ea48d7299a04d9812ae75abd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-2l87g" podUID="c45bc411-b957-4b16-8a3d-04432d83d3b5" May 16 03:45:00.129383 containerd[1487]: time="2025-05-16T03:45:00.129191906Z" level=error msg="Failed to destroy network for sandbox \"48b4dfea9fa4878a0b2e96a83e24131723a570b1b653307205540813c4c178a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:00.131078 containerd[1487]: time="2025-05-16T03:45:00.131015447Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bddd6c9b-hrzzl,Uid:f34dc699-7d69-4b18-87db-f8c213550070,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"48b4dfea9fa4878a0b2e96a83e24131723a570b1b653307205540813c4c178a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:00.131317 kubelet[2681]: E0516 03:45:00.131280 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48b4dfea9fa4878a0b2e96a83e24131723a570b1b653307205540813c4c178a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:00.131436 kubelet[2681]: E0516 03:45:00.131361 2681 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48b4dfea9fa4878a0b2e96a83e24131723a570b1b653307205540813c4c178a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bddd6c9b-hrzzl" May 16 03:45:00.131436 kubelet[2681]: E0516 03:45:00.131389 2681 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48b4dfea9fa4878a0b2e96a83e24131723a570b1b653307205540813c4c178a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bddd6c9b-hrzzl" May 16 03:45:00.131531 kubelet[2681]: E0516 03:45:00.131441 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-54bddd6c9b-hrzzl_calico-apiserver(f34dc699-7d69-4b18-87db-f8c213550070)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54bddd6c9b-hrzzl_calico-apiserver(f34dc699-7d69-4b18-87db-f8c213550070)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48b4dfea9fa4878a0b2e96a83e24131723a570b1b653307205540813c4c178a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54bddd6c9b-hrzzl" podUID="f34dc699-7d69-4b18-87db-f8c213550070" May 16 03:45:00.681805 systemd[1]: run-netns-cni\x2d47a952a3\x2d0c87\x2d69fd\x2d2462\x2de323f4376db2.mount: Deactivated successfully. May 16 03:45:10.136288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount403531680.mount: Deactivated successfully. May 16 03:45:10.580640 containerd[1487]: time="2025-05-16T03:45:10.579267784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nkgqf,Uid:2d89f57b-12d9-441c-854f-90be519acbd7,Namespace:calico-system,Attempt:0,}" May 16 03:45:10.590769 containerd[1487]: time="2025-05-16T03:45:10.589554691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pd7wp,Uid:ede9dc66-0a0b-4839-a3fa-93460a933576,Namespace:kube-system,Attempt:0,}" May 16 03:45:10.594387 containerd[1487]: time="2025-05-16T03:45:10.591539576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-698669cd56-mg8lh,Uid:81d5818d-511b-4617-9b37-6cbbb63bf4a0,Namespace:calico-system,Attempt:0,}" May 16 03:45:10.851255 containerd[1487]: time="2025-05-16T03:45:10.850869834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:45:10.859376 containerd[1487]: time="2025-05-16T03:45:10.857966287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 16 03:45:10.884395 containerd[1487]: time="2025-05-16T03:45:10.883624738Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:45:10.913077 containerd[1487]: time="2025-05-16T03:45:10.913014670Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:45:10.919168 containerd[1487]: time="2025-05-16T03:45:10.919122564Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 11.088680921s" May 16 03:45:10.919983 containerd[1487]: time="2025-05-16T03:45:10.919958326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 16 03:45:11.005396 containerd[1487]: time="2025-05-16T03:45:11.005285323Z" level=info msg="CreateContainer within sandbox \"cc0aa20a8a3b98d1d99075009a985ea612a8309ff328392630339074d7980579\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" 
May 16 03:45:11.048116 containerd[1487]: time="2025-05-16T03:45:11.048071076Z" level=info msg="Container 3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d: CDI devices from CRI Config.CDIDevices: []" May 16 03:45:11.080750 containerd[1487]: time="2025-05-16T03:45:11.080696344Z" level=info msg="CreateContainer within sandbox \"cc0aa20a8a3b98d1d99075009a985ea612a8309ff328392630339074d7980579\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d\"" May 16 03:45:11.082072 containerd[1487]: time="2025-05-16T03:45:11.082035774Z" level=info msg="StartContainer for \"3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d\"" May 16 03:45:11.088084 containerd[1487]: time="2025-05-16T03:45:11.088043205Z" level=info msg="connecting to shim 3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d" address="unix:///run/containerd/s/669fd5911b464316174b07f75c0a3f868332dc685d6d5478a33277c771302323" protocol=ttrpc version=3 May 16 03:45:11.093042 containerd[1487]: time="2025-05-16T03:45:11.092990723Z" level=error msg="Failed to destroy network for sandbox \"321167132c259a35a7f5f96333e60f7e24329f3327c715bcf4e67daa74473506\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:11.097090 containerd[1487]: time="2025-05-16T03:45:11.096710671Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pd7wp,Uid:ede9dc66-0a0b-4839-a3fa-93460a933576,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"321167132c259a35a7f5f96333e60f7e24329f3327c715bcf4e67daa74473506\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:11.098739 kubelet[2681]: E0516 03:45:11.097578 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"321167132c259a35a7f5f96333e60f7e24329f3327c715bcf4e67daa74473506\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:11.098739 kubelet[2681]: E0516 03:45:11.097723 2681 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"321167132c259a35a7f5f96333e60f7e24329f3327c715bcf4e67daa74473506\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pd7wp" May 16 03:45:11.098739 kubelet[2681]: E0516 03:45:11.097785 2681 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"321167132c259a35a7f5f96333e60f7e24329f3327c715bcf4e67daa74473506\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pd7wp" May 16 03:45:11.099859 kubelet[2681]: E0516 03:45:11.099179 2681 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pd7wp_kube-system(ede9dc66-0a0b-4839-a3fa-93460a933576)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pd7wp_kube-system(ede9dc66-0a0b-4839-a3fa-93460a933576)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"321167132c259a35a7f5f96333e60f7e24329f3327c715bcf4e67daa74473506\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pd7wp" podUID="ede9dc66-0a0b-4839-a3fa-93460a933576" May 16 03:45:11.125680 containerd[1487]: time="2025-05-16T03:45:11.125275490Z" level=error msg="Failed to destroy network for sandbox \"da49eeecef3c24d3de662c10ba7c0a9e9b0595b2030a550635a9e0983d40da05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:11.127591 containerd[1487]: time="2025-05-16T03:45:11.127551432Z" level=error msg="Failed to destroy network for sandbox \"d9fbf9e62a6e229c9c1478c03de938311466199a4bd9a9ba10bc5a1fccb73669\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:11.130547 containerd[1487]: time="2025-05-16T03:45:11.130237475Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nkgqf,Uid:2d89f57b-12d9-441c-854f-90be519acbd7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"da49eeecef3c24d3de662c10ba7c0a9e9b0595b2030a550635a9e0983d40da05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:11.131257 kubelet[2681]: E0516 03:45:11.130729 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da49eeecef3c24d3de662c10ba7c0a9e9b0595b2030a550635a9e0983d40da05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:11.131257 kubelet[2681]: E0516 03:45:11.130816 2681 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da49eeecef3c24d3de662c10ba7c0a9e9b0595b2030a550635a9e0983d40da05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nkgqf" May 16 03:45:11.131257 kubelet[2681]: E0516 03:45:11.130865 2681 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da49eeecef3c24d3de662c10ba7c0a9e9b0595b2030a550635a9e0983d40da05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nkgqf" May 16 03:45:11.131900 kubelet[2681]: E0516 03:45:11.130931 2681 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nkgqf_calico-system(2d89f57b-12d9-441c-854f-90be519acbd7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nkgqf_calico-system(2d89f57b-12d9-441c-854f-90be519acbd7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da49eeecef3c24d3de662c10ba7c0a9e9b0595b2030a550635a9e0983d40da05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nkgqf" podUID="2d89f57b-12d9-441c-854f-90be519acbd7" May 16 03:45:11.136777 systemd[1]: run-netns-cni\x2dd4cffc22\x2d14d5\x2d61a4\x2dee5c\x2d37b8fa469a52.mount: Deactivated successfully. May 16 03:45:11.140185 containerd[1487]: time="2025-05-16T03:45:11.139637801Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-698669cd56-mg8lh,Uid:81d5818d-511b-4617-9b37-6cbbb63bf4a0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9fbf9e62a6e229c9c1478c03de938311466199a4bd9a9ba10bc5a1fccb73669\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:11.136910 systemd[1]: run-netns-cni\x2d0e4742ef\x2d535a\x2d564f\x2da17d\x2d696d54870fb0.mount: Deactivated successfully. May 16 03:45:11.137006 systemd[1]: run-netns-cni\x2db9e1b794\x2dbe25\x2d354c\x2d241e\x2df5772054b30a.mount: Deactivated successfully. May 16 03:45:11.142684 kubelet[2681]: E0516 03:45:11.140967 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9fbf9e62a6e229c9c1478c03de938311466199a4bd9a9ba10bc5a1fccb73669\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 03:45:11.142684 kubelet[2681]: E0516 03:45:11.141038 2681 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9fbf9e62a6e229c9c1478c03de938311466199a4bd9a9ba10bc5a1fccb73669\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-698669cd56-mg8lh" May 16 03:45:11.142684 kubelet[2681]: E0516 03:45:11.141064 2681 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9fbf9e62a6e229c9c1478c03de938311466199a4bd9a9ba10bc5a1fccb73669\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-698669cd56-mg8lh" May 16 03:45:11.142856 kubelet[2681]: E0516 03:45:11.141137 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-698669cd56-mg8lh_calico-system(81d5818d-511b-4617-9b37-6cbbb63bf4a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-698669cd56-mg8lh_calico-system(81d5818d-511b-4617-9b37-6cbbb63bf4a0)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"d9fbf9e62a6e229c9c1478c03de938311466199a4bd9a9ba10bc5a1fccb73669\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-698669cd56-mg8lh" podUID="81d5818d-511b-4617-9b37-6cbbb63bf4a0" May 16 03:45:11.171521 systemd[1]: Started cri-containerd-3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d.scope - libcontainer container 3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d. May 16 03:45:11.235510 containerd[1487]: time="2025-05-16T03:45:11.235453688Z" level=info msg="StartContainer for \"3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d\" returns successfully" May 16 03:45:11.394850 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 16 03:45:11.395079 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 16 03:45:11.569018 containerd[1487]: time="2025-05-16T03:45:11.568973027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bddd6c9b-n5m5g,Uid:f88d5892-6c0a-4970-a316-70b9e1bedfbf,Namespace:calico-apiserver,Attempt:0,}" May 16 03:45:11.906878 systemd-networkd[1398]: cali77d3f365030: Link UP May 16 03:45:11.908747 systemd-networkd[1398]: cali77d3f365030: Gained carrier May 16 03:45:11.947967 kubelet[2681]: I0516 03:45:11.946807 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dwsmq" podStartSLOduration=2.050317503 podStartE2EDuration="28.946728809s" podCreationTimestamp="2025-05-16 03:44:43 +0000 UTC" firstStartedPulling="2025-05-16 03:44:44.026179213 +0000 UTC m=+21.669678081" lastFinishedPulling="2025-05-16 03:45:10.922590529 +0000 UTC m=+48.566089387" observedRunningTime="2025-05-16 03:45:11.940523214 +0000 UTC m=+49.584022102" watchObservedRunningTime="2025-05-16 03:45:11.946728809 +0000 UTC m=+49.590227667" May 16 03:45:11.949702 containerd[1487]: 2025-05-16 03:45:11.637 [INFO][3830] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 16 03:45:11.949702 containerd[1487]: 2025-05-16 03:45:11.716 [INFO][3830] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--n5m5g-eth0 calico-apiserver-54bddd6c9b- calico-apiserver f88d5892-6c0a-4970-a316-70b9e1bedfbf 864 0 2025-05-16 03:44:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54bddd6c9b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284-0-0-n-184e873f92.novalocal calico-apiserver-54bddd6c9b-n5m5g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali77d3f365030 [] [] }} ContainerID="2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777" Namespace="calico-apiserver" Pod="calico-apiserver-54bddd6c9b-n5m5g" WorkloadEndpoint="ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--n5m5g-" May 16 03:45:11.949702 containerd[1487]: 2025-05-16 03:45:11.718 [INFO][3830] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777" Namespace="calico-apiserver" Pod="calico-apiserver-54bddd6c9b-n5m5g" 
WorkloadEndpoint="ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--n5m5g-eth0" May 16 03:45:11.949702 containerd[1487]: 2025-05-16 03:45:11.775 [INFO][3841] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777" HandleID="k8s-pod-network.2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777" Workload="ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--n5m5g-eth0" May 16 03:45:11.950326 containerd[1487]: 2025-05-16 03:45:11.775 [INFO][3841] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777" HandleID="k8s-pod-network.2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777" Workload="ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--n5m5g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf760), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284-0-0-n-184e873f92.novalocal", "pod":"calico-apiserver-54bddd6c9b-n5m5g", "timestamp":"2025-05-16 03:45:11.775222894 +0000 UTC"}, Hostname:"ci-4284-0-0-n-184e873f92.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 03:45:11.950326 containerd[1487]: 2025-05-16 03:45:11.776 [INFO][3841] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 03:45:11.950326 containerd[1487]: 2025-05-16 03:45:11.776 [INFO][3841] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 03:45:11.950326 containerd[1487]: 2025-05-16 03:45:11.776 [INFO][3841] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-n-184e873f92.novalocal' May 16 03:45:11.950326 containerd[1487]: 2025-05-16 03:45:11.824 [INFO][3841] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777" host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:11.950326 containerd[1487]: 2025-05-16 03:45:11.838 [INFO][3841] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:11.950326 containerd[1487]: 2025-05-16 03:45:11.845 [INFO][3841] ipam/ipam.go 511: Trying affinity for 192.168.122.0/26 host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:11.950326 containerd[1487]: 2025-05-16 03:45:11.847 [INFO][3841] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.0/26 host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:11.950326 containerd[1487]: 2025-05-16 03:45:11.850 [INFO][3841] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.0/26 host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:11.957150 containerd[1487]: 2025-05-16 03:45:11.850 [INFO][3841] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.122.0/26 handle="k8s-pod-network.2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777" host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:11.957150 containerd[1487]: 2025-05-16 03:45:11.853 [INFO][3841] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777 May 16 03:45:11.957150 containerd[1487]: 2025-05-16 03:45:11.858 [INFO][3841] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.122.0/26 
handle="k8s-pod-network.2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777" host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:11.957150 containerd[1487]: 2025-05-16 03:45:11.868 [INFO][3841] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.122.1/26] block=192.168.122.0/26 handle="k8s-pod-network.2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777" host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:11.957150 containerd[1487]: 2025-05-16 03:45:11.868 [INFO][3841] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.1/26] handle="k8s-pod-network.2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777" host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:11.957150 containerd[1487]: 2025-05-16 03:45:11.868 [INFO][3841] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 03:45:11.957150 containerd[1487]: 2025-05-16 03:45:11.868 [INFO][3841] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.122.1/26] IPv6=[] ContainerID="2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777" HandleID="k8s-pod-network.2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777" Workload="ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--n5m5g-eth0" May 16 03:45:11.957655 containerd[1487]: 2025-05-16 03:45:11.881 [INFO][3830] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777" Namespace="calico-apiserver" Pod="calico-apiserver-54bddd6c9b-n5m5g" WorkloadEndpoint="ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--n5m5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--n5m5g-eth0", GenerateName:"calico-apiserver-54bddd6c9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f88d5892-6c0a-4970-a316-70b9e1bedfbf", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 3, 44, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54bddd6c9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-184e873f92.novalocal", ContainerID:"", Pod:"calico-apiserver-54bddd6c9b-n5m5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77d3f365030", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 03:45:11.957756 containerd[1487]: 2025-05-16 03:45:11.881 [INFO][3830] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.1/32] ContainerID="2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777" Namespace="calico-apiserver" Pod="calico-apiserver-54bddd6c9b-n5m5g" 
WorkloadEndpoint="ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--n5m5g-eth0" May 16 03:45:11.957756 containerd[1487]: 2025-05-16 03:45:11.881 [INFO][3830] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77d3f365030 ContainerID="2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777" Namespace="calico-apiserver" Pod="calico-apiserver-54bddd6c9b-n5m5g" WorkloadEndpoint="ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--n5m5g-eth0" May 16 03:45:11.957756 containerd[1487]: 2025-05-16 03:45:11.910 [INFO][3830] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777" Namespace="calico-apiserver" Pod="calico-apiserver-54bddd6c9b-n5m5g" WorkloadEndpoint="ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--n5m5g-eth0" May 16 03:45:11.957878 containerd[1487]: 2025-05-16 03:45:11.910 [INFO][3830] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777" Namespace="calico-apiserver" Pod="calico-apiserver-54bddd6c9b-n5m5g" WorkloadEndpoint="ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--n5m5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--n5m5g-eth0", GenerateName:"calico-apiserver-54bddd6c9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f88d5892-6c0a-4970-a316-70b9e1bedfbf", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 3, 44, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54bddd6c9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-184e873f92.novalocal", ContainerID:"2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777", Pod:"calico-apiserver-54bddd6c9b-n5m5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77d3f365030", MAC:"1e:7b:14:9d:00:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 03:45:11.957959 containerd[1487]: 2025-05-16 03:45:11.938 [INFO][3830] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777" Namespace="calico-apiserver" Pod="calico-apiserver-54bddd6c9b-n5m5g" WorkloadEndpoint="ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--n5m5g-eth0" May 16 03:45:12.027133 kubelet[2681]: I0516 03:45:12.027070 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/81d5818d-511b-4617-9b37-6cbbb63bf4a0-whisker-ca-bundle\") pod \"81d5818d-511b-4617-9b37-6cbbb63bf4a0\" (UID: \"81d5818d-511b-4617-9b37-6cbbb63bf4a0\") " May 16 03:45:12.027133 kubelet[2681]: I0516 03:45:12.027126 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/81d5818d-511b-4617-9b37-6cbbb63bf4a0-whisker-backend-key-pair\") pod \"81d5818d-511b-4617-9b37-6cbbb63bf4a0\" (UID: \"81d5818d-511b-4617-9b37-6cbbb63bf4a0\") " May 16 03:45:12.028332 kubelet[2681]: I0516 03:45:12.027172 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gspgm\" (UniqueName: \"kubernetes.io/projected/81d5818d-511b-4617-9b37-6cbbb63bf4a0-kube-api-access-gspgm\") pod \"81d5818d-511b-4617-9b37-6cbbb63bf4a0\" (UID: \"81d5818d-511b-4617-9b37-6cbbb63bf4a0\") " May 16 03:45:12.029884 kubelet[2681]: I0516 03:45:12.029628 2681 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81d5818d-511b-4617-9b37-6cbbb63bf4a0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "81d5818d-511b-4617-9b37-6cbbb63bf4a0" (UID: "81d5818d-511b-4617-9b37-6cbbb63bf4a0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 03:45:12.036081 containerd[1487]: time="2025-05-16T03:45:12.034313759Z" level=info msg="connecting to shim 2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777" address="unix:///run/containerd/s/ff40aa2050a18d455aa8cd68a6ebcb084ad1968254770c182cc6838319557848" namespace=k8s.io protocol=ttrpc version=3 May 16 03:45:12.043632 kubelet[2681]: I0516 03:45:12.043587 2681 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81d5818d-511b-4617-9b37-6cbbb63bf4a0-kube-api-access-gspgm" (OuterVolumeSpecName: "kube-api-access-gspgm") pod "81d5818d-511b-4617-9b37-6cbbb63bf4a0" (UID: "81d5818d-511b-4617-9b37-6cbbb63bf4a0"). InnerVolumeSpecName "kube-api-access-gspgm". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 03:45:12.043921 kubelet[2681]: I0516 03:45:12.043509 2681 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81d5818d-511b-4617-9b37-6cbbb63bf4a0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "81d5818d-511b-4617-9b37-6cbbb63bf4a0" (UID: "81d5818d-511b-4617-9b37-6cbbb63bf4a0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 16 03:45:12.045135 systemd[1]: var-lib-kubelet-pods-81d5818d\x2d511b\x2d4617\x2d9b37\x2d6cbbb63bf4a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgspgm.mount: Deactivated successfully. May 16 03:45:12.045306 systemd[1]: var-lib-kubelet-pods-81d5818d\x2d511b\x2d4617\x2d9b37\x2d6cbbb63bf4a0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 16 03:45:12.102738 systemd[1]: Started cri-containerd-2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777.scope - libcontainer container 2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777. 
May 16 03:45:12.128562 kubelet[2681]: I0516 03:45:12.128450 2681 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gspgm\" (UniqueName: \"kubernetes.io/projected/81d5818d-511b-4617-9b37-6cbbb63bf4a0-kube-api-access-gspgm\") on node \"ci-4284-0-0-n-184e873f92.novalocal\" DevicePath \"\"" May 16 03:45:12.128562 kubelet[2681]: I0516 03:45:12.128507 2681 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81d5818d-511b-4617-9b37-6cbbb63bf4a0-whisker-ca-bundle\") on node \"ci-4284-0-0-n-184e873f92.novalocal\" DevicePath \"\"" May 16 03:45:12.128562 kubelet[2681]: I0516 03:45:12.128523 2681 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/81d5818d-511b-4617-9b37-6cbbb63bf4a0-whisker-backend-key-pair\") on node \"ci-4284-0-0-n-184e873f92.novalocal\" DevicePath \"\"" May 16 03:45:12.175914 containerd[1487]: time="2025-05-16T03:45:12.175581315Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d\" id:\"4129c0f36abc83d9e4edd3810f3dbd2f66aac9dbf3965e35ab7ab667c11af04e\" pid:3872 exit_status:1 exited_at:{seconds:1747367112 nanos:174904612}" May 16 03:45:12.277180 containerd[1487]: time="2025-05-16T03:45:12.277099133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bddd6c9b-n5m5g,Uid:f88d5892-6c0a-4970-a316-70b9e1bedfbf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777\"" May 16 03:45:12.282124 containerd[1487]: time="2025-05-16T03:45:12.281946912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 16 03:45:12.584207 systemd[1]: Removed slice kubepods-besteffort-pod81d5818d_511b_4617_9b37_6cbbb63bf4a0.slice - libcontainer container kubepods-besteffort-pod81d5818d_511b_4617_9b37_6cbbb63bf4a0.slice. May 16 03:45:12.969829 systemd-networkd[1398]: cali77d3f365030: Gained IPv6LL May 16 03:45:13.040279 systemd[1]: Created slice kubepods-besteffort-pod0e008683_cb25_4e5f_a2b2_2cb1da24fd95.slice - libcontainer container kubepods-besteffort-pod0e008683_cb25_4e5f_a2b2_2cb1da24fd95.slice. 
May 16 03:45:13.137126 kubelet[2681]: I0516 03:45:13.137068 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0e008683-cb25-4e5f-a2b2-2cb1da24fd95-whisker-backend-key-pair\") pod \"whisker-6ddb445bcd-lff45\" (UID: \"0e008683-cb25-4e5f-a2b2-2cb1da24fd95\") " pod="calico-system/whisker-6ddb445bcd-lff45" May 16 03:45:13.137765 kubelet[2681]: I0516 03:45:13.137712 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e008683-cb25-4e5f-a2b2-2cb1da24fd95-whisker-ca-bundle\") pod \"whisker-6ddb445bcd-lff45\" (UID: \"0e008683-cb25-4e5f-a2b2-2cb1da24fd95\") " pod="calico-system/whisker-6ddb445bcd-lff45" May 16 03:45:13.137952 kubelet[2681]: I0516 03:45:13.137877 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpnx8\" (UniqueName: \"kubernetes.io/projected/0e008683-cb25-4e5f-a2b2-2cb1da24fd95-kube-api-access-fpnx8\") pod \"whisker-6ddb445bcd-lff45\" (UID: \"0e008683-cb25-4e5f-a2b2-2cb1da24fd95\") " pod="calico-system/whisker-6ddb445bcd-lff45" May 16 03:45:13.349102 containerd[1487]: time="2025-05-16T03:45:13.348953544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ddb445bcd-lff45,Uid:0e008683-cb25-4e5f-a2b2-2cb1da24fd95,Namespace:calico-system,Attempt:0,}" May 16 03:45:13.529205 containerd[1487]: time="2025-05-16T03:45:13.529127679Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d\" id:\"c52ea996008640ed1da141e71389856b6180e1b5af358d2c218f3f3efbe180de\" pid:3948 exit_status:1 exited_at:{seconds:1747367113 nanos:526932240}" May 16 03:45:13.568711 containerd[1487]: time="2025-05-16T03:45:13.568655004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bddd6c9b-hrzzl,Uid:f34dc699-7d69-4b18-87db-f8c213550070,Namespace:calico-apiserver,Attempt:0,}" May 16 03:45:13.674622 systemd-networkd[1398]: cali99a100a330e: Link UP May 16 03:45:13.678494 systemd-networkd[1398]: cali99a100a330e: Gained carrier May 16 03:45:13.711940 containerd[1487]: 2025-05-16 03:45:13.439 [INFO][4044] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 16 03:45:13.711940 containerd[1487]: 2025-05-16 03:45:13.479 [INFO][4044] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--n--184e873f92.novalocal-k8s-whisker--6ddb445bcd--lff45-eth0 whisker-6ddb445bcd- calico-system 0e008683-cb25-4e5f-a2b2-2cb1da24fd95 952 0 2025-05-16 03:45:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6ddb445bcd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4284-0-0-n-184e873f92.novalocal whisker-6ddb445bcd-lff45 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali99a100a330e [] [] }} ContainerID="4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e" Namespace="calico-system" Pod="whisker-6ddb445bcd-lff45" WorkloadEndpoint="ci--4284--0--0--n--184e873f92.novalocal-k8s-whisker--6ddb445bcd--lff45-" May 16 03:45:13.711940 containerd[1487]: 2025-05-16 03:45:13.479 [INFO][4044] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e" Namespace="calico-system" Pod="whisker-6ddb445bcd-lff45" WorkloadEndpoint="ci--4284--0--0--n--184e873f92.novalocal-k8s-whisker--6ddb445bcd--lff45-eth0" May 16 03:45:13.711940 containerd[1487]: 2025-05-16 03:45:13.555 [INFO][4058] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e" HandleID="k8s-pod-network.4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e" Workload="ci--4284--0--0--n--184e873f92.novalocal-k8s-whisker--6ddb445bcd--lff45-eth0" May 16 03:45:13.712282 containerd[1487]: 2025-05-16 03:45:13.556 [INFO][4058] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e" HandleID="k8s-pod-network.4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e" Workload="ci--4284--0--0--n--184e873f92.novalocal-k8s-whisker--6ddb445bcd--lff45-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f990), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284-0-0-n-184e873f92.novalocal", "pod":"whisker-6ddb445bcd-lff45", "timestamp":"2025-05-16 03:45:13.555836336 +0000 UTC"}, Hostname:"ci-4284-0-0-n-184e873f92.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 03:45:13.712282 containerd[1487]: 2025-05-16 03:45:13.556 [INFO][4058] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 03:45:13.712282 containerd[1487]: 2025-05-16 03:45:13.556 [INFO][4058] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 03:45:13.712282 containerd[1487]: 2025-05-16 03:45:13.557 [INFO][4058] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-n-184e873f92.novalocal' May 16 03:45:13.712282 containerd[1487]: 2025-05-16 03:45:13.570 [INFO][4058] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e" host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:13.712282 containerd[1487]: 2025-05-16 03:45:13.592 [INFO][4058] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:13.712282 containerd[1487]: 2025-05-16 03:45:13.611 [INFO][4058] ipam/ipam.go 511: Trying affinity for 192.168.122.0/26 host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:13.712282 containerd[1487]: 2025-05-16 03:45:13.618 [INFO][4058] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.0/26 host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:13.712282 containerd[1487]: 2025-05-16 03:45:13.626 [INFO][4058] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.0/26 host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:13.714805 containerd[1487]: 2025-05-16 03:45:13.626 [INFO][4058] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.122.0/26 handle="k8s-pod-network.4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e" host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:13.714805 containerd[1487]: 2025-05-16 03:45:13.633 [INFO][4058] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e May 16 03:45:13.714805 containerd[1487]: 2025-05-16 03:45:13.643 [INFO][4058] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.122.0/26 handle="k8s-pod-network.4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e" host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:13.714805 containerd[1487]: 2025-05-16 03:45:13.657 [INFO][4058] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.122.2/26] block=192.168.122.0/26 handle="k8s-pod-network.4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e" host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:13.714805 containerd[1487]: 2025-05-16 03:45:13.663 [INFO][4058] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.2/26] handle="k8s-pod-network.4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e" host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:13.714805 containerd[1487]: 2025-05-16 03:45:13.663 [INFO][4058] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 03:45:13.714805 containerd[1487]: 2025-05-16 03:45:13.663 [INFO][4058] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.122.2/26] IPv6=[] ContainerID="4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e" HandleID="k8s-pod-network.4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e" Workload="ci--4284--0--0--n--184e873f92.novalocal-k8s-whisker--6ddb445bcd--lff45-eth0" May 16 03:45:13.715061 containerd[1487]: 2025-05-16 03:45:13.666 [INFO][4044] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e" Namespace="calico-system" Pod="whisker-6ddb445bcd-lff45" WorkloadEndpoint="ci--4284--0--0--n--184e873f92.novalocal-k8s-whisker--6ddb445bcd--lff45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--184e873f92.novalocal-k8s-whisker--6ddb445bcd--lff45-eth0", GenerateName:"whisker-6ddb445bcd-", Namespace:"calico-system", SelfLink:"", UID:"0e008683-cb25-4e5f-a2b2-2cb1da24fd95", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 3, 45, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6ddb445bcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-184e873f92.novalocal", ContainerID:"", Pod:"whisker-6ddb445bcd-lff45", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.122.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali99a100a330e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 03:45:13.717694 containerd[1487]: 2025-05-16 03:45:13.666 [INFO][4044] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.2/32] ContainerID="4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e" Namespace="calico-system" Pod="whisker-6ddb445bcd-lff45" WorkloadEndpoint="ci--4284--0--0--n--184e873f92.novalocal-k8s-whisker--6ddb445bcd--lff45-eth0" May 16 03:45:13.717694 containerd[1487]: 2025-05-16 03:45:13.666 [INFO][4044] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali99a100a330e ContainerID="4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e" Namespace="calico-system" Pod="whisker-6ddb445bcd-lff45" WorkloadEndpoint="ci--4284--0--0--n--184e873f92.novalocal-k8s-whisker--6ddb445bcd--lff45-eth0" May 16 03:45:13.717694 containerd[1487]: 2025-05-16 03:45:13.676 [INFO][4044] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e" Namespace="calico-system" Pod="whisker-6ddb445bcd-lff45" WorkloadEndpoint="ci--4284--0--0--n--184e873f92.novalocal-k8s-whisker--6ddb445bcd--lff45-eth0" May 16 03:45:13.717848 containerd[1487]: 2025-05-16 03:45:13.682 [INFO][4044] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e" Namespace="calico-system" Pod="whisker-6ddb445bcd-lff45" WorkloadEndpoint="ci--4284--0--0--n--184e873f92.novalocal-k8s-whisker--6ddb445bcd--lff45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--184e873f92.novalocal-k8s-whisker--6ddb445bcd--lff45-eth0", GenerateName:"whisker-6ddb445bcd-", Namespace:"calico-system", SelfLink:"", UID:"0e008683-cb25-4e5f-a2b2-2cb1da24fd95", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 3, 45, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6ddb445bcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-184e873f92.novalocal", ContainerID:"4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e", Pod:"whisker-6ddb445bcd-lff45", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.122.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali99a100a330e", MAC:"b6:db:3a:5f:56:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 03:45:13.719935 containerd[1487]: 2025-05-16 03:45:13.707 [INFO][4044] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e" Namespace="calico-system" Pod="whisker-6ddb445bcd-lff45" WorkloadEndpoint="ci--4284--0--0--n--184e873f92.novalocal-k8s-whisker--6ddb445bcd--lff45-eth0" May 16 03:45:13.780387 containerd[1487]: time="2025-05-16T03:45:13.780165009Z" level=info msg="connecting to shim 4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e" address="unix:///run/containerd/s/4736ad5151115912e4653ecd9d907a0abe4ca7edf66ee121b6f014cbd21cefe0" namespace=k8s.io protocol=ttrpc version=3 May 16 03:45:13.879810 systemd[1]: Started cri-containerd-4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e.scope - libcontainer container 4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e. 
May 16 03:45:13.949682 systemd-networkd[1398]: cali309298626d0: Link UP May 16 03:45:13.954280 systemd-networkd[1398]: cali309298626d0: Gained carrier May 16 03:45:13.995449 containerd[1487]: 2025-05-16 03:45:13.647 [INFO][4065] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 16 03:45:13.995449 containerd[1487]: 2025-05-16 03:45:13.702 [INFO][4065] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--hrzzl-eth0 calico-apiserver-54bddd6c9b- calico-apiserver f34dc699-7d69-4b18-87db-f8c213550070 868 0 2025-05-16 03:44:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54bddd6c9b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284-0-0-n-184e873f92.novalocal calico-apiserver-54bddd6c9b-hrzzl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali309298626d0 [] [] }} ContainerID="a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc" Namespace="calico-apiserver" Pod="calico-apiserver-54bddd6c9b-hrzzl" WorkloadEndpoint="ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--hrzzl-" May 16 03:45:13.995449 containerd[1487]: 2025-05-16 03:45:13.702 [INFO][4065] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc" Namespace="calico-apiserver" Pod="calico-apiserver-54bddd6c9b-hrzzl" WorkloadEndpoint="ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--hrzzl-eth0" May 16 03:45:13.995449 containerd[1487]: 2025-05-16 03:45:13.817 [INFO][4084] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc" HandleID="k8s-pod-network.a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc" Workload="ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--hrzzl-eth0" May 16 03:45:13.996127 containerd[1487]: 2025-05-16 03:45:13.817 [INFO][4084] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc" HandleID="k8s-pod-network.a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc" Workload="ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--hrzzl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f8f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284-0-0-n-184e873f92.novalocal", "pod":"calico-apiserver-54bddd6c9b-hrzzl", "timestamp":"2025-05-16 03:45:13.817571435 +0000 UTC"}, Hostname:"ci-4284-0-0-n-184e873f92.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 03:45:13.996127 containerd[1487]: 2025-05-16 03:45:13.817 [INFO][4084] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 03:45:13.996127 containerd[1487]: 2025-05-16 03:45:13.817 [INFO][4084] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 03:45:13.996127 containerd[1487]: 2025-05-16 03:45:13.818 [INFO][4084] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-n-184e873f92.novalocal' May 16 03:45:13.996127 containerd[1487]: 2025-05-16 03:45:13.835 [INFO][4084] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc" host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:13.996127 containerd[1487]: 2025-05-16 03:45:13.848 [INFO][4084] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:13.996127 containerd[1487]: 2025-05-16 03:45:13.867 [INFO][4084] ipam/ipam.go 511: Trying affinity for 192.168.122.0/26 host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:13.996127 containerd[1487]: 2025-05-16 03:45:13.876 [INFO][4084] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.0/26 host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:13.996127 containerd[1487]: 2025-05-16 03:45:13.881 [INFO][4084] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.0/26 host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:13.996641 containerd[1487]: 2025-05-16 03:45:13.881 [INFO][4084] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.122.0/26 handle="k8s-pod-network.a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc" host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:13.996641 containerd[1487]: 2025-05-16 03:45:13.884 [INFO][4084] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc May 16 03:45:13.996641 containerd[1487]: 2025-05-16 03:45:13.906 [INFO][4084] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.122.0/26 handle="k8s-pod-network.a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc" host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:13.996641 containerd[1487]: 2025-05-16 03:45:13.920 [INFO][4084] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.122.3/26] block=192.168.122.0/26 handle="k8s-pod-network.a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc" host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:13.996641 containerd[1487]: 2025-05-16 03:45:13.920 [INFO][4084] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.3/26] handle="k8s-pod-network.a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc" host="ci-4284-0-0-n-184e873f92.novalocal" May 16 03:45:13.996641 containerd[1487]: 2025-05-16 03:45:13.920 [INFO][4084] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 03:45:13.996641 containerd[1487]: 2025-05-16 03:45:13.920 [INFO][4084] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.122.3/26] IPv6=[] ContainerID="a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc" HandleID="k8s-pod-network.a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc" Workload="ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--hrzzl-eth0" May 16 03:45:13.997696 containerd[1487]: 2025-05-16 03:45:13.927 [INFO][4065] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc" Namespace="calico-apiserver" Pod="calico-apiserver-54bddd6c9b-hrzzl" WorkloadEndpoint="ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--hrzzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--hrzzl-eth0", GenerateName:"calico-apiserver-54bddd6c9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f34dc699-7d69-4b18-87db-f8c213550070", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 3, 44, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54bddd6c9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-184e873f92.novalocal", ContainerID:"", Pod:"calico-apiserver-54bddd6c9b-hrzzl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali309298626d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 03:45:13.997779 containerd[1487]: 2025-05-16 03:45:13.928 [INFO][4065] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.3/32] ContainerID="a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc" Namespace="calico-apiserver" Pod="calico-apiserver-54bddd6c9b-hrzzl" WorkloadEndpoint="ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--hrzzl-eth0" May 16 03:45:13.997779 containerd[1487]: 2025-05-16 03:45:13.928 [INFO][4065] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali309298626d0 ContainerID="a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc" Namespace="calico-apiserver" Pod="calico-apiserver-54bddd6c9b-hrzzl" WorkloadEndpoint="ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--hrzzl-eth0" May 16 03:45:13.997779 containerd[1487]: 2025-05-16 03:45:13.958 [INFO][4065] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc" Namespace="calico-apiserver" Pod="calico-apiserver-54bddd6c9b-hrzzl" 
WorkloadEndpoint="ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--hrzzl-eth0" May 16 03:45:13.997864 containerd[1487]: 2025-05-16 03:45:13.963 [INFO][4065] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc" Namespace="calico-apiserver" Pod="calico-apiserver-54bddd6c9b-hrzzl" WorkloadEndpoint="ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--hrzzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--hrzzl-eth0", GenerateName:"calico-apiserver-54bddd6c9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f34dc699-7d69-4b18-87db-f8c213550070", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 3, 44, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54bddd6c9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-184e873f92.novalocal", ContainerID:"a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc", Pod:"calico-apiserver-54bddd6c9b-hrzzl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali309298626d0", MAC:"56:1e:60:5c:3f:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 03:45:13.997943 containerd[1487]: 2025-05-16 03:45:13.986 [INFO][4065] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc" Namespace="calico-apiserver" Pod="calico-apiserver-54bddd6c9b-hrzzl" WorkloadEndpoint="ci--4284--0--0--n--184e873f92.novalocal-k8s-calico--apiserver--54bddd6c9b--hrzzl-eth0" May 16 03:45:14.059448 containerd[1487]: time="2025-05-16T03:45:14.059307383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ddb445bcd-lff45,Uid:0e008683-cb25-4e5f-a2b2-2cb1da24fd95,Namespace:calico-system,Attempt:0,} returns sandbox id \"4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e\"" May 16 03:45:14.213379 kernel: bpftool[4176]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 16 03:45:14.576278 containerd[1487]: time="2025-05-16T03:45:14.576209473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-2l87g,Uid:c45bc411-b957-4b16-8a3d-04432d83d3b5,Namespace:calico-system,Attempt:0,}" May 16 03:45:14.585502 containerd[1487]: time="2025-05-16T03:45:14.585144433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84596d4c7-k2qxb,Uid:67ef62ce-b02f-4a5b-bac2-8712ad130e81,Namespace:calico-system,Attempt:0,}" May 16 03:45:14.599155 kubelet[2681]: I0516 03:45:14.599084 2681 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81d5818d-511b-4617-9b37-6cbbb63bf4a0" path="/var/lib/kubelet/pods/81d5818d-511b-4617-9b37-6cbbb63bf4a0/volumes" May 16 03:45:14.641983 systemd-networkd[1398]: vxlan.calico: Link UP May 16 03:45:14.641993 systemd-networkd[1398]: vxlan.calico: Gained carrier May 16 03:45:14.760574 systemd-networkd[1398]: cali99a100a330e: Gained IPv6LL May 16 03:45:15.082668 systemd-networkd[1398]: cali309298626d0: Gained IPv6LL May 16 03:45:15.569872 containerd[1487]: time="2025-05-16T03:45:15.569641320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9nrc7,Uid:d01d1078-b043-4276-a3e9-cede61f0e64b,Namespace:kube-system,Attempt:0,}" May 16 03:45:16.233116 systemd-networkd[1398]: vxlan.calico: Gained IPv6LL May 16 03:45:17.577775 containerd[1487]: time="2025-05-16T03:45:17.577493385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:45:17.579838 containerd[1487]: time="2025-05-16T03:45:17.579780674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 16 03:45:17.581455 containerd[1487]: time="2025-05-16T03:45:17.581410497Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:45:17.584551 containerd[1487]: time="2025-05-16T03:45:17.584500766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 03:45:17.585399 containerd[1487]: time="2025-05-16T03:45:17.585355682Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 5.303341975s" May 16 03:45:17.585507 containerd[1487]: time="2025-05-16T03:45:17.585401268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 16 03:45:17.587961 containerd[1487]: time="2025-05-16T03:45:17.587899573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 16 03:45:17.594736 containerd[1487]: time="2025-05-16T03:45:17.594685437Z" level=info msg="CreateContainer within sandbox \"2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 16 03:45:17.636092 containerd[1487]: time="2025-05-16T03:45:17.634868558Z" level=info msg="Container c90757f40f205229949c4b12d92c7cc617b8201ffb8ce325a24ecbd51737e2ab: CDI devices from CRI Config.CDIDevices: []" May 16 03:45:17.655298 containerd[1487]: time="2025-05-16T03:45:17.655236840Z" level=info msg="CreateContainer within sandbox \"2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c90757f40f205229949c4b12d92c7cc617b8201ffb8ce325a24ecbd51737e2ab\"" May 16 03:45:17.656523 containerd[1487]: time="2025-05-16T03:45:17.656484184Z" level=info msg="StartContainer for 
\"c90757f40f205229949c4b12d92c7cc617b8201ffb8ce325a24ecbd51737e2ab\"" May 16 03:45:17.657926 containerd[1487]: time="2025-05-16T03:45:17.657781793Z" level=info msg="connecting to shim c90757f40f205229949c4b12d92c7cc617b8201ffb8ce325a24ecbd51737e2ab" address="unix:///run/containerd/s/ff40aa2050a18d455aa8cd68a6ebcb084ad1968254770c182cc6838319557848" protocol=ttrpc version=3 May 16 03:45:17.699706 systemd[1]: Started cri-containerd-c90757f40f205229949c4b12d92c7cc617b8201ffb8ce325a24ecbd51737e2ab.scope - libcontainer container c90757f40f205229949c4b12d92c7cc617b8201ffb8ce325a24ecbd51737e2ab. May 16 03:45:17.769707 containerd[1487]: time="2025-05-16T03:45:17.769629473Z" level=info msg="StartContainer for \"c90757f40f205229949c4b12d92c7cc617b8201ffb8ce325a24ecbd51737e2ab\" returns successfully" May 16 03:45:17.963710 containerd[1487]: time="2025-05-16T03:45:17.963481415Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 03:45:17.966057 containerd[1487]: time="2025-05-16T03:45:17.965927733Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 16 03:45:17.966700 containerd[1487]: time="2025-05-16T03:45:17.965981995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 16 03:45:17.967617 kubelet[2681]: E0516 03:45:17.967455 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 03:45:17.967617 kubelet[2681]: E0516 03:45:17.967579 2681 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 03:45:17.978505 kubelet[2681]: E0516 03:45:17.977690 2681 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:76cbc6ad02f847aa9e0def24a4623d07,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fpnx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6ddb445bcd-lff45_calico-system(0e008683-cb25-4e5f-a2b2-2cb1da24fd95): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 03:45:17.980680 containerd[1487]: time="2025-05-16T03:45:17.980568770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 16 03:45:18.336435 containerd[1487]: time="2025-05-16T03:45:18.335519966Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 03:45:18.343017 containerd[1487]: time="2025-05-16T03:45:18.341556409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 16 03:45:18.343017 containerd[1487]: time="2025-05-16T03:45:18.342646808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 16 03:45:18.344465 kubelet[2681]: E0516 03:45:18.343838 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch 
anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 03:45:18.344465 kubelet[2681]: E0516 03:45:18.344006 2681 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 03:45:18.347422 kubelet[2681]: E0516 03:45:18.346855 2681 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fpnx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6ddb445bcd-lff45_calico-system(0e008683-cb25-4e5f-a2b2-2cb1da24fd95): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 03:45:18.349138 kubelet[2681]: E0516 03:45:18.348629 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": 
failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6ddb445bcd-lff45" podUID="0e008683-cb25-4e5f-a2b2-2cb1da24fd95" May 16 03:45:18.974209 kubelet[2681]: E0516 03:45:18.974054 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6ddb445bcd-lff45" podUID="0e008683-cb25-4e5f-a2b2-2cb1da24fd95" May 16 03:45:19.010224 kubelet[2681]: I0516 03:45:19.009985 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54bddd6c9b-n5m5g" podStartSLOduration=34.703678543 podStartE2EDuration="40.009884563s" podCreationTimestamp="2025-05-16 03:44:39 +0000 UTC" firstStartedPulling="2025-05-16 03:45:12.280760169 +0000 UTC m=+49.924259077" lastFinishedPulling="2025-05-16 03:45:17.586966239 +0000 UTC m=+55.230465097" observedRunningTime="2025-05-16 03:45:17.989984426 +0000 UTC m=+55.633483304" watchObservedRunningTime="2025-05-16 03:45:19.009884563 +0000 UTC m=+56.653383421" May 16 03:45:23.575473 containerd[1487]: time="2025-05-16T03:45:23.574565377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pd7wp,Uid:ede9dc66-0a0b-4839-a3fa-93460a933576,Namespace:kube-system,Attempt:0,}" May 16 03:45:26.577414 containerd[1487]: time="2025-05-16T03:45:26.575757205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nkgqf,Uid:2d89f57b-12d9-441c-854f-90be519acbd7,Namespace:calico-system,Attempt:0,}" May 16 03:45:33.571389 containerd[1487]: time="2025-05-16T03:45:33.571200956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 16 03:45:33.925399 containerd[1487]: time="2025-05-16T03:45:33.925029269Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 03:45:33.927644 containerd[1487]: time="2025-05-16T03:45:33.927269400Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 16 03:45:33.927644 containerd[1487]: time="2025-05-16T03:45:33.927532896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 16 03:45:33.928115 kubelet[2681]: E0516 03:45:33.927957 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 03:45:33.930135 kubelet[2681]: E0516 03:45:33.928151 2681 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 03:45:33.930135 kubelet[2681]: E0516 03:45:33.928588 2681 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:76cbc6ad02f847aa9e0def24a4623d07,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fpnx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-6ddb445bcd-lff45_calico-system(0e008683-cb25-4e5f-a2b2-2cb1da24fd95): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 03:45:33.933795 containerd[1487]: time="2025-05-16T03:45:33.933582567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 16 03:45:34.292692 containerd[1487]: time="2025-05-16T03:45:34.292035920Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 03:45:34.293911 containerd[1487]: time="2025-05-16T03:45:34.293770538Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 16 03:45:34.293911 containerd[1487]: time="2025-05-16T03:45:34.293823967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 16 03:45:34.295030 kubelet[2681]: E0516 03:45:34.294264 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 03:45:34.295030 kubelet[2681]: E0516 03:45:34.294361 2681 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 03:45:34.295030 kubelet[2681]: E0516 03:45:34.294553 2681 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fpnx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6ddb445bcd-lff45_calico-system(0e008683-cb25-4e5f-a2b2-2cb1da24fd95): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 03:45:34.296748 kubelet[2681]: E0516 03:45:34.296675 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6ddb445bcd-lff45" podUID="0e008683-cb25-4e5f-a2b2-2cb1da24fd95" May 16 03:45:43.098265 containerd[1487]: time="2025-05-16T03:45:43.097518208Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d\" id:\"a8a37eedd67eadc1a1b50de75bbd52160cc49e06c4bc0bf8bc8715d93a6cda43\" pid:4345 exited_at:{seconds:1747367143 nanos:96170108}" May 16 03:45:43.106382 kubelet[2681]: E0516 03:45:43.102748 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:45:43.203928 kubelet[2681]: E0516 03:45:43.203866 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:45:43.404677 kubelet[2681]: E0516 03:45:43.404306 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:45:43.805162 kubelet[2681]: E0516 03:45:43.805066 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:45:44.196782 kubelet[2681]: I0516 03:45:44.194495 2681 setters.go:618] "Node became not ready" node="ci-4284-0-0-n-184e873f92.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-16T03:45:44Z","lastTransitionTime":"2025-05-16T03:45:44Z","reason":"KubeletNotReady","message":"container runtime is down"} May 16 03:45:44.605967 kubelet[2681]: E0516 03:45:44.605699 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:45:46.207265 kubelet[2681]: E0516 03:45:46.207197 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:45:49.417837 kubelet[2681]: E0516 03:45:49.417757 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:45:54.418897 kubelet[2681]: E0516 03:45:54.418671 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:45:59.420396 kubelet[2681]: E0516 03:45:59.419639 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:46:04.421004 kubelet[2681]: E0516 03:46:04.420882 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:46:09.422002 kubelet[2681]: E0516 03:46:09.421939 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:46:13.053972 containerd[1487]: time="2025-05-16T03:46:13.053562764Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d\" id:\"0c9b795727957f24a315d9e71bccbd60954631f02fc6566194bd446aee4865c2\" pid:4382 exited_at:{seconds:1747367173 nanos:52642568}" May 16 03:46:14.423293 kubelet[2681]: E0516 03:46:14.423193 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:46:19.424105 kubelet[2681]: E0516 03:46:19.424039 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:46:24.425146 kubelet[2681]: E0516 03:46:24.425088 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:46:29.425488 kubelet[2681]: E0516 03:46:29.425427 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:46:34.426562 kubelet[2681]: E0516 03:46:34.426517 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:46:39.427306 kubelet[2681]: E0516 03:46:39.427250 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:46:43.044278 containerd[1487]: time="2025-05-16T03:46:43.043899957Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d\" id:\"6cc18b7ce3b941a57c4403004e741515d8ee8a3c40549632f2454ed3a3307c25\" pid:4421 exited_at:{seconds:1747367203 nanos:43049336}" May 16 03:46:44.428194 kubelet[2681]: E0516 03:46:44.428130 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:46:49.429379 kubelet[2681]: E0516 03:46:49.429132 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:46:54.430169 kubelet[2681]: E0516 03:46:54.430096 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:46:59.430748 kubelet[2681]: E0516 03:46:59.430597 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:47:04.431354 kubelet[2681]: E0516 03:47:04.431267 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:47:09.432524 kubelet[2681]: E0516 03:47:09.432409 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:47:13.182618 containerd[1487]: time="2025-05-16T03:47:13.182504719Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d\" id:\"1a3a748980cb377c31aef50a7b45403c24e3a6a82909844c96844efa68b010b3\" pid:4462 exited_at:{seconds:1747367233 nanos:181573958}" May 16 03:47:14.432925 kubelet[2681]: E0516 03:47:14.432865 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:47:17.674611 kubelet[2681]: E0516 03:47:17.674517 2681 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 16 03:47:17.675214 kubelet[2681]: E0516 03:47:17.674649 2681 kubelet.go:3102] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 16 03:47:19.433746 kubelet[2681]: E0516 03:47:19.433671 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:47:24.433877 kubelet[2681]: E0516 03:47:24.433829 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:47:29.435069 kubelet[2681]: E0516 03:47:29.434990 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:47:34.436019 kubelet[2681]: E0516 03:47:34.435930 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:47:39.437224 kubelet[2681]: E0516 03:47:39.436996 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:47:43.046473 containerd[1487]: time="2025-05-16T03:47:43.045221525Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d\" id:\"9bc214ddfbe7f41e3f0c6648ecbf9eb70b80b69976c33d4ea9cf890b149f53be\" pid:4490 exited_at:{seconds:1747367263 nanos:44445095}" May 16 03:47:44.438556 kubelet[2681]: E0516 03:47:44.438239 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:47:49.438913 kubelet[2681]: E0516 03:47:49.438854 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:47:54.439704 kubelet[2681]: E0516 03:47:54.439644 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:47:59.440281 kubelet[2681]: E0516 03:47:59.440223 2681 kubelet.go:2460] "Skipping pod 
synchronization" err="container runtime is down" May 16 03:48:04.441224 kubelet[2681]: E0516 03:48:04.441140 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:48:09.441432 kubelet[2681]: E0516 03:48:09.441359 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:48:13.208708 containerd[1487]: time="2025-05-16T03:48:13.208141371Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d\" id:\"6a550e9dbc0c2bfb8c5d46e06686a4554a50063f661183aac51ccaeaf37bd71a\" pid:4525 exited_at:{seconds:1747367293 nanos:204991806}" May 16 03:48:14.441691 kubelet[2681]: E0516 03:48:14.441609 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:48:19.454574 kubelet[2681]: E0516 03:48:19.453939 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:48:24.454974 kubelet[2681]: E0516 03:48:24.454893 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:48:29.455134 kubelet[2681]: E0516 03:48:29.455056 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:48:34.455987 kubelet[2681]: E0516 03:48:34.455592 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:48:35.263274 update_engine[1467]: I20250516 03:48:35.262890 1467 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 16 03:48:35.263274 update_engine[1467]: I20250516 03:48:35.263283 1467 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 16 03:48:35.266409 update_engine[1467]: I20250516 03:48:35.264845 1467 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 16 03:48:35.271558 update_engine[1467]: I20250516 03:48:35.270704 1467 omaha_request_params.cc:62] Current group set to alpha May 16 03:48:35.276041 update_engine[1467]: I20250516 03:48:35.275914 1467 update_attempter.cc:499] Already updated boot flags. Skipping. May 16 03:48:35.277402 update_engine[1467]: I20250516 03:48:35.276427 1467 update_attempter.cc:643] Scheduling an action processor start. 
May 16 03:48:35.277402 update_engine[1467]: I20250516 03:48:35.276541 1467 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 16 03:48:35.277402 update_engine[1467]: I20250516 03:48:35.276731 1467 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 16 03:48:35.277402 update_engine[1467]: I20250516 03:48:35.277004 1467 omaha_request_action.cc:271] Posting an Omaha request to disabled May 16 03:48:35.277402 update_engine[1467]: I20250516 03:48:35.277034 1467 omaha_request_action.cc:272] Request: May 16 03:48:35.277402 update_engine[1467]: May 16 03:48:35.277402 update_engine[1467]: May 16 03:48:35.277402 update_engine[1467]: May 16 03:48:35.277402 update_engine[1467]: May 16 03:48:35.277402 update_engine[1467]: May 16 03:48:35.277402 update_engine[1467]: May 16 03:48:35.277402 update_engine[1467]: May 16 03:48:35.277402 update_engine[1467]: May 16 03:48:35.277402 update_engine[1467]: I20250516 03:48:35.277070 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 16 03:48:35.296746 update_engine[1467]: I20250516 03:48:35.294976 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 16 03:48:35.297049 locksmithd[1495]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 16 03:48:35.301281 update_engine[1467]: I20250516 03:48:35.301065 1467 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 16 03:48:35.308413 update_engine[1467]: E20250516 03:48:35.308224 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 16 03:48:35.309690 update_engine[1467]: I20250516 03:48:35.309628 1467 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 16 03:48:39.455974 kubelet[2681]: E0516 03:48:39.455879 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:48:43.094061 containerd[1487]: time="2025-05-16T03:48:43.093956110Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d\" id:\"d5e4a9502fa1436e919f71be66dabbd3b8f027de063cb92358dcde2e708e45db\" pid:4573 exited_at:{seconds:1747367323 nanos:93056542}" May 16 03:48:44.457012 kubelet[2681]: E0516 03:48:44.456940 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:48:45.170487 update_engine[1467]: I20250516 03:48:45.170401 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 16 03:48:45.170983 update_engine[1467]: I20250516 03:48:45.170692 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 16 03:48:45.171096 update_engine[1467]: I20250516 03:48:45.171068 1467 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
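The pod_startup_latency_tracker entry at 03:45:19 above reports two durations for calico-apiserver-54bddd6c9b-n5m5g that reconcile exactly: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (03:45:19.009884563 − 03:44:39 = 40.009884563 s), and podStartSLOduration is that figure minus the image-pull window measured on the monotonic clock (m=+49.924259077 to m=+55.230465097), consistent with the SLO metric excluding time spent pulling images. A quick check with values copied from that entry (the script itself is not part of the log):

    # Reconcile the kubelet pod-startup durations logged at 03:45:19.
    e2e        = 40.009884563      # podStartE2EDuration, seconds
    first_pull = 49.924259077      # firstStartedPulling, monotonic offset m=+...
    last_pull  = 55.230465097      # lastFinishedPulling, monotonic offset m=+...

    pull_time = last_pull - first_pull     # time spent pulling the apiserver image
    slo       = e2e - pull_time            # SLO duration excludes image pulling
    print(f"{pull_time:.9f} {slo:.9f}")    # -> 5.306206020 34.703678543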
May 16 03:48:45.176171 update_engine[1467]: E20250516 03:48:45.175886 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 16 03:48:45.176171 update_engine[1467]: I20250516 03:48:45.175957 1467 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 16 03:48:49.457219 kubelet[2681]: E0516 03:48:49.457124 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:48:54.458445 kubelet[2681]: E0516 03:48:54.457949 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:48:55.172468 update_engine[1467]: I20250516 03:48:55.171217 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 16 03:48:55.173156 update_engine[1467]: I20250516 03:48:55.172474 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 16 03:48:55.173658 update_engine[1467]: I20250516 03:48:55.173594 1467 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 16 03:48:55.178766 update_engine[1467]: E20250516 03:48:55.178701 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 16 03:48:55.178928 update_engine[1467]: I20250516 03:48:55.178907 1467 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 16 03:48:57.524428 systemd[1]: Started sshd@9-172.24.4.212:22-172.24.4.1:51790.service - OpenSSH per-connection server daemon (172.24.4.1:51790). May 16 03:48:59.057270 sshd[4591]: Accepted publickey for core from 172.24.4.1 port 51790 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:48:59.065079 sshd-session[4591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:48:59.083725 systemd-logind[1465]: New session 12 of user core. May 16 03:48:59.089897 systemd[1]: Started session-12.scope - Session 12 of User core. May 16 03:48:59.458708 kubelet[2681]: E0516 03:48:59.458546 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:48:59.931122 sshd[4595]: Connection closed by 172.24.4.1 port 51790 May 16 03:48:59.932863 sshd-session[4591]: pam_unix(sshd:session): session closed for user core May 16 03:48:59.944477 systemd[1]: sshd@9-172.24.4.212:22-172.24.4.1:51790.service: Deactivated successfully. May 16 03:48:59.951617 systemd[1]: session-12.scope: Deactivated successfully. May 16 03:48:59.953018 systemd-logind[1465]: Session 12 logged out. Waiting for processes to exit. May 16 03:48:59.957051 systemd-logind[1465]: Removed session 12. May 16 03:49:04.459222 kubelet[2681]: E0516 03:49:04.459126 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:49:04.951664 systemd[1]: Started sshd@10-172.24.4.212:22-172.24.4.1:55656.service - OpenSSH per-connection server daemon (172.24.4.1:55656). May 16 03:49:05.168454 update_engine[1467]: I20250516 03:49:05.167012 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 16 03:49:05.168454 update_engine[1467]: I20250516 03:49:05.167956 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 16 03:49:05.170426 update_engine[1467]: I20250516 03:49:05.170086 1467 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 16 03:49:05.175514 update_engine[1467]: E20250516 03:49:05.175426 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 16 03:49:05.176820 update_engine[1467]: I20250516 03:49:05.175881 1467 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 16 03:49:05.176820 update_engine[1467]: I20250516 03:49:05.175912 1467 omaha_request_action.cc:617] Omaha request response: May 16 03:49:05.176820 update_engine[1467]: E20250516 03:49:05.176237 1467 omaha_request_action.cc:636] Omaha request network transfer failed. May 16 03:49:05.178364 update_engine[1467]: I20250516 03:49:05.177299 1467 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 16 03:49:05.178364 update_engine[1467]: I20250516 03:49:05.177332 1467 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 16 03:49:05.178364 update_engine[1467]: I20250516 03:49:05.177379 1467 update_attempter.cc:306] Processing Done. May 16 03:49:05.178364 update_engine[1467]: E20250516 03:49:05.177464 1467 update_attempter.cc:619] Update failed. May 16 03:49:05.178364 update_engine[1467]: I20250516 03:49:05.177484 1467 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 16 03:49:05.178364 update_engine[1467]: I20250516 03:49:05.177493 1467 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 16 03:49:05.178364 update_engine[1467]: I20250516 03:49:05.177510 1467 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. May 16 03:49:05.178364 update_engine[1467]: I20250516 03:49:05.177719 1467 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 16 03:49:05.178364 update_engine[1467]: I20250516 03:49:05.177832 1467 omaha_request_action.cc:271] Posting an Omaha request to disabled May 16 03:49:05.178364 update_engine[1467]: I20250516 03:49:05.177842 1467 omaha_request_action.cc:272] Request: May 16 03:49:05.178364 update_engine[1467]: May 16 03:49:05.178364 update_engine[1467]: May 16 03:49:05.178364 update_engine[1467]: May 16 03:49:05.178364 update_engine[1467]: May 16 03:49:05.178364 update_engine[1467]: May 16 03:49:05.178364 update_engine[1467]: May 16 03:49:05.178364 update_engine[1467]: I20250516 03:49:05.177850 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 16 03:49:05.178364 update_engine[1467]: I20250516 03:49:05.178030 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 16 03:49:05.179084 update_engine[1467]: I20250516 03:49:05.178334 1467 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 16 03:49:05.180615 locksmithd[1495]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 16 03:49:05.183414 update_engine[1467]: E20250516 03:49:05.183387 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 16 03:49:05.183564 update_engine[1467]: I20250516 03:49:05.183530 1467 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 16 03:49:05.183825 update_engine[1467]: I20250516 03:49:05.183616 1467 omaha_request_action.cc:617] Omaha request response: May 16 03:49:05.183825 update_engine[1467]: I20250516 03:49:05.183643 1467 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 16 03:49:05.183825 update_engine[1467]: I20250516 03:49:05.183650 1467 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 16 03:49:05.183825 update_engine[1467]: I20250516 03:49:05.183656 1467 update_attempter.cc:306] Processing Done. May 16 03:49:05.183825 update_engine[1467]: I20250516 03:49:05.183663 1467 update_attempter.cc:310] Error event sent. May 16 03:49:05.183825 update_engine[1467]: I20250516 03:49:05.183681 1467 update_check_scheduler.cc:74] Next update check in 42m39s May 16 03:49:05.184372 locksmithd[1495]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 16 03:49:06.251972 sshd[4612]: Accepted publickey for core from 172.24.4.1 port 55656 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:49:06.256563 sshd-session[4612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:49:06.273041 systemd-logind[1465]: New session 13 of user core. May 16 03:49:06.279544 systemd[1]: Started session-13.scope - Session 13 of User core. May 16 03:49:07.081283 sshd[4614]: Connection closed by 172.24.4.1 port 55656 May 16 03:49:07.081157 sshd-session[4612]: pam_unix(sshd:session): session closed for user core May 16 03:49:07.086280 systemd[1]: sshd@10-172.24.4.212:22-172.24.4.1:55656.service: Deactivated successfully. May 16 03:49:07.091185 systemd[1]: session-13.scope: Deactivated successfully. May 16 03:49:07.100741 systemd-logind[1465]: Session 13 logged out. Waiting for processes to exit. May 16 03:49:07.103764 systemd-logind[1465]: Removed session 13. May 16 03:49:09.460130 kubelet[2681]: E0516 03:49:09.460046 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:49:12.102814 systemd[1]: Started sshd@11-172.24.4.212:22-172.24.4.1:55662.service - OpenSSH per-connection server daemon (172.24.4.1:55662). May 16 03:49:13.121691 containerd[1487]: time="2025-05-16T03:49:13.117645518Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d\" id:\"7ea3a408ce55d748a7d7346184ab754f831bb207870eb800c43064180b1f86ae\" pid:4643 exited_at:{seconds:1747367353 nanos:115495296}" May 16 03:49:13.333613 sshd[4628]: Accepted publickey for core from 172.24.4.1 port 55662 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:49:13.379307 sshd-session[4628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:49:13.388175 systemd-logind[1465]: New session 14 of user core. May 16 03:49:13.393520 systemd[1]: Started session-14.scope - Session 14 of User core. 
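The update_engine errors above are expected rather than a sign of a broken network: the Omaha request is posted to the literal hostname "disabled" ("Posting an Omaha request to disabled"), which is what Flatcar typically ends up with when update checks are turned off (SERVER=disabled in /etc/flatcar/update.conf), so libcurl's "Could not resolve host: disabled" and the final kActionCodeOmahaErrorInHTTPResponse are the intended dead end. A trivial sketch, assuming an ordinary resolver and not taken from the log, showing the same name-resolution failure:

    # "disabled" is not a resolvable hostname, so every Omaha transfer dies at DNS,
    # exactly as libcurl_http_fetcher reports in the entries above.
    import socket

    try:
        socket.getaddrinfo("disabled", 443)
    except socket.gaierror as err:
        print("cannot resolve 'disabled':", err)   # mirrors "Could not resolve host: disabled"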
May 16 03:49:13.568527 kubelet[2681]: E0516 03:49:13.568029 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 16 03:49:13.570247 kubelet[2681]: E0516 03:49:13.569713 2681 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-apiserver/calico-apiserver-54bddd6c9b-hrzzl" May 16 03:49:13.570247 kubelet[2681]: E0516 03:49:13.569916 2681 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-apiserver/calico-apiserver-54bddd6c9b-hrzzl" May 16 03:49:13.571712 kubelet[2681]: E0516 03:49:13.570714 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54bddd6c9b-hrzzl_calico-apiserver(f34dc699-7d69-4b18-87db-f8c213550070)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54bddd6c9b-hrzzl_calico-apiserver(f34dc699-7d69-4b18-87db-f8c213550070)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="calico-apiserver/calico-apiserver-54bddd6c9b-hrzzl" podUID="f34dc699-7d69-4b18-87db-f8c213550070" May 16 03:49:13.906277 containerd[1487]: time="2025-05-16T03:49:13.906216459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bddd6c9b-hrzzl,Uid:f34dc699-7d69-4b18-87db-f8c213550070,Namespace:calico-apiserver,Attempt:0,}" May 16 03:49:13.907854 containerd[1487]: time="2025-05-16T03:49:13.907421418Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bddd6c9b-hrzzl,Uid:f34dc699-7d69-4b18-87db-f8c213550070,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to reserve sandbox name \"calico-apiserver-54bddd6c9b-hrzzl_calico-apiserver_f34dc699-7d69-4b18-87db-f8c213550070_0\": name \"calico-apiserver-54bddd6c9b-hrzzl_calico-apiserver_f34dc699-7d69-4b18-87db-f8c213550070_0\" is reserved for \"a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc\"" May 16 03:49:13.908046 kubelet[2681]: E0516 03:49:13.907883 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"calico-apiserver-54bddd6c9b-hrzzl_calico-apiserver_f34dc699-7d69-4b18-87db-f8c213550070_0\": name \"calico-apiserver-54bddd6c9b-hrzzl_calico-apiserver_f34dc699-7d69-4b18-87db-f8c213550070_0\" is reserved for \"a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc\"" May 16 03:49:13.908046 kubelet[2681]: E0516 03:49:13.907974 2681 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"calico-apiserver-54bddd6c9b-hrzzl_calico-apiserver_f34dc699-7d69-4b18-87db-f8c213550070_0\": name \"calico-apiserver-54bddd6c9b-hrzzl_calico-apiserver_f34dc699-7d69-4b18-87db-f8c213550070_0\" is reserved for \"a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc\"" pod="calico-apiserver/calico-apiserver-54bddd6c9b-hrzzl" May 16 03:49:13.908046 kubelet[2681]: E0516 03:49:13.908013 2681 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"calico-apiserver-54bddd6c9b-hrzzl_calico-apiserver_f34dc699-7d69-4b18-87db-f8c213550070_0\": name 
\"calico-apiserver-54bddd6c9b-hrzzl_calico-apiserver_f34dc699-7d69-4b18-87db-f8c213550070_0\" is reserved for \"a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc\"" pod="calico-apiserver/calico-apiserver-54bddd6c9b-hrzzl" May 16 03:49:13.908201 kubelet[2681]: E0516 03:49:13.908099 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54bddd6c9b-hrzzl_calico-apiserver(f34dc699-7d69-4b18-87db-f8c213550070)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54bddd6c9b-hrzzl_calico-apiserver(f34dc699-7d69-4b18-87db-f8c213550070)\\\": rpc error: code = Unknown desc = failed to reserve sandbox name \\\"calico-apiserver-54bddd6c9b-hrzzl_calico-apiserver_f34dc699-7d69-4b18-87db-f8c213550070_0\\\": name \\\"calico-apiserver-54bddd6c9b-hrzzl_calico-apiserver_f34dc699-7d69-4b18-87db-f8c213550070_0\\\" is reserved for \\\"a06511bb19c035b8b462a14363a6163793a3404be044174098dcf779d7f9c7dc\\\"\"" pod="calico-apiserver/calico-apiserver-54bddd6c9b-hrzzl" podUID="f34dc699-7d69-4b18-87db-f8c213550070" May 16 03:49:14.353420 sshd[4654]: Connection closed by 172.24.4.1 port 55662 May 16 03:49:14.354975 sshd-session[4628]: pam_unix(sshd:session): session closed for user core May 16 03:49:14.360684 systemd-logind[1465]: Session 14 logged out. Waiting for processes to exit. May 16 03:49:14.361982 systemd[1]: sshd@11-172.24.4.212:22-172.24.4.1:55662.service: Deactivated successfully. May 16 03:49:14.366093 systemd[1]: session-14.scope: Deactivated successfully. May 16 03:49:14.369490 systemd-logind[1465]: Removed session 14. May 16 03:49:14.461494 kubelet[2681]: E0516 03:49:14.460771 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:49:14.575926 kubelet[2681]: E0516 03:49:14.575838 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 16 03:49:14.576364 kubelet[2681]: E0516 03:49:14.576034 2681 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-system/goldmane-78d55f7ddc-2l87g" May 16 03:49:14.576517 kubelet[2681]: E0516 03:49:14.576478 2681 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-system/goldmane-78d55f7ddc-2l87g" May 16 03:49:14.578016 kubelet[2681]: E0516 03:49:14.576623 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-2l87g_calico-system(c45bc411-b957-4b16-8a3d-04432d83d3b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-2l87g_calico-system(c45bc411-b957-4b16-8a3d-04432d83d3b5)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="calico-system/goldmane-78d55f7ddc-2l87g" podUID="c45bc411-b957-4b16-8a3d-04432d83d3b5" May 16 03:49:14.579389 kubelet[2681]: E0516 03:49:14.579116 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 16 03:49:14.579389 kubelet[2681]: E0516 03:49:14.579183 2681 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-system/calico-kube-controllers-84596d4c7-k2qxb" May 16 03:49:14.579389 kubelet[2681]: E0516 03:49:14.579205 2681 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-system/calico-kube-controllers-84596d4c7-k2qxb" May 16 03:49:14.579389 kubelet[2681]: E0516 03:49:14.579251 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84596d4c7-k2qxb_calico-system(67ef62ce-b02f-4a5b-bac2-8712ad130e81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84596d4c7-k2qxb_calico-system(67ef62ce-b02f-4a5b-bac2-8712ad130e81)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="calico-system/calico-kube-controllers-84596d4c7-k2qxb" podUID="67ef62ce-b02f-4a5b-bac2-8712ad130e81" May 16 03:49:15.569219 kubelet[2681]: E0516 03:49:15.568723 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 16 03:49:15.569219 kubelet[2681]: E0516 03:49:15.569132 2681 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-674b8bbfcf-9nrc7" May 16 03:49:15.569219 kubelet[2681]: E0516 03:49:15.569234 2681 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-674b8bbfcf-9nrc7" May 16 03:49:15.574562 kubelet[2681]: E0516 03:49:15.569743 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-9nrc7_kube-system(d01d1078-b043-4276-a3e9-cede61f0e64b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-9nrc7_kube-system(d01d1078-b043-4276-a3e9-cede61f0e64b)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/coredns-674b8bbfcf-9nrc7" podUID="d01d1078-b043-4276-a3e9-cede61f0e64b" May 16 03:49:16.169383 containerd[1487]: time="2025-05-16T03:49:16.168881156Z" level=warning msg="container event discarded" container=de7395b3eff1594f1af5124720a3afc777223f601fce41c033f7bc537b9f2543 type=CONTAINER_CREATED_EVENT May 16 03:49:16.169383 containerd[1487]: time="2025-05-16T03:49:16.169330589Z" level=warning msg="container event discarded" container=de7395b3eff1594f1af5124720a3afc777223f601fce41c033f7bc537b9f2543 type=CONTAINER_STARTED_EVENT May 16 03:49:16.204626 containerd[1487]: time="2025-05-16T03:49:16.204464653Z" level=warning msg="container event discarded" container=2a68f8639e173c9db324f49d8fdaa9ac5c7bd6f1909ea7491b5c99ef96ad63bc type=CONTAINER_CREATED_EVENT May 16 03:49:16.204626 containerd[1487]: time="2025-05-16T03:49:16.204543070Z" level=warning msg="container event discarded" container=2a68f8639e173c9db324f49d8fdaa9ac5c7bd6f1909ea7491b5c99ef96ad63bc type=CONTAINER_STARTED_EVENT May 16 03:49:16.217816 containerd[1487]: time="2025-05-16T03:49:16.217709372Z" level=warning msg="container event discarded" container=891ba4ce480f7509e58fca1cbe268116d5fc572e6d74c28740cf1e0b25f2a270 type=CONTAINER_CREATED_EVENT May 16 03:49:16.217816 containerd[1487]: time="2025-05-16T03:49:16.217773964Z" level=warning msg="container event discarded" container=a9e6df64601ab4384966d769d69b4bbfd7e40edb42eb7f193672428126fe4989 type=CONTAINER_CREATED_EVENT May 16 03:49:16.217816 containerd[1487]: time="2025-05-16T03:49:16.217789733Z" level=warning msg="container event discarded" 
container=a9e6df64601ab4384966d769d69b4bbfd7e40edb42eb7f193672428126fe4989 type=CONTAINER_STARTED_EVENT May 16 03:49:16.256180 containerd[1487]: time="2025-05-16T03:49:16.256063414Z" level=warning msg="container event discarded" container=1461e5179b8c2f3d7869b8491024e76119cba6b850737425f88abb1266d0c40f type=CONTAINER_CREATED_EVENT May 16 03:49:16.256180 containerd[1487]: time="2025-05-16T03:49:16.256126592Z" level=warning msg="container event discarded" container=30ea7bbc4ddd568f7a6d46208abb27805dfb65475ee595247acd4fcea491835d type=CONTAINER_CREATED_EVENT May 16 03:49:16.336532 containerd[1487]: time="2025-05-16T03:49:16.336464718Z" level=warning msg="container event discarded" container=891ba4ce480f7509e58fca1cbe268116d5fc572e6d74c28740cf1e0b25f2a270 type=CONTAINER_STARTED_EVENT May 16 03:49:16.406968 containerd[1487]: time="2025-05-16T03:49:16.406804100Z" level=warning msg="container event discarded" container=30ea7bbc4ddd568f7a6d46208abb27805dfb65475ee595247acd4fcea491835d type=CONTAINER_STARTED_EVENT May 16 03:49:16.406968 containerd[1487]: time="2025-05-16T03:49:16.406900380Z" level=warning msg="container event discarded" container=1461e5179b8c2f3d7869b8491024e76119cba6b850737425f88abb1266d0c40f type=CONTAINER_STARTED_EVENT May 16 03:49:19.378823 systemd[1]: Started sshd@12-172.24.4.212:22-172.24.4.1:41370.service - OpenSSH per-connection server daemon (172.24.4.1:41370). May 16 03:49:19.461771 kubelet[2681]: E0516 03:49:19.461580 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:49:20.588416 sshd[4667]: Accepted publickey for core from 172.24.4.1 port 41370 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:49:20.589663 sshd-session[4667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:49:20.600691 systemd-logind[1465]: New session 15 of user core. May 16 03:49:20.609998 systemd[1]: Started session-15.scope - Session 15 of User core. May 16 03:49:21.382304 sshd[4669]: Connection closed by 172.24.4.1 port 41370 May 16 03:49:21.383576 sshd-session[4667]: pam_unix(sshd:session): session closed for user core May 16 03:49:21.392418 systemd[1]: sshd@12-172.24.4.212:22-172.24.4.1:41370.service: Deactivated successfully. May 16 03:49:21.398080 systemd[1]: session-15.scope: Deactivated successfully. May 16 03:49:21.402232 systemd-logind[1465]: Session 15 logged out. Waiting for processes to exit. May 16 03:49:21.404821 systemd-logind[1465]: Removed session 15. 
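The TaskExit events for container 3e630edace2659f2… keep arriving every 30 seconds throughout the period in which kubelet reports the runtime as down, a cadence consistent with a periodic exec probe still being serviced by containerd. Their exited_at fields are plain Unix epoch seconds and line up with the journal timestamps, which this short conversion (not part of the log) confirms:

    # Convert the exited_at values from the TaskExit events above to UTC wall-clock time.
    from datetime import datetime, timezone

    exited_at = (1747367143, 1747367173, 1747367203, 1747367233,
                 1747367263, 1747367293, 1747367323, 1747367353)
    for sec in exited_at:
        print(sec, datetime.fromtimestamp(sec, tz=timezone.utc).isoformat())
        # -> 2025-05-16T03:45:43+00:00, then one event every 30 s up to 03:49:13+00:00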
May 16 03:49:22.676163 kubelet[2681]: E0516 03:49:22.675989 2681 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 16 03:49:22.676163 kubelet[2681]: E0516 03:49:22.676103 2681 kubelet.go:3102] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 16 03:49:23.574821 kubelet[2681]: E0516 03:49:23.573367 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 16 03:49:23.574821 kubelet[2681]: E0516 03:49:23.573463 2681 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-674b8bbfcf-pd7wp" May 16 03:49:23.574821 kubelet[2681]: E0516 03:49:23.573513 2681 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-674b8bbfcf-pd7wp" May 16 03:49:23.574821 kubelet[2681]: E0516 03:49:23.573677 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pd7wp_kube-system(ede9dc66-0a0b-4839-a3fa-93460a933576)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pd7wp_kube-system(ede9dc66-0a0b-4839-a3fa-93460a933576)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/coredns-674b8bbfcf-pd7wp" podUID="ede9dc66-0a0b-4839-a3fa-93460a933576" May 16 03:49:24.462315 kubelet[2681]: E0516 03:49:24.462258 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:49:26.403679 systemd[1]: Started sshd@13-172.24.4.212:22-172.24.4.1:54516.service - OpenSSH per-connection server daemon (172.24.4.1:54516). May 16 03:49:26.573231 kubelet[2681]: E0516 03:49:26.573169 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 16 03:49:26.573746 kubelet[2681]: E0516 03:49:26.573254 2681 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-system/csi-node-driver-nkgqf" May 16 03:49:26.573746 kubelet[2681]: E0516 03:49:26.573285 2681 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-system/csi-node-driver-nkgqf" May 16 03:49:26.573746 kubelet[2681]: E0516 03:49:26.573359 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nkgqf_calico-system(2d89f57b-12d9-441c-854f-90be519acbd7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nkgqf_calico-system(2d89f57b-12d9-441c-854f-90be519acbd7)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="calico-system/csi-node-driver-nkgqf" podUID="2d89f57b-12d9-441c-854f-90be519acbd7" May 16 03:49:27.688518 sshd[4684]: Accepted publickey for core from 172.24.4.1 port 54516 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:49:27.694060 sshd-session[4684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:49:27.707492 systemd-logind[1465]: New session 16 of user core. 
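The CreatePodSandbox failures in this stretch pair up with the RunPodSandbox requests issued back around 03:45: each DeadlineExceeded arrives just under four minutes after the corresponding request, and the immediate retry for calico-apiserver-54bddd6c9b-hrzzl then fails with "name ... is reserved for ...", which suggests the original create was still in flight inside containerd when kubelet gave up on it. The four-minute figure below is read straight off the log timestamps, not taken from kubelet configuration; the script is a checking aid, not part of the log:

    # Match RunPodSandbox requests (containerd) with their DeadlineExceeded errors (kubelet).
    from datetime import datetime, timezone

    def ts(clock):  # times truncated to microseconds, all on 2025-05-16 UTC
        return datetime.strptime("2025-05-16 " + clock,
                                 "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

    pairs = {
        "coredns-674b8bbfcf-9nrc7": ("03:45:15.569641", "03:49:15.568723"),
        "coredns-674b8bbfcf-pd7wp": ("03:45:23.574565", "03:49:23.573367"),
        "csi-node-driver-nkgqf":    ("03:45:26.575757", "03:49:26.573169"),
    }
    for pod, (requested, failed) in pairs.items():
        # each prints just under 240 s, i.e. a four-minute deadline
        print(pod, round((ts(failed) - ts(requested)).total_seconds(), 3))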
May 16 03:49:27.714665 systemd[1]: Started session-16.scope - Session 16 of User core. May 16 03:49:28.524628 sshd[4691]: Connection closed by 172.24.4.1 port 54516 May 16 03:49:28.526662 sshd-session[4684]: pam_unix(sshd:session): session closed for user core May 16 03:49:28.532621 systemd-logind[1465]: Session 16 logged out. Waiting for processes to exit. May 16 03:49:28.534505 systemd[1]: sshd@13-172.24.4.212:22-172.24.4.1:54516.service: Deactivated successfully. May 16 03:49:28.542126 systemd[1]: session-16.scope: Deactivated successfully. May 16 03:49:28.547585 systemd-logind[1465]: Removed session 16. May 16 03:49:28.785597 containerd[1487]: time="2025-05-16T03:49:28.785012151Z" level=warning msg="container event discarded" container=5d216c2f3da8553b3f7d67c6db2d0801d2ed9a1de35f750eb119bba6110cf08b type=CONTAINER_CREATED_EVENT May 16 03:49:28.785597 containerd[1487]: time="2025-05-16T03:49:28.785086311Z" level=warning msg="container event discarded" container=5d216c2f3da8553b3f7d67c6db2d0801d2ed9a1de35f750eb119bba6110cf08b type=CONTAINER_STARTED_EVENT May 16 03:49:28.823565 containerd[1487]: time="2025-05-16T03:49:28.823415482Z" level=warning msg="container event discarded" container=2258417bca2adab894bd83902794bd57ccbfdc4e088bff42ecc8487e4c429066 type=CONTAINER_CREATED_EVENT May 16 03:49:28.922811 containerd[1487]: time="2025-05-16T03:49:28.922702258Z" level=warning msg="container event discarded" container=2258417bca2adab894bd83902794bd57ccbfdc4e088bff42ecc8487e4c429066 type=CONTAINER_STARTED_EVENT May 16 03:49:28.991172 containerd[1487]: time="2025-05-16T03:49:28.991065886Z" level=warning msg="container event discarded" container=ec7d3fabb546021eb28f2f1fc106aa6a044e40029e7cf4076eff11f94c182740 type=CONTAINER_CREATED_EVENT May 16 03:49:28.991172 containerd[1487]: time="2025-05-16T03:49:28.991133212Z" level=warning msg="container event discarded" container=ec7d3fabb546021eb28f2f1fc106aa6a044e40029e7cf4076eff11f94c182740 type=CONTAINER_STARTED_EVENT May 16 03:49:29.462581 kubelet[2681]: E0516 03:49:29.462524 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:49:31.699201 containerd[1487]: time="2025-05-16T03:49:31.698933634Z" level=warning msg="container event discarded" container=fcf9dd27accd17bb045bbdce2a0e291c5576ce8f83fad4f452a19a9551bda4a6 type=CONTAINER_CREATED_EVENT May 16 03:49:31.770188 containerd[1487]: time="2025-05-16T03:49:31.770117341Z" level=warning msg="container event discarded" container=fcf9dd27accd17bb045bbdce2a0e291c5576ce8f83fad4f452a19a9551bda4a6 type=CONTAINER_STARTED_EVENT May 16 03:49:33.550448 systemd[1]: Started sshd@14-172.24.4.212:22-172.24.4.1:59168.service - OpenSSH per-connection server daemon (172.24.4.1:59168). May 16 03:49:34.463552 kubelet[2681]: E0516 03:49:34.463486 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:49:35.053368 sshd[4706]: Accepted publickey for core from 172.24.4.1 port 59168 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:49:35.053241 sshd-session[4706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:49:35.067089 systemd-logind[1465]: New session 17 of user core. May 16 03:49:35.075559 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 16 03:49:35.787485 sshd[4708]: Connection closed by 172.24.4.1 port 59168 May 16 03:49:35.792280 sshd-session[4706]: pam_unix(sshd:session): session closed for user core May 16 03:49:35.800298 systemd[1]: sshd@14-172.24.4.212:22-172.24.4.1:59168.service: Deactivated successfully. May 16 03:49:35.803902 systemd[1]: session-17.scope: Deactivated successfully. May 16 03:49:35.805593 systemd-logind[1465]: Session 17 logged out. Waiting for processes to exit. May 16 03:49:35.807949 systemd-logind[1465]: Removed session 17. May 16 03:49:39.464773 kubelet[2681]: E0516 03:49:39.464609 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:49:40.805753 systemd[1]: Started sshd@15-172.24.4.212:22-172.24.4.1:59176.service - OpenSSH per-connection server daemon (172.24.4.1:59176). May 16 03:49:42.047591 sshd[4721]: Accepted publickey for core from 172.24.4.1 port 59176 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:49:42.053987 sshd-session[4721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:49:42.072953 systemd-logind[1465]: New session 18 of user core. May 16 03:49:42.081753 systemd[1]: Started session-18.scope - Session 18 of User core. May 16 03:49:42.787294 sshd[4723]: Connection closed by 172.24.4.1 port 59176 May 16 03:49:42.788868 sshd-session[4721]: pam_unix(sshd:session): session closed for user core May 16 03:49:42.792522 systemd[1]: sshd@15-172.24.4.212:22-172.24.4.1:59176.service: Deactivated successfully. May 16 03:49:42.795367 systemd[1]: session-18.scope: Deactivated successfully. May 16 03:49:42.796455 systemd-logind[1465]: Session 18 logged out. Waiting for processes to exit. May 16 03:49:42.800430 systemd-logind[1465]: Removed session 18. 
May 16 03:49:43.135538 containerd[1487]: time="2025-05-16T03:49:43.132924223Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d\" id:\"5f74dca5a156a31b3781a58eefb3b8cc05318cf87a9d20c4ce14a7ec5e931d90\" pid:4747 exited_at:{seconds:1747367383 nanos:122856065}" May 16 03:49:43.753411 containerd[1487]: time="2025-05-16T03:49:43.752903964Z" level=warning msg="container event discarded" container=ebdce10fa1eb10c3b73b14f12d7accc81fe177196be1fc88d83590e80e34383f type=CONTAINER_CREATED_EVENT May 16 03:49:43.753411 containerd[1487]: time="2025-05-16T03:49:43.753233122Z" level=warning msg="container event discarded" container=ebdce10fa1eb10c3b73b14f12d7accc81fe177196be1fc88d83590e80e34383f type=CONTAINER_STARTED_EVENT May 16 03:49:44.031999 containerd[1487]: time="2025-05-16T03:49:44.031851075Z" level=warning msg="container event discarded" container=cc0aa20a8a3b98d1d99075009a985ea612a8309ff328392630339074d7980579 type=CONTAINER_CREATED_EVENT May 16 03:49:44.031999 containerd[1487]: time="2025-05-16T03:49:44.031937107Z" level=warning msg="container event discarded" container=cc0aa20a8a3b98d1d99075009a985ea612a8309ff328392630339074d7980579 type=CONTAINER_STARTED_EVENT May 16 03:49:44.465939 kubelet[2681]: E0516 03:49:44.465851 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:49:47.648661 containerd[1487]: time="2025-05-16T03:49:47.648488310Z" level=warning msg="container event discarded" container=3cdf202b44c55c8abef0d569d106d36910d28689d4d19facff59b8f75fc69079 type=CONTAINER_CREATED_EVENT May 16 03:49:47.769067 containerd[1487]: time="2025-05-16T03:49:47.768990203Z" level=warning msg="container event discarded" container=3cdf202b44c55c8abef0d569d106d36910d28689d4d19facff59b8f75fc69079 type=CONTAINER_STARTED_EVENT May 16 03:49:47.808323 systemd[1]: Started sshd@16-172.24.4.212:22-172.24.4.1:38166.service - OpenSSH per-connection server daemon (172.24.4.1:38166). May 16 03:49:48.987015 sshd[4760]: Accepted publickey for core from 172.24.4.1 port 38166 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:49:48.992274 sshd-session[4760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:49:49.003446 systemd-logind[1465]: New session 19 of user core. May 16 03:49:49.009531 systemd[1]: Started session-19.scope - Session 19 of User core. May 16 03:49:49.466904 kubelet[2681]: E0516 03:49:49.466819 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:49:49.677368 sshd[4762]: Connection closed by 172.24.4.1 port 38166 May 16 03:49:49.680625 sshd-session[4760]: pam_unix(sshd:session): session closed for user core May 16 03:49:49.684805 systemd[1]: sshd@16-172.24.4.212:22-172.24.4.1:38166.service: Deactivated successfully. May 16 03:49:49.688482 systemd[1]: session-19.scope: Deactivated successfully. May 16 03:49:49.693449 systemd-logind[1465]: Session 19 logged out. Waiting for processes to exit. May 16 03:49:49.695816 systemd-logind[1465]: Removed session 19. 
May 16 03:49:49.751669 containerd[1487]: time="2025-05-16T03:49:49.751241672Z" level=warning msg="container event discarded" container=b60fe89180f683924502a3994efa8e22436472ff7c316266e1e4aa78118c8dbc type=CONTAINER_CREATED_EVENT May 16 03:49:49.853159 containerd[1487]: time="2025-05-16T03:49:49.853088704Z" level=warning msg="container event discarded" container=b60fe89180f683924502a3994efa8e22436472ff7c316266e1e4aa78118c8dbc type=CONTAINER_STARTED_EVENT May 16 03:49:50.803723 containerd[1487]: time="2025-05-16T03:49:50.803635095Z" level=warning msg="container event discarded" container=b60fe89180f683924502a3994efa8e22436472ff7c316266e1e4aa78118c8dbc type=CONTAINER_STOPPED_EVENT May 16 03:49:54.467412 kubelet[2681]: E0516 03:49:54.467363 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:49:54.689951 systemd[1]: Started sshd@17-172.24.4.212:22-172.24.4.1:37364.service - OpenSSH per-connection server daemon (172.24.4.1:37364). May 16 03:49:55.962857 sshd[4775]: Accepted publickey for core from 172.24.4.1 port 37364 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:49:55.965405 sshd-session[4775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:49:55.975259 systemd-logind[1465]: New session 20 of user core. May 16 03:49:55.984856 systemd[1]: Started session-20.scope - Session 20 of User core. May 16 03:49:56.837005 sshd[4777]: Connection closed by 172.24.4.1 port 37364 May 16 03:49:56.837625 sshd-session[4775]: pam_unix(sshd:session): session closed for user core May 16 03:49:56.843013 systemd[1]: sshd@17-172.24.4.212:22-172.24.4.1:37364.service: Deactivated successfully. May 16 03:49:56.846072 systemd[1]: session-20.scope: Deactivated successfully. May 16 03:49:56.847330 systemd-logind[1465]: Session 20 logged out. Waiting for processes to exit. May 16 03:49:56.848899 systemd-logind[1465]: Removed session 20. May 16 03:49:57.032684 containerd[1487]: time="2025-05-16T03:49:57.032578674Z" level=warning msg="container event discarded" container=36e8ceebee4a935e4e912c841bdd8530996e1ba301241d5a3ee66615a4b57946 type=CONTAINER_CREATED_EVENT May 16 03:49:57.137699 containerd[1487]: time="2025-05-16T03:49:57.137486627Z" level=warning msg="container event discarded" container=36e8ceebee4a935e4e912c841bdd8530996e1ba301241d5a3ee66615a4b57946 type=CONTAINER_STARTED_EVENT May 16 03:49:59.460997 containerd[1487]: time="2025-05-16T03:49:59.460913399Z" level=warning msg="container event discarded" container=36e8ceebee4a935e4e912c841bdd8530996e1ba301241d5a3ee66615a4b57946 type=CONTAINER_STOPPED_EVENT May 16 03:49:59.467834 kubelet[2681]: E0516 03:49:59.467783 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:50:01.864858 systemd[1]: Started sshd@18-172.24.4.212:22-172.24.4.1:37368.service - OpenSSH per-connection server daemon (172.24.4.1:37368). May 16 03:50:03.255957 sshd[4799]: Accepted publickey for core from 172.24.4.1 port 37368 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:50:03.259097 sshd-session[4799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:50:03.269323 systemd-logind[1465]: New session 21 of user core. May 16 03:50:03.274552 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 16 03:50:04.017986 sshd[4801]: Connection closed by 172.24.4.1 port 37368 May 16 03:50:04.018711 sshd-session[4799]: pam_unix(sshd:session): session closed for user core May 16 03:50:04.023674 systemd[1]: sshd@18-172.24.4.212:22-172.24.4.1:37368.service: Deactivated successfully. May 16 03:50:04.027628 systemd[1]: session-21.scope: Deactivated successfully. May 16 03:50:04.029607 systemd-logind[1465]: Session 21 logged out. Waiting for processes to exit. May 16 03:50:04.031698 systemd-logind[1465]: Removed session 21. May 16 03:50:04.468133 kubelet[2681]: E0516 03:50:04.468023 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:50:09.032222 systemd[1]: Started sshd@19-172.24.4.212:22-172.24.4.1:56862.service - OpenSSH per-connection server daemon (172.24.4.1:56862). May 16 03:50:09.468226 kubelet[2681]: E0516 03:50:09.468165 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:50:10.491378 sshd[4822]: Accepted publickey for core from 172.24.4.1 port 56862 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:50:10.493142 sshd-session[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:50:10.504267 systemd-logind[1465]: New session 22 of user core. May 16 03:50:10.507501 systemd[1]: Started session-22.scope - Session 22 of User core. May 16 03:50:11.090367 containerd[1487]: time="2025-05-16T03:50:11.089431569Z" level=warning msg="container event discarded" container=3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d type=CONTAINER_CREATED_EVENT May 16 03:50:11.220123 sshd[4824]: Connection closed by 172.24.4.1 port 56862 May 16 03:50:11.221668 sshd-session[4822]: pam_unix(sshd:session): session closed for user core May 16 03:50:11.230906 systemd[1]: sshd@19-172.24.4.212:22-172.24.4.1:56862.service: Deactivated successfully. May 16 03:50:11.238325 systemd[1]: session-22.scope: Deactivated successfully. May 16 03:50:11.241807 containerd[1487]: time="2025-05-16T03:50:11.241665693Z" level=warning msg="container event discarded" container=3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d type=CONTAINER_STARTED_EVENT May 16 03:50:11.242781 systemd-logind[1465]: Session 22 logged out. Waiting for processes to exit. May 16 03:50:11.246992 systemd-logind[1465]: Removed session 22. 
May 16 03:50:12.288143 containerd[1487]: time="2025-05-16T03:50:12.287866393Z" level=warning msg="container event discarded" container=2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777 type=CONTAINER_CREATED_EVENT May 16 03:50:12.288143 containerd[1487]: time="2025-05-16T03:50:12.288102345Z" level=warning msg="container event discarded" container=2f3e0c52a1fbae9a1d19caad112d5629a2af9d9a0cd3ab9fb928cd222bdf3777 type=CONTAINER_STARTED_EVENT May 16 03:50:13.076731 containerd[1487]: time="2025-05-16T03:50:13.076641499Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d\" id:\"65ed87aa383978c971008761eef259e01aefbab02dc4eb527088936ba89b3eb2\" pid:4849 exited_at:{seconds:1747367413 nanos:67753957}" May 16 03:50:14.070163 containerd[1487]: time="2025-05-16T03:50:14.070075415Z" level=warning msg="container event discarded" container=4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e type=CONTAINER_CREATED_EVENT May 16 03:50:14.070163 containerd[1487]: time="2025-05-16T03:50:14.070132012Z" level=warning msg="container event discarded" container=4a973e1fbcd24ffc9b87a983899be36782c36f39e3082c96c2b0ed4ec3891f6e type=CONTAINER_STARTED_EVENT May 16 03:50:14.468600 kubelet[2681]: E0516 03:50:14.468506 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:50:16.245965 systemd[1]: Started sshd@20-172.24.4.212:22-172.24.4.1:56312.service - OpenSSH per-connection server daemon (172.24.4.1:56312). May 16 03:50:17.664601 containerd[1487]: time="2025-05-16T03:50:17.664478082Z" level=warning msg="container event discarded" container=c90757f40f205229949c4b12d92c7cc617b8201ffb8ce325a24ecbd51737e2ab type=CONTAINER_CREATED_EVENT May 16 03:50:17.708089 sshd[4862]: Accepted publickey for core from 172.24.4.1 port 56312 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:50:17.711674 sshd-session[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:50:17.718259 systemd-logind[1465]: New session 23 of user core. May 16 03:50:17.722726 systemd[1]: Started session-23.scope - Session 23 of User core. May 16 03:50:17.777665 containerd[1487]: time="2025-05-16T03:50:17.777578257Z" level=warning msg="container event discarded" container=c90757f40f205229949c4b12d92c7cc617b8201ffb8ce325a24ecbd51737e2ab type=CONTAINER_STARTED_EVENT May 16 03:50:18.535532 sshd[4864]: Connection closed by 172.24.4.1 port 56312 May 16 03:50:18.536719 sshd-session[4862]: pam_unix(sshd:session): session closed for user core May 16 03:50:18.540667 systemd[1]: sshd@20-172.24.4.212:22-172.24.4.1:56312.service: Deactivated successfully. May 16 03:50:18.543143 systemd[1]: session-23.scope: Deactivated successfully. May 16 03:50:18.545988 systemd-logind[1465]: Session 23 logged out. Waiting for processes to exit. May 16 03:50:18.547597 systemd-logind[1465]: Removed session 23. May 16 03:50:19.469036 kubelet[2681]: E0516 03:50:19.468984 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:50:23.553692 systemd[1]: Started sshd@21-172.24.4.212:22-172.24.4.1:41148.service - OpenSSH per-connection server daemon (172.24.4.1:41148). 
May 16 03:50:24.470125 kubelet[2681]: E0516 03:50:24.470059 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:50:24.965188 sshd[4879]: Accepted publickey for core from 172.24.4.1 port 41148 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:50:24.966964 sshd-session[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:50:24.977639 systemd-logind[1465]: New session 24 of user core. May 16 03:50:24.986595 systemd[1]: Started session-24.scope - Session 24 of User core. May 16 03:50:25.874675 sshd[4881]: Connection closed by 172.24.4.1 port 41148 May 16 03:50:25.876556 sshd-session[4879]: pam_unix(sshd:session): session closed for user core May 16 03:50:25.880809 systemd[1]: sshd@21-172.24.4.212:22-172.24.4.1:41148.service: Deactivated successfully. May 16 03:50:25.891078 systemd[1]: session-24.scope: Deactivated successfully. May 16 03:50:25.899963 systemd-logind[1465]: Session 24 logged out. Waiting for processes to exit. May 16 03:50:25.904863 systemd-logind[1465]: Removed session 24. May 16 03:50:29.471140 kubelet[2681]: E0516 03:50:29.471078 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:50:30.891911 systemd[1]: Started sshd@22-172.24.4.212:22-172.24.4.1:41154.service - OpenSSH per-connection server daemon (172.24.4.1:41154). May 16 03:50:32.110370 sshd[4895]: Accepted publickey for core from 172.24.4.1 port 41154 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:50:32.111905 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:50:32.119755 systemd-logind[1465]: New session 25 of user core. May 16 03:50:32.126281 systemd[1]: Started session-25.scope - Session 25 of User core. May 16 03:50:32.996577 sshd[4897]: Connection closed by 172.24.4.1 port 41154 May 16 03:50:32.997410 sshd-session[4895]: pam_unix(sshd:session): session closed for user core May 16 03:50:33.000695 systemd-logind[1465]: Session 25 logged out. Waiting for processes to exit. May 16 03:50:33.003044 systemd[1]: sshd@22-172.24.4.212:22-172.24.4.1:41154.service: Deactivated successfully. May 16 03:50:33.006763 systemd[1]: session-25.scope: Deactivated successfully. May 16 03:50:33.009258 systemd-logind[1465]: Removed session 25. May 16 03:50:34.471917 kubelet[2681]: E0516 03:50:34.471808 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:50:38.044161 systemd[1]: Started sshd@23-172.24.4.212:22-172.24.4.1:38008.service - OpenSSH per-connection server daemon (172.24.4.1:38008). May 16 03:50:39.167785 sshd[4918]: Accepted publickey for core from 172.24.4.1 port 38008 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:50:39.169878 sshd-session[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:50:39.178618 systemd-logind[1465]: New session 26 of user core. May 16 03:50:39.184494 systemd[1]: Started session-26.scope - Session 26 of User core. May 16 03:50:39.473217 kubelet[2681]: E0516 03:50:39.472692 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:50:40.023885 sshd[4920]: Connection closed by 172.24.4.1 port 38008 May 16 03:50:40.024651 sshd-session[4918]: pam_unix(sshd:session): session closed for user core May 16 03:50:40.029120 systemd[1]: sshd@23-172.24.4.212:22-172.24.4.1:38008.service: Deactivated successfully. 
May 16 03:50:40.032674 systemd[1]: session-26.scope: Deactivated successfully. May 16 03:50:40.035601 systemd-logind[1465]: Session 26 logged out. Waiting for processes to exit. May 16 03:50:40.037107 systemd-logind[1465]: Removed session 26. May 16 03:50:43.180469 containerd[1487]: time="2025-05-16T03:50:43.180260951Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d\" id:\"99f46ccb4257f64e7a6e39285220429c37dc8848eecaf762a26052cafa4d7740\" pid:4945 exited_at:{seconds:1747367443 nanos:179203080}" May 16 03:50:44.474259 kubelet[2681]: E0516 03:50:44.474174 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:50:45.045059 systemd[1]: Started sshd@24-172.24.4.212:22-172.24.4.1:41608.service - OpenSSH per-connection server daemon (172.24.4.1:41608). May 16 03:50:46.167377 sshd[4959]: Accepted publickey for core from 172.24.4.1 port 41608 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:50:46.169654 sshd-session[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:50:46.180045 systemd-logind[1465]: New session 27 of user core. May 16 03:50:46.188782 systemd[1]: Started session-27.scope - Session 27 of User core. May 16 03:50:46.979701 sshd[4961]: Connection closed by 172.24.4.1 port 41608 May 16 03:50:46.979576 sshd-session[4959]: pam_unix(sshd:session): session closed for user core May 16 03:50:46.991106 systemd[1]: sshd@24-172.24.4.212:22-172.24.4.1:41608.service: Deactivated successfully. May 16 03:50:46.998331 systemd[1]: session-27.scope: Deactivated successfully. May 16 03:50:47.005878 systemd-logind[1465]: Session 27 logged out. Waiting for processes to exit. May 16 03:50:47.010823 systemd-logind[1465]: Removed session 27. May 16 03:50:49.474465 kubelet[2681]: E0516 03:50:49.474349 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:50:51.997557 systemd[1]: Started sshd@25-172.24.4.212:22-172.24.4.1:41620.service - OpenSSH per-connection server daemon (172.24.4.1:41620). May 16 03:50:53.140428 sshd[4974]: Accepted publickey for core from 172.24.4.1 port 41620 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:50:53.142455 sshd-session[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:50:53.157705 systemd-logind[1465]: New session 28 of user core. May 16 03:50:53.168181 systemd[1]: Started session-28.scope - Session 28 of User core. May 16 03:50:53.954128 sshd[4976]: Connection closed by 172.24.4.1 port 41620 May 16 03:50:53.955931 sshd-session[4974]: pam_unix(sshd:session): session closed for user core May 16 03:50:53.965933 systemd[1]: sshd@25-172.24.4.212:22-172.24.4.1:41620.service: Deactivated successfully. May 16 03:50:53.971409 systemd[1]: session-28.scope: Deactivated successfully. May 16 03:50:53.973756 systemd-logind[1465]: Session 28 logged out. Waiting for processes to exit. May 16 03:50:53.979525 systemd-logind[1465]: Removed session 28. May 16 03:50:54.474922 kubelet[2681]: E0516 03:50:54.474873 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:50:58.971381 systemd[1]: Started sshd@26-172.24.4.212:22-172.24.4.1:45774.service - OpenSSH per-connection server daemon (172.24.4.1:45774). 
May 16 03:50:59.475734 kubelet[2681]: E0516 03:50:59.475639 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:51:00.255591 sshd[4989]: Accepted publickey for core from 172.24.4.1 port 45774 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:51:00.257683 sshd-session[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:51:00.264457 systemd-logind[1465]: New session 29 of user core. May 16 03:51:00.272494 systemd[1]: Started session-29.scope - Session 29 of User core. May 16 03:51:01.123987 sshd[4993]: Connection closed by 172.24.4.1 port 45774 May 16 03:51:01.125491 sshd-session[4989]: pam_unix(sshd:session): session closed for user core May 16 03:51:01.131730 systemd-logind[1465]: Session 29 logged out. Waiting for processes to exit. May 16 03:51:01.132778 systemd[1]: sshd@26-172.24.4.212:22-172.24.4.1:45774.service: Deactivated successfully. May 16 03:51:01.136039 systemd[1]: session-29.scope: Deactivated successfully. May 16 03:51:01.137737 systemd-logind[1465]: Removed session 29. May 16 03:51:04.476707 kubelet[2681]: E0516 03:51:04.476601 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:51:06.140937 systemd[1]: Started sshd@27-172.24.4.212:22-172.24.4.1:34500.service - OpenSSH per-connection server daemon (172.24.4.1:34500). May 16 03:51:07.418114 sshd[5006]: Accepted publickey for core from 172.24.4.1 port 34500 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:51:07.420801 sshd-session[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:51:07.430053 systemd-logind[1465]: New session 30 of user core. May 16 03:51:07.437593 systemd[1]: Started session-30.scope - Session 30 of User core. May 16 03:51:08.100276 sshd[5008]: Connection closed by 172.24.4.1 port 34500 May 16 03:51:08.100908 sshd-session[5006]: pam_unix(sshd:session): session closed for user core May 16 03:51:08.105956 systemd-logind[1465]: Session 30 logged out. Waiting for processes to exit. May 16 03:51:08.106899 systemd[1]: sshd@27-172.24.4.212:22-172.24.4.1:34500.service: Deactivated successfully. May 16 03:51:08.109830 systemd[1]: session-30.scope: Deactivated successfully. May 16 03:51:08.111429 systemd-logind[1465]: Removed session 30. May 16 03:51:09.477579 kubelet[2681]: E0516 03:51:09.477522 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:51:13.021315 containerd[1487]: time="2025-05-16T03:51:13.021262066Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d\" id:\"a914eae7887707c45c1984bd6e4648a822e9f78cf641c84ef25e7e4c663e91c2\" pid:5033 exited_at:{seconds:1747367473 nanos:20678212}" May 16 03:51:13.115924 systemd[1]: Started sshd@28-172.24.4.212:22-172.24.4.1:34504.service - OpenSSH per-connection server daemon (172.24.4.1:34504). May 16 03:51:14.123694 sshd[5045]: Accepted publickey for core from 172.24.4.1 port 34504 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:51:14.125816 sshd-session[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:51:14.131175 systemd-logind[1465]: New session 31 of user core. May 16 03:51:14.137903 systemd[1]: Started session-31.scope - Session 31 of User core. 
May 16 03:51:14.477862 kubelet[2681]: E0516 03:51:14.477812 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:51:14.847457 sshd[5047]: Connection closed by 172.24.4.1 port 34504 May 16 03:51:14.851199 sshd-session[5045]: pam_unix(sshd:session): session closed for user core May 16 03:51:14.857087 systemd[1]: sshd@28-172.24.4.212:22-172.24.4.1:34504.service: Deactivated successfully. May 16 03:51:14.860185 systemd[1]: session-31.scope: Deactivated successfully. May 16 03:51:14.864828 systemd-logind[1465]: Session 31 logged out. Waiting for processes to exit. May 16 03:51:14.867237 systemd-logind[1465]: Removed session 31. May 16 03:51:19.479389 kubelet[2681]: E0516 03:51:19.478842 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:51:19.864674 systemd[1]: Started sshd@29-172.24.4.212:22-172.24.4.1:50074.service - OpenSSH per-connection server daemon (172.24.4.1:50074). May 16 03:51:21.007792 sshd[5060]: Accepted publickey for core from 172.24.4.1 port 50074 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:51:21.009909 sshd-session[5060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:51:21.017656 systemd-logind[1465]: New session 32 of user core. May 16 03:51:21.023515 systemd[1]: Started session-32.scope - Session 32 of User core. May 16 03:51:21.697364 sshd[5062]: Connection closed by 172.24.4.1 port 50074 May 16 03:51:21.697158 sshd-session[5060]: pam_unix(sshd:session): session closed for user core May 16 03:51:21.701799 systemd-logind[1465]: Session 32 logged out. Waiting for processes to exit. May 16 03:51:21.702048 systemd[1]: sshd@29-172.24.4.212:22-172.24.4.1:50074.service: Deactivated successfully. May 16 03:51:21.706553 systemd[1]: session-32.scope: Deactivated successfully. May 16 03:51:21.709696 systemd-logind[1465]: Removed session 32. May 16 03:51:24.479595 kubelet[2681]: E0516 03:51:24.479517 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:51:26.709538 systemd[1]: Started sshd@30-172.24.4.212:22-172.24.4.1:36356.service - OpenSSH per-connection server daemon (172.24.4.1:36356). May 16 03:51:27.677894 kubelet[2681]: E0516 03:51:27.677454 2681 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 16 03:51:27.677894 kubelet[2681]: E0516 03:51:27.677745 2681 kubelet.go:3102] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 16 03:51:27.859003 sshd[5077]: Accepted publickey for core from 172.24.4.1 port 36356 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:51:27.861243 sshd-session[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:51:27.868897 systemd-logind[1465]: New session 33 of user core. May 16 03:51:27.874992 systemd[1]: Started session-33.scope - Session 33 of User core. May 16 03:51:28.589611 sshd[5079]: Connection closed by 172.24.4.1 port 36356 May 16 03:51:28.590036 sshd-session[5077]: pam_unix(sshd:session): session closed for user core May 16 03:51:28.598081 systemd[1]: sshd@30-172.24.4.212:22-172.24.4.1:36356.service: Deactivated successfully. May 16 03:51:28.601996 systemd[1]: session-33.scope: Deactivated successfully. May 16 03:51:28.603281 systemd-logind[1465]: Session 33 logged out. Waiting for processes to exit. 
May 16 03:51:28.604981 systemd-logind[1465]: Removed session 33. May 16 03:51:29.480691 kubelet[2681]: E0516 03:51:29.480634 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:51:33.606452 systemd[1]: Started sshd@31-172.24.4.212:22-172.24.4.1:34208.service - OpenSSH per-connection server daemon (172.24.4.1:34208). May 16 03:51:34.481549 kubelet[2681]: E0516 03:51:34.481484 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:51:34.829448 sshd[5102]: Accepted publickey for core from 172.24.4.1 port 34208 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:51:34.829764 sshd-session[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:51:34.837395 systemd-logind[1465]: New session 34 of user core. May 16 03:51:34.840538 systemd[1]: Started session-34.scope - Session 34 of User core. May 16 03:51:35.518022 sshd[5104]: Connection closed by 172.24.4.1 port 34208 May 16 03:51:35.519305 sshd-session[5102]: pam_unix(sshd:session): session closed for user core May 16 03:51:35.522581 systemd-logind[1465]: Session 34 logged out. Waiting for processes to exit. May 16 03:51:35.523609 systemd[1]: sshd@31-172.24.4.212:22-172.24.4.1:34208.service: Deactivated successfully. May 16 03:51:35.529607 systemd[1]: session-34.scope: Deactivated successfully. May 16 03:51:35.534746 systemd-logind[1465]: Removed session 34. May 16 03:51:39.482239 kubelet[2681]: E0516 03:51:39.482128 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:51:40.537571 systemd[1]: Started sshd@32-172.24.4.212:22-172.24.4.1:34212.service - OpenSSH per-connection server daemon (172.24.4.1:34212). May 16 03:51:41.852948 sshd[5117]: Accepted publickey for core from 172.24.4.1 port 34212 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:51:41.854411 sshd-session[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:51:41.864018 systemd-logind[1465]: New session 35 of user core. May 16 03:51:41.868495 systemd[1]: Started session-35.scope - Session 35 of User core. May 16 03:51:42.582570 sshd[5127]: Connection closed by 172.24.4.1 port 34212 May 16 03:51:42.583207 sshd-session[5117]: pam_unix(sshd:session): session closed for user core May 16 03:51:42.587021 systemd[1]: sshd@32-172.24.4.212:22-172.24.4.1:34212.service: Deactivated successfully. May 16 03:51:42.590219 systemd[1]: session-35.scope: Deactivated successfully. May 16 03:51:42.591207 systemd-logind[1465]: Session 35 logged out. Waiting for processes to exit. May 16 03:51:42.592457 systemd-logind[1465]: Removed session 35. May 16 03:51:43.087106 containerd[1487]: time="2025-05-16T03:51:43.085265807Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d\" id:\"5c05dba2a41cb96d628e96fd5c6f112ecf613408724849080f24593f72e6cd44\" pid:5152 exited_at:{seconds:1747367503 nanos:84572549}" May 16 03:51:44.482536 kubelet[2681]: E0516 03:51:44.482486 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:51:47.601241 systemd[1]: Started sshd@33-172.24.4.212:22-172.24.4.1:60738.service - OpenSSH per-connection server daemon (172.24.4.1:60738). 
May 16 03:51:48.784405 sshd[5165]: Accepted publickey for core from 172.24.4.1 port 60738 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:51:48.786668 sshd-session[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:51:48.794739 systemd-logind[1465]: New session 36 of user core. May 16 03:51:48.799557 systemd[1]: Started session-36.scope - Session 36 of User core. May 16 03:51:49.483672 kubelet[2681]: E0516 03:51:49.483620 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:51:49.553969 sshd[5167]: Connection closed by 172.24.4.1 port 60738 May 16 03:51:49.554946 sshd-session[5165]: pam_unix(sshd:session): session closed for user core May 16 03:51:49.559634 systemd[1]: sshd@33-172.24.4.212:22-172.24.4.1:60738.service: Deactivated successfully. May 16 03:51:49.563906 systemd[1]: session-36.scope: Deactivated successfully. May 16 03:51:49.565934 systemd-logind[1465]: Session 36 logged out. Waiting for processes to exit. May 16 03:51:49.567785 systemd-logind[1465]: Removed session 36. May 16 03:51:54.484822 kubelet[2681]: E0516 03:51:54.484771 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:51:54.574875 systemd[1]: Started sshd@34-172.24.4.212:22-172.24.4.1:59850.service - OpenSSH per-connection server daemon (172.24.4.1:59850). May 16 03:51:55.814602 sshd[5180]: Accepted publickey for core from 172.24.4.1 port 59850 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:51:55.816308 sshd-session[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:51:55.825956 systemd-logind[1465]: New session 37 of user core. May 16 03:51:55.832518 systemd[1]: Started session-37.scope - Session 37 of User core. May 16 03:51:56.579137 sshd[5182]: Connection closed by 172.24.4.1 port 59850 May 16 03:51:56.580448 sshd-session[5180]: pam_unix(sshd:session): session closed for user core May 16 03:51:56.583593 systemd[1]: sshd@34-172.24.4.212:22-172.24.4.1:59850.service: Deactivated successfully. May 16 03:51:56.586600 systemd[1]: session-37.scope: Deactivated successfully. May 16 03:51:56.589198 systemd-logind[1465]: Session 37 logged out. Waiting for processes to exit. May 16 03:51:56.591295 systemd-logind[1465]: Removed session 37. May 16 03:51:59.485882 kubelet[2681]: E0516 03:51:59.485766 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:52:01.590961 systemd[1]: Started sshd@35-172.24.4.212:22-172.24.4.1:59854.service - OpenSSH per-connection server daemon (172.24.4.1:59854). May 16 03:52:02.777559 sshd[5198]: Accepted publickey for core from 172.24.4.1 port 59854 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:52:02.780959 sshd-session[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:52:02.798527 systemd-logind[1465]: New session 38 of user core. May 16 03:52:02.803666 systemd[1]: Started session-38.scope - Session 38 of User core. May 16 03:52:03.645307 sshd[5200]: Connection closed by 172.24.4.1 port 59854 May 16 03:52:03.648775 sshd-session[5198]: pam_unix(sshd:session): session closed for user core May 16 03:52:03.656753 systemd[1]: sshd@35-172.24.4.212:22-172.24.4.1:59854.service: Deactivated successfully. May 16 03:52:03.661457 systemd[1]: session-38.scope: Deactivated successfully. May 16 03:52:03.665527 systemd-logind[1465]: Session 38 logged out. Waiting for processes to exit. 
May 16 03:52:03.669506 systemd-logind[1465]: Removed session 38. May 16 03:52:04.487057 kubelet[2681]: E0516 03:52:04.486989 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:52:08.659723 systemd[1]: Started sshd@36-172.24.4.212:22-172.24.4.1:56336.service - OpenSSH per-connection server daemon (172.24.4.1:56336). May 16 03:52:09.487647 kubelet[2681]: E0516 03:52:09.487587 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:52:09.804421 sshd[5213]: Accepted publickey for core from 172.24.4.1 port 56336 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:52:09.805621 sshd-session[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:52:09.811550 systemd-logind[1465]: New session 39 of user core. May 16 03:52:09.816717 systemd[1]: Started session-39.scope - Session 39 of User core. May 16 03:52:10.492463 sshd[5215]: Connection closed by 172.24.4.1 port 56336 May 16 03:52:10.494606 sshd-session[5213]: pam_unix(sshd:session): session closed for user core May 16 03:52:10.500249 systemd[1]: sshd@36-172.24.4.212:22-172.24.4.1:56336.service: Deactivated successfully. May 16 03:52:10.502652 systemd[1]: session-39.scope: Deactivated successfully. May 16 03:52:10.503631 systemd-logind[1465]: Session 39 logged out. Waiting for processes to exit. May 16 03:52:10.505081 systemd-logind[1465]: Removed session 39. May 16 03:52:13.024678 containerd[1487]: time="2025-05-16T03:52:13.024557671Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d\" id:\"b62151676d6ae11d09b95099eab89f20b49647bf0ef21fcf3cf8e5127b7758c6\" pid:5240 exited_at:{seconds:1747367533 nanos:23764220}" May 16 03:52:14.488543 kubelet[2681]: E0516 03:52:14.488325 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:52:15.518916 systemd[1]: Started sshd@37-172.24.4.212:22-172.24.4.1:53420.service - OpenSSH per-connection server daemon (172.24.4.1:53420). May 16 03:52:16.803597 sshd[5252]: Accepted publickey for core from 172.24.4.1 port 53420 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:52:16.805132 sshd-session[5252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:52:16.813502 systemd-logind[1465]: New session 40 of user core. May 16 03:52:16.816522 systemd[1]: Started session-40.scope - Session 40 of User core. May 16 03:52:17.573565 sshd[5254]: Connection closed by 172.24.4.1 port 53420 May 16 03:52:17.575202 sshd-session[5252]: pam_unix(sshd:session): session closed for user core May 16 03:52:17.581142 systemd[1]: sshd@37-172.24.4.212:22-172.24.4.1:53420.service: Deactivated successfully. May 16 03:52:17.585765 systemd[1]: session-40.scope: Deactivated successfully. May 16 03:52:17.588104 systemd-logind[1465]: Session 40 logged out. Waiting for processes to exit. May 16 03:52:17.590046 systemd-logind[1465]: Removed session 40. May 16 03:52:19.489007 kubelet[2681]: E0516 03:52:19.488917 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:52:22.588277 systemd[1]: Started sshd@38-172.24.4.212:22-172.24.4.1:53426.service - OpenSSH per-connection server daemon (172.24.4.1:53426). 
May 16 03:52:23.775926 sshd[5270]: Accepted publickey for core from 172.24.4.1 port 53426 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:52:23.778143 sshd-session[5270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:52:23.786015 systemd-logind[1465]: New session 41 of user core. May 16 03:52:23.789493 systemd[1]: Started session-41.scope - Session 41 of User core. May 16 03:52:24.489861 kubelet[2681]: E0516 03:52:24.489800 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:52:24.505382 sshd[5272]: Connection closed by 172.24.4.1 port 53426 May 16 03:52:24.506920 sshd-session[5270]: pam_unix(sshd:session): session closed for user core May 16 03:52:24.512982 systemd[1]: sshd@38-172.24.4.212:22-172.24.4.1:53426.service: Deactivated successfully. May 16 03:52:24.516302 systemd[1]: session-41.scope: Deactivated successfully. May 16 03:52:24.517898 systemd-logind[1465]: Session 41 logged out. Waiting for processes to exit. May 16 03:52:24.519572 systemd-logind[1465]: Removed session 41. May 16 03:52:29.490920 kubelet[2681]: E0516 03:52:29.490849 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:52:29.527362 systemd[1]: Started sshd@39-172.24.4.212:22-172.24.4.1:44844.service - OpenSSH per-connection server daemon (172.24.4.1:44844). May 16 03:52:30.763382 sshd[5287]: Accepted publickey for core from 172.24.4.1 port 44844 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:52:30.766187 sshd-session[5287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:52:30.782549 systemd-logind[1465]: New session 42 of user core. May 16 03:52:30.785598 systemd[1]: Started session-42.scope - Session 42 of User core. May 16 03:52:31.492636 sshd[5290]: Connection closed by 172.24.4.1 port 44844 May 16 03:52:31.494519 sshd-session[5287]: pam_unix(sshd:session): session closed for user core May 16 03:52:31.501880 systemd[1]: sshd@39-172.24.4.212:22-172.24.4.1:44844.service: Deactivated successfully. May 16 03:52:31.505785 systemd[1]: session-42.scope: Deactivated successfully. May 16 03:52:31.508519 systemd-logind[1465]: Session 42 logged out. Waiting for processes to exit. May 16 03:52:31.512034 systemd-logind[1465]: Removed session 42. May 16 03:52:34.492007 kubelet[2681]: E0516 03:52:34.491842 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:52:36.536770 systemd[1]: Started sshd@40-172.24.4.212:22-172.24.4.1:33668.service - OpenSSH per-connection server daemon (172.24.4.1:33668). May 16 03:52:37.698041 sshd[5303]: Accepted publickey for core from 172.24.4.1 port 33668 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:52:37.700217 sshd-session[5303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:52:37.708935 systemd-logind[1465]: New session 43 of user core. May 16 03:52:37.714683 systemd[1]: Started session-43.scope - Session 43 of User core. May 16 03:52:38.605928 sshd[5305]: Connection closed by 172.24.4.1 port 33668 May 16 03:52:38.605670 sshd-session[5303]: pam_unix(sshd:session): session closed for user core May 16 03:52:38.610089 systemd-logind[1465]: Session 43 logged out. Waiting for processes to exit. May 16 03:52:38.610973 systemd[1]: sshd@40-172.24.4.212:22-172.24.4.1:33668.service: Deactivated successfully. May 16 03:52:38.613936 systemd[1]: session-43.scope: Deactivated successfully. 
May 16 03:52:38.616807 systemd-logind[1465]: Removed session 43. May 16 03:52:39.492025 kubelet[2681]: E0516 03:52:39.491979 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:52:43.208092 containerd[1487]: time="2025-05-16T03:52:43.207938848Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d\" id:\"1411b51ff242e14716c5a0501f40fa1f6f220abae274ea723a2b541c1308f338\" pid:5329 exited_at:{seconds:1747367563 nanos:207186776}" May 16 03:52:43.624696 systemd[1]: Started sshd@41-172.24.4.212:22-172.24.4.1:42690.service - OpenSSH per-connection server daemon (172.24.4.1:42690). May 16 03:52:44.493288 kubelet[2681]: E0516 03:52:44.493190 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:52:44.809662 sshd[5341]: Accepted publickey for core from 172.24.4.1 port 42690 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:52:44.811217 sshd-session[5341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:52:44.819730 systemd-logind[1465]: New session 44 of user core. May 16 03:52:44.822734 systemd[1]: Started session-44.scope - Session 44 of User core. May 16 03:52:45.577394 sshd[5344]: Connection closed by 172.24.4.1 port 42690 May 16 03:52:45.577899 sshd-session[5341]: pam_unix(sshd:session): session closed for user core May 16 03:52:45.582913 systemd[1]: sshd@41-172.24.4.212:22-172.24.4.1:42690.service: Deactivated successfully. May 16 03:52:45.590423 systemd[1]: session-44.scope: Deactivated successfully. May 16 03:52:45.592982 systemd-logind[1465]: Session 44 logged out. Waiting for processes to exit. May 16 03:52:45.594821 systemd-logind[1465]: Removed session 44. May 16 03:52:49.494255 kubelet[2681]: E0516 03:52:49.494191 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:52:50.593638 systemd[1]: Started sshd@42-172.24.4.212:22-172.24.4.1:42692.service - OpenSSH per-connection server daemon (172.24.4.1:42692). May 16 03:52:51.833305 sshd[5358]: Accepted publickey for core from 172.24.4.1 port 42692 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:52:51.837653 sshd-session[5358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:52:51.853500 systemd-logind[1465]: New session 45 of user core. May 16 03:52:51.858832 systemd[1]: Started session-45.scope - Session 45 of User core. May 16 03:52:52.740049 sshd[5360]: Connection closed by 172.24.4.1 port 42692 May 16 03:52:52.740682 sshd-session[5358]: pam_unix(sshd:session): session closed for user core May 16 03:52:52.750560 systemd[1]: sshd@42-172.24.4.212:22-172.24.4.1:42692.service: Deactivated successfully. May 16 03:52:52.761643 systemd[1]: session-45.scope: Deactivated successfully. May 16 03:52:52.766839 systemd-logind[1465]: Session 45 logged out. Waiting for processes to exit. May 16 03:52:52.770996 systemd-logind[1465]: Removed session 45. May 16 03:52:54.494316 kubelet[2681]: E0516 03:52:54.494267 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:52:57.758378 systemd[1]: Started sshd@43-172.24.4.212:22-172.24.4.1:38960.service - OpenSSH per-connection server daemon (172.24.4.1:38960). 
May 16 03:52:59.051956 sshd[5373]: Accepted publickey for core from 172.24.4.1 port 38960 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:52:59.055035 sshd-session[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:52:59.072674 systemd-logind[1465]: New session 46 of user core. May 16 03:52:59.078717 systemd[1]: Started session-46.scope - Session 46 of User core. May 16 03:52:59.494471 kubelet[2681]: E0516 03:52:59.494407 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:52:59.651767 sshd[5377]: Connection closed by 172.24.4.1 port 38960 May 16 03:52:59.651651 sshd-session[5373]: pam_unix(sshd:session): session closed for user core May 16 03:52:59.657282 systemd[1]: sshd@43-172.24.4.212:22-172.24.4.1:38960.service: Deactivated successfully. May 16 03:52:59.657516 systemd-logind[1465]: Session 46 logged out. Waiting for processes to exit. May 16 03:52:59.663228 systemd[1]: session-46.scope: Deactivated successfully. May 16 03:52:59.667891 systemd-logind[1465]: Removed session 46. May 16 03:53:04.495278 kubelet[2681]: E0516 03:53:04.495159 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" May 16 03:53:04.669652 systemd[1]: Started sshd@44-172.24.4.212:22-172.24.4.1:42314.service - OpenSSH per-connection server daemon (172.24.4.1:42314). May 16 03:53:05.811382 sshd[5390]: Accepted publickey for core from 172.24.4.1 port 42314 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:53:05.813265 sshd-session[5390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:53:05.821944 systemd-logind[1465]: New session 47 of user core. May 16 03:53:05.828589 systemd[1]: Started session-47.scope - Session 47 of User core. May 16 03:53:06.547594 sshd[5392]: Connection closed by 172.24.4.1 port 42314 May 16 03:53:06.550252 sshd-session[5390]: pam_unix(sshd:session): session closed for user core May 16 03:53:06.563631 systemd[1]: sshd@44-172.24.4.212:22-172.24.4.1:42314.service: Deactivated successfully. May 16 03:53:06.566174 systemd[1]: session-47.scope: Deactivated successfully. May 16 03:53:06.569012 systemd-logind[1465]: Session 47 logged out. Waiting for processes to exit. May 16 03:53:06.571587 systemd[1]: Started sshd@45-172.24.4.212:22-172.24.4.1:42326.service - OpenSSH per-connection server daemon (172.24.4.1:42326). May 16 03:53:06.573379 systemd-logind[1465]: Removed session 47. May 16 03:53:07.847424 sshd[5404]: Accepted publickey for core from 172.24.4.1 port 42326 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co May 16 03:53:07.848862 sshd-session[5404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 03:53:07.856801 systemd-logind[1465]: New session 48 of user core. May 16 03:53:07.861471 systemd[1]: Started session-48.scope - Session 48 of User core. May 16 03:53:08.756966 sshd[5407]: Connection closed by 172.24.4.1 port 42326 May 16 03:53:08.756944 sshd-session[5404]: pam_unix(sshd:session): session closed for user core May 16 03:53:08.768385 systemd[1]: sshd@45-172.24.4.212:22-172.24.4.1:42326.service: Deactivated successfully. May 16 03:53:08.770208 systemd[1]: session-48.scope: Deactivated successfully. May 16 03:53:08.772315 systemd-logind[1465]: Session 48 logged out. Waiting for processes to exit. 
May 16 03:53:08.774611 systemd[1]: Started sshd@46-172.24.4.212:22-172.24.4.1:42342.service - OpenSSH per-connection server daemon (172.24.4.1:42342).
May 16 03:53:08.780299 systemd-logind[1465]: Removed session 48.
May 16 03:53:09.496373 kubelet[2681]: E0516 03:53:09.496304 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
May 16 03:53:09.894439 sshd[5416]: Accepted publickey for core from 172.24.4.1 port 42342 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co
May 16 03:53:09.897925 sshd-session[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 03:53:09.912006 systemd-logind[1465]: New session 49 of user core.
May 16 03:53:09.917723 systemd[1]: Started session-49.scope - Session 49 of User core.
May 16 03:53:10.623745 sshd[5419]: Connection closed by 172.24.4.1 port 42342
May 16 03:53:10.624570 sshd-session[5416]: pam_unix(sshd:session): session closed for user core
May 16 03:53:10.628660 systemd-logind[1465]: Session 49 logged out. Waiting for processes to exit.
May 16 03:53:10.629766 systemd[1]: sshd@46-172.24.4.212:22-172.24.4.1:42342.service: Deactivated successfully.
May 16 03:53:10.633132 systemd[1]: session-49.scope: Deactivated successfully.
May 16 03:53:10.636823 systemd-logind[1465]: Removed session 49.
May 16 03:53:13.042195 containerd[1487]: time="2025-05-16T03:53:13.042085868Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d\" id:\"bf9832c96a13655fd27ca972c44854aec860b2d90925f7f6d1339ecc1dfb71ce\" pid:5448 exited_at:{seconds:1747367593 nanos:40288455}"
May 16 03:53:14.496772 kubelet[2681]: E0516 03:53:14.496722 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
May 16 03:53:15.645775 systemd[1]: Started sshd@47-172.24.4.212:22-172.24.4.1:59886.service - OpenSSH per-connection server daemon (172.24.4.1:59886).
May 16 03:53:16.919059 sshd[5463]: Accepted publickey for core from 172.24.4.1 port 59886 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co
May 16 03:53:16.919662 sshd-session[5463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 03:53:16.929414 systemd-logind[1465]: New session 50 of user core.
May 16 03:53:16.932136 systemd[1]: Started session-50.scope - Session 50 of User core.
May 16 03:53:17.553305 sshd[5473]: Connection closed by 172.24.4.1 port 59886
May 16 03:53:17.555037 sshd-session[5463]: pam_unix(sshd:session): session closed for user core
May 16 03:53:17.565570 systemd[1]: sshd@47-172.24.4.212:22-172.24.4.1:59886.service: Deactivated successfully.
May 16 03:53:17.567498 systemd[1]: session-50.scope: Deactivated successfully.
May 16 03:53:17.571408 systemd-logind[1465]: Session 50 logged out. Waiting for processes to exit.
May 16 03:53:17.580086 systemd[1]: Started sshd@48-172.24.4.212:22-172.24.4.1:59890.service - OpenSSH per-connection server daemon (172.24.4.1:59890).
May 16 03:53:17.583508 systemd-logind[1465]: Removed session 50.
May 16 03:53:18.809499 sshd[5486]: Accepted publickey for core from 172.24.4.1 port 59890 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co
May 16 03:53:18.813544 sshd-session[5486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 03:53:18.822580 systemd-logind[1465]: New session 51 of user core.
May 16 03:53:18.827547 systemd[1]: Started session-51.scope - Session 51 of User core.
May 16 03:53:19.496936 kubelet[2681]: E0516 03:53:19.496862 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
May 16 03:53:19.884544 sshd[5489]: Connection closed by 172.24.4.1 port 59890
May 16 03:53:19.884277 sshd-session[5486]: pam_unix(sshd:session): session closed for user core
May 16 03:53:19.898172 systemd[1]: sshd@48-172.24.4.212:22-172.24.4.1:59890.service: Deactivated successfully.
May 16 03:53:19.901490 systemd[1]: session-51.scope: Deactivated successfully.
May 16 03:53:19.904650 systemd-logind[1465]: Session 51 logged out. Waiting for processes to exit.
May 16 03:53:19.907445 systemd[1]: Started sshd@49-172.24.4.212:22-172.24.4.1:59894.service - OpenSSH per-connection server daemon (172.24.4.1:59894).
May 16 03:53:19.912708 systemd-logind[1465]: Removed session 51.
May 16 03:53:21.086063 sshd[5500]: Accepted publickey for core from 172.24.4.1 port 59894 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co
May 16 03:53:21.087421 sshd-session[5500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 03:53:21.095727 systemd-logind[1465]: New session 52 of user core.
May 16 03:53:21.106657 systemd[1]: Started session-52.scope - Session 52 of User core.
May 16 03:53:23.193460 sshd[5503]: Connection closed by 172.24.4.1 port 59894
May 16 03:53:23.195854 sshd-session[5500]: pam_unix(sshd:session): session closed for user core
May 16 03:53:23.217628 systemd[1]: sshd@49-172.24.4.212:22-172.24.4.1:59894.service: Deactivated successfully.
May 16 03:53:23.225020 systemd[1]: session-52.scope: Deactivated successfully.
May 16 03:53:23.227730 systemd-logind[1465]: Session 52 logged out. Waiting for processes to exit.
May 16 03:53:23.231941 systemd-logind[1465]: Removed session 52.
May 16 03:53:23.236727 systemd[1]: Started sshd@50-172.24.4.212:22-172.24.4.1:59906.service - OpenSSH per-connection server daemon (172.24.4.1:59906).
May 16 03:53:24.421758 sshd[5522]: Accepted publickey for core from 172.24.4.1 port 59906 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co
May 16 03:53:24.424698 sshd-session[5522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 03:53:24.443169 systemd-logind[1465]: New session 53 of user core.
May 16 03:53:24.447773 systemd[1]: Started session-53.scope - Session 53 of User core.
May 16 03:53:24.497718 kubelet[2681]: E0516 03:53:24.497639 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
May 16 03:53:25.440750 sshd[5525]: Connection closed by 172.24.4.1 port 59906
May 16 03:53:25.440555 sshd-session[5522]: pam_unix(sshd:session): session closed for user core
May 16 03:53:25.453142 systemd[1]: sshd@50-172.24.4.212:22-172.24.4.1:59906.service: Deactivated successfully.
May 16 03:53:25.456474 systemd[1]: session-53.scope: Deactivated successfully.
May 16 03:53:25.457935 systemd-logind[1465]: Session 53 logged out. Waiting for processes to exit.
May 16 03:53:25.461603 systemd[1]: Started sshd@51-172.24.4.212:22-172.24.4.1:46922.service - OpenSSH per-connection server daemon (172.24.4.1:46922).
May 16 03:53:25.463538 systemd-logind[1465]: Removed session 53.
May 16 03:53:26.704149 sshd[5535]: Accepted publickey for core from 172.24.4.1 port 46922 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co
May 16 03:53:26.709851 sshd-session[5535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 03:53:26.727463 systemd-logind[1465]: New session 54 of user core.
May 16 03:53:26.735266 systemd[1]: Started session-54.scope - Session 54 of User core.
May 16 03:53:27.435237 sshd[5538]: Connection closed by 172.24.4.1 port 46922
May 16 03:53:27.435987 sshd-session[5535]: pam_unix(sshd:session): session closed for user core
May 16 03:53:27.441501 systemd-logind[1465]: Session 54 logged out. Waiting for processes to exit.
May 16 03:53:27.442416 systemd[1]: sshd@51-172.24.4.212:22-172.24.4.1:46922.service: Deactivated successfully.
May 16 03:53:27.444915 systemd[1]: session-54.scope: Deactivated successfully.
May 16 03:53:27.447088 systemd-logind[1465]: Removed session 54.
May 16 03:53:29.498352 kubelet[2681]: E0516 03:53:29.498274 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
May 16 03:53:32.460103 systemd[1]: Started sshd@52-172.24.4.212:22-172.24.4.1:46932.service - OpenSSH per-connection server daemon (172.24.4.1:46932).
May 16 03:53:32.680008 kubelet[2681]: E0516 03:53:32.679858 2681 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 16 03:53:32.680008 kubelet[2681]: E0516 03:53:32.679954 2681 kubelet.go:3102] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 16 03:53:33.798679 sshd[5554]: Accepted publickey for core from 172.24.4.1 port 46932 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co
May 16 03:53:33.800220 sshd-session[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 03:53:33.807382 systemd-logind[1465]: New session 55 of user core.
May 16 03:53:33.814498 systemd[1]: Started session-55.scope - Session 55 of User core.
May 16 03:53:34.495320 sshd[5556]: Connection closed by 172.24.4.1 port 46932
May 16 03:53:34.497237 sshd-session[5554]: pam_unix(sshd:session): session closed for user core
May 16 03:53:34.499140 kubelet[2681]: E0516 03:53:34.499067 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
May 16 03:53:34.500678 systemd[1]: sshd@52-172.24.4.212:22-172.24.4.1:46932.service: Deactivated successfully.
May 16 03:53:34.502997 systemd[1]: session-55.scope: Deactivated successfully.
May 16 03:53:34.505062 systemd-logind[1465]: Session 55 logged out. Waiting for processes to exit.
May 16 03:53:34.508316 systemd-logind[1465]: Removed session 55.
May 16 03:53:39.500140 kubelet[2681]: E0516 03:53:39.500065 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
May 16 03:53:39.520008 systemd[1]: Started sshd@53-172.24.4.212:22-172.24.4.1:57680.service - OpenSSH per-connection server daemon (172.24.4.1:57680).
May 16 03:53:40.760557 sshd[5576]: Accepted publickey for core from 172.24.4.1 port 57680 ssh2: RSA SHA256:owno7cXPe7mCZ8El09DavZ3D/t1OlRFkGW4Z9BoK2co
May 16 03:53:40.763882 sshd-session[5576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 03:53:40.779035 systemd-logind[1465]: New session 56 of user core.
May 16 03:53:40.786558 systemd[1]: Started session-56.scope - Session 56 of User core.
May 16 03:53:41.627846 sshd[5578]: Connection closed by 172.24.4.1 port 57680
May 16 03:53:41.630060 sshd-session[5576]: pam_unix(sshd:session): session closed for user core
May 16 03:53:41.634581 systemd-logind[1465]: Session 56 logged out. Waiting for processes to exit.
May 16 03:53:41.635451 systemd[1]: sshd@53-172.24.4.212:22-172.24.4.1:57680.service: Deactivated successfully.
May 16 03:53:41.638748 systemd[1]: session-56.scope: Deactivated successfully.
May 16 03:53:41.640306 systemd-logind[1465]: Removed session 56.
May 16 03:53:43.155921 containerd[1487]: time="2025-05-16T03:53:43.154224979Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d\" id:\"9654e0084ee2842ad1917e1572375190431d8d9fb8bd8a28fba2779ab9b155a2\" pid:5603 exited_at:{seconds:1747367623 nanos:152567589}"
May 16 03:53:44.500988 kubelet[2681]: E0516 03:53:44.500926 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
May 16 03:53:49.501531 kubelet[2681]: E0516 03:53:49.501477 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
May 16 03:53:54.501731 kubelet[2681]: E0516 03:53:54.501673 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
May 16 03:53:59.501998 kubelet[2681]: E0516 03:53:59.501907 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
May 16 03:54:04.502878 kubelet[2681]: E0516 03:54:04.502791 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
May 16 03:54:09.502977 kubelet[2681]: E0516 03:54:09.502917 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
May 16 03:54:11.243434 kernel: hrtimer: interrupt took 1237872 ns
May 16 03:54:13.046397 containerd[1487]: time="2025-05-16T03:54:13.046243145Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d\" id:\"3ef03635e5d86b8a2fe1d4c5e69ab3f55fe7358db9fbf863b65f9f99ed45960c\" pid:5630 exited_at:{seconds:1747367653 nanos:45441311}"
May 16 03:54:14.503282 kubelet[2681]: E0516 03:54:14.503211 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
May 16 03:54:19.504255 kubelet[2681]: E0516 03:54:19.504188 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
May 16 03:54:24.505252 kubelet[2681]: E0516 03:54:24.504964 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
May 16 03:54:29.507887 kubelet[2681]: E0516 03:54:29.507838 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
May 16 03:54:34.508850 kubelet[2681]: E0516 03:54:34.508577 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
May 16 03:54:39.509553 kubelet[2681]: E0516 03:54:39.509476 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
May 16 03:54:43.205271 containerd[1487]: time="2025-05-16T03:54:43.205172960Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e630edace2659f20d2833eb103fb75650995ab662b04de22d0e7ec9403e110d\" id:\"2f63befcac80b7f39c2b256f36778e8d34ccdf62932785ef427d8da875e17122\" pid:5672 exited_at:{seconds:1747367683 nanos:203582037}"
May 16 03:54:44.511437 kubelet[2681]: E0516 03:54:44.510377 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"
May 16 03:54:49.510944 kubelet[2681]: E0516 03:54:49.510866 2681 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down"