May 14 01:31:15.080728 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 13 22:08:35 -00 2025 May 14 01:31:15.080758 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130 May 14 01:31:15.080769 kernel: BIOS-provided physical RAM map: May 14 01:31:15.080777 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 14 01:31:15.080784 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 14 01:31:15.080794 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 14 01:31:15.080803 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable May 14 01:31:15.080811 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved May 14 01:31:15.080819 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 14 01:31:15.080826 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 14 01:31:15.080834 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable May 14 01:31:15.080842 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 14 01:31:15.080850 kernel: NX (Execute Disable) protection: active May 14 01:31:15.080858 kernel: APIC: Static calls initialized May 14 01:31:15.080869 kernel: SMBIOS 3.0.0 present. May 14 01:31:15.080877 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 May 14 01:31:15.080885 kernel: Hypervisor detected: KVM May 14 01:31:15.080893 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 14 01:31:15.080901 kernel: kvm-clock: using sched offset of 3693785591 cycles May 14 01:31:15.080909 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 14 01:31:15.080920 kernel: tsc: Detected 1996.249 MHz processor May 14 01:31:15.080929 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 14 01:31:15.080938 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 14 01:31:15.080946 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 May 14 01:31:15.080955 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 14 01:31:15.080963 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 14 01:31:15.080972 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 May 14 01:31:15.080980 kernel: ACPI: Early table checksum verification disabled May 14 01:31:15.080990 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) May 14 01:31:15.080999 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 01:31:15.081007 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 01:31:15.081015 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 01:31:15.081023 kernel: ACPI: FACS 0x00000000BFFE0000 000040 May 14 01:31:15.081032 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 14 01:31:15.081040 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) May 14 01:31:15.081048 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] May 14 01:31:15.081057 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] May 14 01:31:15.081116 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] May 14 01:31:15.081125 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] May 14 01:31:15.081134 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] May 14 01:31:15.081145 kernel: No NUMA configuration found May 14 01:31:15.081154 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] May 14 01:31:15.081163 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] May 14 01:31:15.081171 kernel: Zone ranges: May 14 01:31:15.081182 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 14 01:31:15.081190 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 14 01:31:15.081199 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] May 14 01:31:15.081208 kernel: Movable zone start for each node May 14 01:31:15.081216 kernel: Early memory node ranges May 14 01:31:15.081225 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 14 01:31:15.081233 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] May 14 01:31:15.081242 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] May 14 01:31:15.081253 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] May 14 01:31:15.081262 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 14 01:31:15.081271 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 14 01:31:15.081280 kernel: On node 0, zone Normal: 35 pages in unavailable ranges May 14 01:31:15.081288 kernel: ACPI: PM-Timer IO Port: 0x608 May 14 01:31:15.081297 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 14 01:31:15.081306 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 14 01:31:15.081315 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 14 01:31:15.081323 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 14 01:31:15.081334 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 14 01:31:15.081343 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 14 01:31:15.081351 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 14 01:31:15.081360 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 14 01:31:15.081369 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 14 01:31:15.081378 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 14 01:31:15.081386 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices May 14 01:31:15.081395 kernel: Booting paravirtualized kernel on KVM May 14 01:31:15.081404 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 14 01:31:15.081415 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 14 01:31:15.081424 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 May 14 01:31:15.081433 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 14 01:31:15.081441 kernel: pcpu-alloc: [0] 0 1 May 14 01:31:15.081449 kernel: kvm-guest: PV spinlocks disabled, no host support May 14 01:31:15.081460 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130 May 14 01:31:15.081469 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 14 01:31:15.081478 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 14 01:31:15.081489 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 14 01:31:15.081498 kernel: Fallback order for Node 0: 0 May 14 01:31:15.081506 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 May 14 01:31:15.081515 kernel: Policy zone: Normal May 14 01:31:15.081524 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 14 01:31:15.081532 kernel: software IO TLB: area num 2. May 14 01:31:15.081541 kernel: Memory: 3962108K/4193772K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 231404K reserved, 0K cma-reserved) May 14 01:31:15.081550 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 14 01:31:15.081558 kernel: ftrace: allocating 37993 entries in 149 pages May 14 01:31:15.081570 kernel: ftrace: allocated 149 pages with 4 groups May 14 01:31:15.081578 kernel: Dynamic Preempt: voluntary May 14 01:31:15.081587 kernel: rcu: Preemptible hierarchical RCU implementation. May 14 01:31:15.081599 kernel: rcu: RCU event tracing is enabled. May 14 01:31:15.081608 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 14 01:31:15.081617 kernel: Trampoline variant of Tasks RCU enabled. May 14 01:31:15.081626 kernel: Rude variant of Tasks RCU enabled. May 14 01:31:15.081635 kernel: Tracing variant of Tasks RCU enabled. May 14 01:31:15.081644 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 14 01:31:15.081655 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 14 01:31:15.081664 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 14 01:31:15.081672 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 14 01:31:15.081681 kernel: Console: colour VGA+ 80x25 May 14 01:31:15.081690 kernel: printk: console [tty0] enabled May 14 01:31:15.081698 kernel: printk: console [ttyS0] enabled May 14 01:31:15.081707 kernel: ACPI: Core revision 20230628 May 14 01:31:15.081716 kernel: APIC: Switch to symmetric I/O mode setup May 14 01:31:15.081725 kernel: x2apic enabled May 14 01:31:15.081736 kernel: APIC: Switched APIC routing to: physical x2apic May 14 01:31:15.081744 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 14 01:31:15.081753 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 14 01:31:15.081762 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) May 14 01:31:15.081771 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 14 01:31:15.081780 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 14 01:31:15.081788 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 14 01:31:15.081797 kernel: Spectre V2 : Mitigation: Retpolines May 14 01:31:15.081806 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 14 01:31:15.081816 kernel: Speculative Store Bypass: Vulnerable May 14 01:31:15.081825 kernel: x86/fpu: x87 FPU will use FXSAVE May 14 01:31:15.081834 kernel: Freeing SMP alternatives memory: 32K May 14 01:31:15.081843 kernel: pid_max: default: 32768 minimum: 301 May 14 01:31:15.081858 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 14 01:31:15.081870 kernel: landlock: Up and running. May 14 01:31:15.081879 kernel: SELinux: Initializing. May 14 01:31:15.081888 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 01:31:15.081898 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 01:31:15.081907 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) May 14 01:31:15.081917 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 14 01:31:15.081926 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 14 01:31:15.081938 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 14 01:31:15.081947 kernel: Performance Events: AMD PMU driver. May 14 01:31:15.081956 kernel: ... version: 0 May 14 01:31:15.081966 kernel: ... bit width: 48 May 14 01:31:15.081975 kernel: ... generic registers: 4 May 14 01:31:15.081986 kernel: ... value mask: 0000ffffffffffff May 14 01:31:15.081996 kernel: ... max period: 00007fffffffffff May 14 01:31:15.082004 kernel: ... fixed-purpose events: 0 May 14 01:31:15.082013 kernel: ... event mask: 000000000000000f May 14 01:31:15.082022 kernel: signal: max sigframe size: 1440 May 14 01:31:15.082031 kernel: rcu: Hierarchical SRCU implementation. May 14 01:31:15.082041 kernel: rcu: Max phase no-delay instances is 400. May 14 01:31:15.082050 kernel: smp: Bringing up secondary CPUs ... May 14 01:31:15.082500 kernel: smpboot: x86: Booting SMP configuration: May 14 01:31:15.082521 kernel: .... 
node #0, CPUs: #1 May 14 01:31:15.082530 kernel: smp: Brought up 1 node, 2 CPUs May 14 01:31:15.082539 kernel: smpboot: Max logical packages: 2 May 14 01:31:15.082549 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) May 14 01:31:15.082558 kernel: devtmpfs: initialized May 14 01:31:15.082567 kernel: x86/mm: Memory block size: 128MB May 14 01:31:15.082576 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 14 01:31:15.082586 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 14 01:31:15.082595 kernel: pinctrl core: initialized pinctrl subsystem May 14 01:31:15.082607 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 14 01:31:15.082616 kernel: audit: initializing netlink subsys (disabled) May 14 01:31:15.082625 kernel: audit: type=2000 audit(1747186273.394:1): state=initialized audit_enabled=0 res=1 May 14 01:31:15.082634 kernel: thermal_sys: Registered thermal governor 'step_wise' May 14 01:31:15.082643 kernel: thermal_sys: Registered thermal governor 'user_space' May 14 01:31:15.082653 kernel: cpuidle: using governor menu May 14 01:31:15.082662 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 14 01:31:15.082671 kernel: dca service started, version 1.12.1 May 14 01:31:15.082680 kernel: PCI: Using configuration type 1 for base access May 14 01:31:15.082692 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 14 01:31:15.082701 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 14 01:31:15.082710 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 14 01:31:15.082719 kernel: ACPI: Added _OSI(Module Device) May 14 01:31:15.082728 kernel: ACPI: Added _OSI(Processor Device) May 14 01:31:15.082737 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 14 01:31:15.082746 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 14 01:31:15.082756 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 14 01:31:15.082765 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 14 01:31:15.082776 kernel: ACPI: Interpreter enabled May 14 01:31:15.082785 kernel: ACPI: PM: (supports S0 S3 S5) May 14 01:31:15.082794 kernel: ACPI: Using IOAPIC for interrupt routing May 14 01:31:15.082803 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 14 01:31:15.082813 kernel: PCI: Using E820 reservations for host bridge windows May 14 01:31:15.082822 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 14 01:31:15.082831 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 14 01:31:15.082980 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 14 01:31:15.083872 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 14 01:31:15.083984 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 14 01:31:15.083999 kernel: acpiphp: Slot [3] registered May 14 01:31:15.084009 kernel: acpiphp: Slot [4] registered May 14 01:31:15.084019 kernel: acpiphp: Slot [5] registered May 14 01:31:15.084029 kernel: acpiphp: Slot [6] registered May 14 01:31:15.084039 kernel: acpiphp: Slot [7] registered May 14 01:31:15.084048 kernel: acpiphp: Slot [8] registered May 14 01:31:15.084098 kernel: acpiphp: Slot [9] registered May 14 01:31:15.084135 kernel: acpiphp: Slot [10] registered May 14 01:31:15.084145 
kernel: acpiphp: Slot [11] registered May 14 01:31:15.084155 kernel: acpiphp: Slot [12] registered May 14 01:31:15.084164 kernel: acpiphp: Slot [13] registered May 14 01:31:15.084174 kernel: acpiphp: Slot [14] registered May 14 01:31:15.084183 kernel: acpiphp: Slot [15] registered May 14 01:31:15.084193 kernel: acpiphp: Slot [16] registered May 14 01:31:15.084202 kernel: acpiphp: Slot [17] registered May 14 01:31:15.084212 kernel: acpiphp: Slot [18] registered May 14 01:31:15.084224 kernel: acpiphp: Slot [19] registered May 14 01:31:15.084234 kernel: acpiphp: Slot [20] registered May 14 01:31:15.084243 kernel: acpiphp: Slot [21] registered May 14 01:31:15.084253 kernel: acpiphp: Slot [22] registered May 14 01:31:15.084263 kernel: acpiphp: Slot [23] registered May 14 01:31:15.084272 kernel: acpiphp: Slot [24] registered May 14 01:31:15.084295 kernel: acpiphp: Slot [25] registered May 14 01:31:15.084305 kernel: acpiphp: Slot [26] registered May 14 01:31:15.084314 kernel: acpiphp: Slot [27] registered May 14 01:31:15.084325 kernel: acpiphp: Slot [28] registered May 14 01:31:15.084337 kernel: acpiphp: Slot [29] registered May 14 01:31:15.084347 kernel: acpiphp: Slot [30] registered May 14 01:31:15.084357 kernel: acpiphp: Slot [31] registered May 14 01:31:15.084366 kernel: PCI host bridge to bus 0000:00 May 14 01:31:15.084490 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 14 01:31:15.084579 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 14 01:31:15.084663 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 14 01:31:15.084753 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 14 01:31:15.084837 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] May 14 01:31:15.084919 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 14 01:31:15.085036 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 14 01:31:15.086212 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 14 01:31:15.086321 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 May 14 01:31:15.086422 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] May 14 01:31:15.086527 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 14 01:31:15.086622 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 14 01:31:15.086716 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 14 01:31:15.086811 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 14 01:31:15.086913 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 May 14 01:31:15.087010 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI May 14 01:31:15.089825 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB May 14 01:31:15.089934 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 May 14 01:31:15.090028 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] May 14 01:31:15.090142 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] May 14 01:31:15.090239 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] May 14 01:31:15.090332 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] May 14 01:31:15.090425 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 14 01:31:15.090534 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 May 14 01:31:15.090629 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] May 14 01:31:15.090724 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] May 14 01:31:15.090818 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] May 14 01:31:15.090912 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] May 14 01:31:15.091015 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 May 14 01:31:15.091373 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] May 14 01:31:15.091478 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] May 14 01:31:15.091574 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] May 14 01:31:15.091685 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 May 14 01:31:15.091783 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] May 14 01:31:15.091879 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] May 14 01:31:15.091985 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 May 14 01:31:15.093127 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] May 14 01:31:15.093237 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] May 14 01:31:15.093332 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] May 14 01:31:15.093347 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 14 01:31:15.093357 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 14 01:31:15.093367 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 14 01:31:15.093376 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 14 01:31:15.093385 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 14 01:31:15.093395 kernel: iommu: Default domain type: Translated May 14 01:31:15.093408 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 14 01:31:15.093417 kernel: PCI: Using ACPI for IRQ routing May 14 01:31:15.093427 kernel: PCI: pci_cache_line_size set to 64 bytes May 14 01:31:15.093436 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 14 01:31:15.093445 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] May 14 01:31:15.093537 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device May 14 01:31:15.093629 kernel: pci 0000:00:02.0: vgaarb: bridge control possible May 14 01:31:15.093721 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 14 01:31:15.093735 kernel: vgaarb: loaded May 14 01:31:15.093748 kernel: clocksource: Switched to clocksource kvm-clock May 14 01:31:15.093757 kernel: VFS: Disk quotas dquot_6.6.0 May 14 01:31:15.093767 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 14 01:31:15.093776 kernel: pnp: PnP ACPI init May 14 01:31:15.093876 kernel: pnp 00:03: [dma 2] May 14 01:31:15.093892 kernel: pnp: PnP ACPI: found 5 devices May 14 01:31:15.093902 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 14 01:31:15.093911 kernel: NET: Registered PF_INET protocol family May 14 01:31:15.093925 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 14 01:31:15.093934 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 14 01:31:15.093943 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 14 01:31:15.093953 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 14 01:31:15.093962 kernel: TCP bind hash table entries: 
32768 (order: 8, 1048576 bytes, linear) May 14 01:31:15.093972 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 14 01:31:15.093981 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 01:31:15.093990 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 01:31:15.094000 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 14 01:31:15.094011 kernel: NET: Registered PF_XDP protocol family May 14 01:31:15.095688 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 14 01:31:15.095782 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 14 01:31:15.095864 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 14 01:31:15.095946 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] May 14 01:31:15.096029 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] May 14 01:31:15.096191 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release May 14 01:31:15.096308 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 14 01:31:15.096329 kernel: PCI: CLS 0 bytes, default 64 May 14 01:31:15.096338 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 14 01:31:15.096348 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) May 14 01:31:15.096358 kernel: Initialise system trusted keyrings May 14 01:31:15.096368 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 14 01:31:15.096377 kernel: Key type asymmetric registered May 14 01:31:15.096387 kernel: Asymmetric key parser 'x509' registered May 14 01:31:15.096396 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 14 01:31:15.096406 kernel: io scheduler mq-deadline registered May 14 01:31:15.096418 kernel: io scheduler kyber registered May 14 01:31:15.096427 kernel: io scheduler bfq registered May 14 01:31:15.096436 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 14 01:31:15.096447 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 May 14 01:31:15.096456 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 May 14 01:31:15.096466 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 14 01:31:15.096475 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 14 01:31:15.096485 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 14 01:31:15.096494 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 14 01:31:15.096505 kernel: random: crng init done May 14 01:31:15.096515 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 14 01:31:15.096524 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 14 01:31:15.096533 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 14 01:31:15.096630 kernel: rtc_cmos 00:04: RTC can wake from S4 May 14 01:31:15.096645 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 14 01:31:15.096728 kernel: rtc_cmos 00:04: registered as rtc0 May 14 01:31:15.096813 kernel: rtc_cmos 00:04: setting system clock to 2025-05-14T01:31:14 UTC (1747186274) May 14 01:31:15.096903 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 14 01:31:15.096917 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 14 01:31:15.096927 kernel: NET: Registered PF_INET6 protocol family May 14 01:31:15.096936 kernel: Segment Routing with IPv6 May 14 01:31:15.096945 kernel: In-situ OAM (IOAM) with IPv6 May 14 01:31:15.096955 kernel: NET: Registered PF_PACKET 
protocol family May 14 01:31:15.096964 kernel: Key type dns_resolver registered May 14 01:31:15.096973 kernel: IPI shorthand broadcast: enabled May 14 01:31:15.096982 kernel: sched_clock: Marking stable (1043007757, 172610243)->(1237833907, -22215907) May 14 01:31:15.096995 kernel: registered taskstats version 1 May 14 01:31:15.097004 kernel: Loading compiled-in X.509 certificates May 14 01:31:15.097013 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 166efda032ca4d6e9037c569aca9b53585ee6f94' May 14 01:31:15.097023 kernel: Key type .fscrypt registered May 14 01:31:15.097032 kernel: Key type fscrypt-provisioning registered May 14 01:31:15.097041 kernel: ima: No TPM chip found, activating TPM-bypass! May 14 01:31:15.097051 kernel: ima: Allocated hash algorithm: sha1 May 14 01:31:15.097702 kernel: ima: No architecture policies found May 14 01:31:15.097721 kernel: clk: Disabling unused clocks May 14 01:31:15.097730 kernel: Freeing unused kernel image (initmem) memory: 43604K May 14 01:31:15.097740 kernel: Write protecting the kernel read-only data: 40960k May 14 01:31:15.097749 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K May 14 01:31:15.097759 kernel: Run /init as init process May 14 01:31:15.097768 kernel: with arguments: May 14 01:31:15.097777 kernel: /init May 14 01:31:15.097787 kernel: with environment: May 14 01:31:15.097796 kernel: HOME=/ May 14 01:31:15.097805 kernel: TERM=linux May 14 01:31:15.097816 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 01:31:15.097828 systemd[1]: Successfully made /usr/ read-only. May 14 01:31:15.097842 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 01:31:15.097852 systemd[1]: Detected virtualization kvm. May 14 01:31:15.097862 systemd[1]: Detected architecture x86-64. May 14 01:31:15.097872 systemd[1]: Running in initrd. May 14 01:31:15.097882 systemd[1]: No hostname configured, using default hostname. May 14 01:31:15.097894 systemd[1]: Hostname set to . May 14 01:31:15.097904 systemd[1]: Initializing machine ID from VM UUID. May 14 01:31:15.097914 systemd[1]: Queued start job for default target initrd.target. May 14 01:31:15.097924 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 01:31:15.097934 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 01:31:15.097945 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 14 01:31:15.097963 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 01:31:15.097975 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 14 01:31:15.097986 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 14 01:31:15.097998 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 14 01:31:15.098008 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
May 14 01:31:15.098018 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 01:31:15.098031 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 01:31:15.098041 systemd[1]: Reached target paths.target - Path Units. May 14 01:31:15.098051 systemd[1]: Reached target slices.target - Slice Units. May 14 01:31:15.098113 systemd[1]: Reached target swap.target - Swaps. May 14 01:31:15.098124 systemd[1]: Reached target timers.target - Timer Units. May 14 01:31:15.098134 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 14 01:31:15.098144 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 01:31:15.098154 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 14 01:31:15.098164 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 14 01:31:15.098187 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 01:31:15.098199 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 01:31:15.098209 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 01:31:15.098219 systemd[1]: Reached target sockets.target - Socket Units. May 14 01:31:15.098229 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 14 01:31:15.098239 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 01:31:15.098249 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 14 01:31:15.098260 systemd[1]: Starting systemd-fsck-usr.service... May 14 01:31:15.098272 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 01:31:15.098282 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 01:31:15.098292 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 01:31:15.098303 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 14 01:31:15.098341 systemd-journald[184]: Collecting audit messages is disabled. May 14 01:31:15.098370 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 01:31:15.098381 systemd[1]: Finished systemd-fsck-usr.service. May 14 01:31:15.098392 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 01:31:15.098406 systemd-journald[184]: Journal started May 14 01:31:15.098430 systemd-journald[184]: Runtime Journal (/run/log/journal/20645771f0ea477aa99f18433c4291cd) is 8M, max 78.2M, 70.2M free. May 14 01:31:15.069813 systemd-modules-load[185]: Inserted module 'overlay' May 14 01:31:15.101110 systemd[1]: Started systemd-journald.service - Journal Service. May 14 01:31:15.110087 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 14 01:31:15.112505 kernel: Bridge firewalling registered May 14 01:31:15.111822 systemd-modules-load[185]: Inserted module 'br_netfilter' May 14 01:31:15.146939 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 01:31:15.155196 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 01:31:15.160833 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 01:31:15.164360 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
May 14 01:31:15.169299 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 01:31:15.172473 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 01:31:15.184465 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 01:31:15.192145 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 01:31:15.195352 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 01:31:15.204269 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 14 01:31:15.207404 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 01:31:15.208970 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 01:31:15.225192 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 01:31:15.234780 dracut-cmdline[217]: dracut-dracut-053 May 14 01:31:15.238343 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130 May 14 01:31:15.276146 systemd-resolved[221]: Positive Trust Anchors: May 14 01:31:15.276159 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 01:31:15.276203 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 01:31:15.279695 systemd-resolved[221]: Defaulting to hostname 'linux'. May 14 01:31:15.280665 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 01:31:15.283662 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 01:31:15.325114 kernel: SCSI subsystem initialized May 14 01:31:15.336149 kernel: Loading iSCSI transport class v2.0-870. May 14 01:31:15.349137 kernel: iscsi: registered transport (tcp) May 14 01:31:15.371665 kernel: iscsi: registered transport (qla4xxx) May 14 01:31:15.371729 kernel: QLogic iSCSI HBA Driver May 14 01:31:15.433958 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 14 01:31:15.437428 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 14 01:31:15.500417 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 14 01:31:15.500525 kernel: device-mapper: uevent: version 1.0.3 May 14 01:31:15.503181 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 14 01:31:15.564188 kernel: raid6: sse2x4 gen() 5198 MB/s May 14 01:31:15.583173 kernel: raid6: sse2x2 gen() 5998 MB/s May 14 01:31:15.601517 kernel: raid6: sse2x1 gen() 9502 MB/s May 14 01:31:15.601603 kernel: raid6: using algorithm sse2x1 gen() 9502 MB/s May 14 01:31:15.620700 kernel: raid6: .... xor() 7261 MB/s, rmw enabled May 14 01:31:15.620771 kernel: raid6: using ssse3x2 recovery algorithm May 14 01:31:15.644754 kernel: xor: measuring software checksum speed May 14 01:31:15.644820 kernel: prefetch64-sse : 17286 MB/sec May 14 01:31:15.645306 kernel: generic_sse : 15696 MB/sec May 14 01:31:15.647477 kernel: xor: using function: prefetch64-sse (17286 MB/sec) May 14 01:31:15.821144 kernel: Btrfs loaded, zoned=no, fsverity=no May 14 01:31:15.834703 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 14 01:31:15.838225 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 01:31:15.864908 systemd-udevd[405]: Using default interface naming scheme 'v255'. May 14 01:31:15.869789 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 01:31:15.877622 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 14 01:31:15.899005 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation May 14 01:31:15.946263 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 14 01:31:15.950494 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 01:31:16.032780 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 01:31:16.041135 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 14 01:31:16.084368 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 14 01:31:16.088769 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 14 01:31:16.090752 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 01:31:16.092311 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 01:31:16.098774 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 14 01:31:16.125894 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 14 01:31:16.136089 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues May 14 01:31:16.140463 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) May 14 01:31:16.154470 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 01:31:16.155361 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 01:31:16.156757 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 01:31:16.158007 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 01:31:16.158161 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 01:31:16.159183 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 14 01:31:16.162876 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 01:31:16.171128 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
May 14 01:31:16.171152 kernel: GPT:17805311 != 20971519 May 14 01:31:16.171164 kernel: GPT:Alternate GPT header not at the end of the disk. May 14 01:31:16.171183 kernel: GPT:17805311 != 20971519 May 14 01:31:16.171194 kernel: GPT: Use GNU Parted to correct GPT errors. May 14 01:31:16.171206 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 01:31:16.171217 kernel: libata version 3.00 loaded. May 14 01:31:16.171229 kernel: ata_piix 0000:00:01.1: version 2.13 May 14 01:31:16.163862 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 01:31:16.178970 kernel: scsi host0: ata_piix May 14 01:31:16.179207 kernel: scsi host1: ata_piix May 14 01:31:16.183863 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 May 14 01:31:16.183911 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 May 14 01:31:16.221108 kernel: BTRFS: device fsid d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (462) May 14 01:31:16.229074 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (474) May 14 01:31:16.236443 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 14 01:31:16.279677 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 01:31:16.311508 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 14 01:31:16.321156 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 14 01:31:16.321902 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 14 01:31:16.333994 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 14 01:31:16.336623 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 14 01:31:16.349359 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 01:31:16.364757 disk-uuid[509]: Primary Header is updated. May 14 01:31:16.364757 disk-uuid[509]: Secondary Entries is updated. May 14 01:31:16.364757 disk-uuid[509]: Secondary Header is updated. May 14 01:31:16.374398 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 01:31:16.403977 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 01:31:17.387201 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 01:31:17.390126 disk-uuid[510]: The operation has completed successfully. May 14 01:31:17.452658 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 01:31:17.454467 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 14 01:31:17.517707 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 14 01:31:17.539047 sh[530]: Success May 14 01:31:17.557122 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" May 14 01:31:17.630210 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 14 01:31:17.633162 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 14 01:31:17.650885 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 14 01:31:17.669780 kernel: BTRFS info (device dm-0): first mount of filesystem d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 May 14 01:31:17.669822 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 14 01:31:17.674509 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 14 01:31:17.679388 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 14 01:31:17.685485 kernel: BTRFS info (device dm-0): using free space tree May 14 01:31:17.705344 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 14 01:31:17.707556 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 14 01:31:17.710134 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 14 01:31:17.716609 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 14 01:31:17.773677 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 14 01:31:17.773771 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 14 01:31:17.773804 kernel: BTRFS info (device vda6): using free space tree May 14 01:31:17.782110 kernel: BTRFS info (device vda6): auto enabling async discard May 14 01:31:17.789094 kernel: BTRFS info (device vda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 14 01:31:17.801173 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 14 01:31:17.803721 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 14 01:31:17.826930 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 01:31:17.831180 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 01:31:17.872080 systemd-networkd[710]: lo: Link UP May 14 01:31:17.872086 systemd-networkd[710]: lo: Gained carrier May 14 01:31:17.873352 systemd-networkd[710]: Enumeration completed May 14 01:31:17.873513 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 01:31:17.874299 systemd-networkd[710]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 01:31:17.874303 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 01:31:17.875027 systemd[1]: Reached target network.target - Network. May 14 01:31:17.875717 systemd-networkd[710]: eth0: Link UP May 14 01:31:17.875720 systemd-networkd[710]: eth0: Gained carrier May 14 01:31:17.875727 systemd-networkd[710]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 01:31:17.886109 systemd-networkd[710]: eth0: DHCPv4 address 172.24.4.47/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 14 01:31:17.964501 ignition[685]: Ignition 2.20.0 May 14 01:31:17.964521 ignition[685]: Stage: fetch-offline May 14 01:31:17.966435 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 14 01:31:17.964570 ignition[685]: no configs at "/usr/lib/ignition/base.d" May 14 01:31:17.964586 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 14 01:31:17.964739 ignition[685]: parsed url from cmdline: "" May 14 01:31:17.971203 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 14 01:31:17.964744 ignition[685]: no config URL provided May 14 01:31:17.964751 ignition[685]: reading system config file "/usr/lib/ignition/user.ign" May 14 01:31:17.964761 ignition[685]: no config at "/usr/lib/ignition/user.ign" May 14 01:31:17.964766 ignition[685]: failed to fetch config: resource requires networking May 14 01:31:17.964966 ignition[685]: Ignition finished successfully May 14 01:31:17.998341 ignition[719]: Ignition 2.20.0 May 14 01:31:17.998363 ignition[719]: Stage: fetch May 14 01:31:17.998687 ignition[719]: no configs at "/usr/lib/ignition/base.d" May 14 01:31:17.998708 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 14 01:31:17.998885 ignition[719]: parsed url from cmdline: "" May 14 01:31:17.998893 ignition[719]: no config URL provided May 14 01:31:17.998903 ignition[719]: reading system config file "/usr/lib/ignition/user.ign" May 14 01:31:17.998919 ignition[719]: no config at "/usr/lib/ignition/user.ign" May 14 01:31:18.001316 ignition[719]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... May 14 01:31:18.001357 ignition[719]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... May 14 01:31:18.001515 ignition[719]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 May 14 01:31:18.312977 ignition[719]: GET result: OK May 14 01:31:18.313222 ignition[719]: parsing config with SHA512: abe9100a374f10dff1352e9804a0205fe8e6debc30d161bc7d7e283e7c9acc2cf7e7df933bd471ff076994b96d95128dd08a89d0fa6ddc0f585d88aafcef6a7d May 14 01:31:18.324971 unknown[719]: fetched base config from "system" May 14 01:31:18.325018 unknown[719]: fetched base config from "system" May 14 01:31:18.325034 unknown[719]: fetched user config from "openstack" May 14 01:31:18.328633 ignition[719]: fetch: fetch complete May 14 01:31:18.328647 ignition[719]: fetch: fetch passed May 14 01:31:18.332057 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 14 01:31:18.328757 ignition[719]: Ignition finished successfully May 14 01:31:18.337382 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 14 01:31:18.384055 ignition[726]: Ignition 2.20.0 May 14 01:31:18.384126 ignition[726]: Stage: kargs May 14 01:31:18.384559 ignition[726]: no configs at "/usr/lib/ignition/base.d" May 14 01:31:18.384586 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 14 01:31:18.389009 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 14 01:31:18.386854 ignition[726]: kargs: kargs passed May 14 01:31:18.386960 ignition[726]: Ignition finished successfully May 14 01:31:18.394295 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 14 01:31:18.429912 ignition[732]: Ignition 2.20.0 May 14 01:31:18.429942 ignition[732]: Stage: disks May 14 01:31:18.432363 ignition[732]: no configs at "/usr/lib/ignition/base.d" May 14 01:31:18.432417 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 14 01:31:18.439425 ignition[732]: disks: disks passed May 14 01:31:18.439680 ignition[732]: Ignition finished successfully May 14 01:31:18.441595 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 14 01:31:18.443901 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 14 01:31:18.445629 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 14 01:31:18.448481 systemd[1]: Reached target local-fs.target - Local File Systems. 
May 14 01:31:18.451278 systemd[1]: Reached target sysinit.target - System Initialization. May 14 01:31:18.453729 systemd[1]: Reached target basic.target - Basic System. May 14 01:31:18.458198 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 14 01:31:18.508842 systemd-fsck[740]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 14 01:31:18.523549 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 14 01:31:18.528315 systemd[1]: Mounting sysroot.mount - /sysroot... May 14 01:31:18.698114 kernel: EXT4-fs (vda9): mounted filesystem c413e98b-da35-46b1-9852-45706e1b1f52 r/w with ordered data mode. Quota mode: none. May 14 01:31:18.699411 systemd[1]: Mounted sysroot.mount - /sysroot. May 14 01:31:18.700976 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 14 01:31:18.704307 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 01:31:18.708149 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 14 01:31:18.709619 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 14 01:31:18.712212 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... May 14 01:31:18.714200 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 01:31:18.715212 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 14 01:31:18.722730 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 14 01:31:18.731215 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 14 01:31:18.741100 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (748) May 14 01:31:18.749102 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 14 01:31:18.767655 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 14 01:31:18.767742 kernel: BTRFS info (device vda6): using free space tree May 14 01:31:18.783125 kernel: BTRFS info (device vda6): auto enabling async discard May 14 01:31:18.787998 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 14 01:31:18.848095 initrd-setup-root[776]: cut: /sysroot/etc/passwd: No such file or directory May 14 01:31:18.856810 initrd-setup-root[784]: cut: /sysroot/etc/group: No such file or directory May 14 01:31:18.866621 initrd-setup-root[791]: cut: /sysroot/etc/shadow: No such file or directory May 14 01:31:18.875809 initrd-setup-root[798]: cut: /sysroot/etc/gshadow: No such file or directory May 14 01:31:18.974306 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 14 01:31:18.976332 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 14 01:31:18.979213 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 14 01:31:18.995921 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 14 01:31:19.003101 kernel: BTRFS info (device vda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 14 01:31:19.026763 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 14 01:31:19.034100 ignition[865]: INFO : Ignition 2.20.0 May 14 01:31:19.034100 ignition[865]: INFO : Stage: mount May 14 01:31:19.035987 ignition[865]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 01:31:19.035987 ignition[865]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 14 01:31:19.035987 ignition[865]: INFO : mount: mount passed May 14 01:31:19.035987 ignition[865]: INFO : Ignition finished successfully May 14 01:31:19.036375 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 14 01:31:19.111284 systemd-networkd[710]: eth0: Gained IPv6LL May 14 01:31:25.928299 coreos-metadata[750]: May 14 01:31:25.928 WARN failed to locate config-drive, using the metadata service API instead May 14 01:31:25.970336 coreos-metadata[750]: May 14 01:31:25.970 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 14 01:31:25.987015 coreos-metadata[750]: May 14 01:31:25.986 INFO Fetch successful May 14 01:31:25.988561 coreos-metadata[750]: May 14 01:31:25.987 INFO wrote hostname ci-4284-0-0-n-af44d751a9.novalocal to /sysroot/etc/hostname May 14 01:31:25.991408 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. May 14 01:31:25.991706 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. May 14 01:31:25.999362 systemd[1]: Starting ignition-files.service - Ignition (files)... May 14 01:31:26.035996 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 01:31:26.072205 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (882) May 14 01:31:26.080489 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 14 01:31:26.080564 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 14 01:31:26.084588 kernel: BTRFS info (device vda6): using free space tree May 14 01:31:26.096206 kernel: BTRFS info (device vda6): auto enabling async discard May 14 01:31:26.101671 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 14 01:31:26.148806 ignition[899]: INFO : Ignition 2.20.0
May 14 01:31:26.148806 ignition[899]: INFO : Stage: files
May 14 01:31:26.151672 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 01:31:26.151672 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 14 01:31:26.151672 ignition[899]: DEBUG : files: compiled without relabeling support, skipping
May 14 01:31:26.157400 ignition[899]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 01:31:26.157400 ignition[899]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 01:31:26.161377 ignition[899]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 01:31:26.161377 ignition[899]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 01:31:26.165333 ignition[899]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 01:31:26.161498 unknown[899]: wrote ssh authorized keys file for user: core
May 14 01:31:26.169667 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 14 01:31:26.169667 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 14 01:31:26.231416 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 01:31:26.530820 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 14 01:31:26.532472 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 14 01:31:26.532472 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 14 01:31:26.532472 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 01:31:26.532472 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 14 01:31:26.532472 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 01:31:26.532472 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 01:31:26.532472 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 01:31:26.547197 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 01:31:26.547197 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 14 01:31:26.547197 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 14 01:31:26.547197 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 01:31:26.547197 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 01:31:26.547197 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 01:31:26.547197 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
May 14 01:31:27.233752 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 14 01:31:28.722589 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 01:31:28.722589 ignition[899]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 14 01:31:28.727267 ignition[899]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 01:31:28.727267 ignition[899]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 01:31:28.727267 ignition[899]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 14 01:31:28.727267 ignition[899]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 14 01:31:28.727267 ignition[899]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 14 01:31:28.727267 ignition[899]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 14 01:31:28.727267 ignition[899]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 14 01:31:28.727267 ignition[899]: INFO : files: files passed
May 14 01:31:28.727267 ignition[899]: INFO : Ignition finished successfully
May 14 01:31:28.727737 systemd[1]: Finished ignition-files.service - Ignition (files).
May 14 01:31:28.737308 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 14 01:31:28.744280 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 14 01:31:28.754782 systemd[1]: ignition-quench.service: Deactivated successfully.
May 14 01:31:28.755013 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 14 01:31:28.759637 initrd-setup-root-after-ignition[930]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 01:31:28.759637 initrd-setup-root-after-ignition[930]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 14 01:31:28.762801 initrd-setup-root-after-ignition[934]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 01:31:28.766855 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 01:31:28.769510 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 14 01:31:28.773196 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 14 01:31:28.818835 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 14 01:31:28.819039 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 14 01:31:28.821403 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 14 01:31:28.832646 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 14 01:31:28.834626 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 14 01:31:28.838543 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 14 01:31:28.883227 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 01:31:28.888756 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 14 01:31:28.928402 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 14 01:31:28.930119 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 01:31:28.933352 systemd[1]: Stopped target timers.target - Timer Units.
May 14 01:31:28.936287 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 14 01:31:28.936598 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 01:31:28.939816 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 14 01:31:28.941845 systemd[1]: Stopped target basic.target - Basic System.
May 14 01:31:28.944825 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 14 01:31:28.947431 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 01:31:28.950155 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 14 01:31:28.952929 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 14 01:31:28.955832 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 01:31:28.958906 systemd[1]: Stopped target sysinit.target - System Initialization.
May 14 01:31:28.961758 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 14 01:31:28.964706 systemd[1]: Stopped target swap.target - Swaps.
May 14 01:31:28.967412 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 14 01:31:28.967698 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 14 01:31:28.970857 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 14 01:31:28.972786 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 01:31:28.975271 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 14 01:31:28.977782 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 01:31:28.980125 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 14 01:31:28.980522 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 14 01:31:28.983857 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 14 01:31:28.984220 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 01:31:28.985952 systemd[1]: ignition-files.service: Deactivated successfully.
May 14 01:31:28.986269 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 14 01:31:28.992546 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 14 01:31:28.994708 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 14 01:31:28.995217 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 01:31:28.999574 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 14 01:31:29.002231 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 14 01:31:29.002649 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 01:31:29.009339 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 14 01:31:29.011038 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 01:31:29.023922 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 14 01:31:29.024009 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 14 01:31:29.036184 ignition[954]: INFO : Ignition 2.20.0
May 14 01:31:29.038339 ignition[954]: INFO : Stage: umount
May 14 01:31:29.038339 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 01:31:29.038339 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 14 01:31:29.038339 ignition[954]: INFO : umount: umount passed
May 14 01:31:29.038339 ignition[954]: INFO : Ignition finished successfully
May 14 01:31:29.040373 systemd[1]: ignition-mount.service: Deactivated successfully.
May 14 01:31:29.041110 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 14 01:31:29.042794 systemd[1]: ignition-disks.service: Deactivated successfully.
May 14 01:31:29.042870 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 14 01:31:29.043474 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 14 01:31:29.043518 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 14 01:31:29.046288 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 14 01:31:29.046331 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 14 01:31:29.047488 systemd[1]: Stopped target network.target - Network.
May 14 01:31:29.048784 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 14 01:31:29.048834 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 01:31:29.049901 systemd[1]: Stopped target paths.target - Path Units.
May 14 01:31:29.051008 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 14 01:31:29.053041 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 01:31:29.053633 systemd[1]: Stopped target slices.target - Slice Units.
May 14 01:31:29.054637 systemd[1]: Stopped target sockets.target - Socket Units.
May 14 01:31:29.055760 systemd[1]: iscsid.socket: Deactivated successfully.
May 14 01:31:29.055798 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 14 01:31:29.057275 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 14 01:31:29.057309 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 01:31:29.058261 systemd[1]: ignition-setup.service: Deactivated successfully.
May 14 01:31:29.058305 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 14 01:31:29.059374 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 14 01:31:29.059415 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 14 01:31:29.060879 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 14 01:31:29.062010 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 14 01:31:29.065247 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 14 01:31:29.067149 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 14 01:31:29.067253 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 14 01:31:29.070849 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 14 01:31:29.072055 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 14 01:31:29.072397 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 01:31:29.074987 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 14 01:31:29.075233 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 14 01:31:29.075335 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 14 01:31:29.077384 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 14 01:31:29.077766 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 14 01:31:29.077959 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 14 01:31:29.080136 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 14 01:31:29.084953 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 14 01:31:29.085022 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 01:31:29.086770 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 01:31:29.086818 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 01:31:29.088178 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 14 01:31:29.088221 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 14 01:31:29.088952 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 01:31:29.092096 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 01:31:29.102095 systemd[1]: network-cleanup.service: Deactivated successfully.
May 14 01:31:29.102224 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 14 01:31:29.103448 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 14 01:31:29.103583 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 01:31:29.105459 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 14 01:31:29.105510 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 14 01:31:29.106262 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 14 01:31:29.106295 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 01:31:29.107443 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 14 01:31:29.107488 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 14 01:31:29.109131 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 14 01:31:29.109175 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 14 01:31:29.110296 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 01:31:29.110340 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 01:31:29.114176 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 14 01:31:29.114939 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 14 01:31:29.114992 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 01:31:29.116368 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 01:31:29.116413 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 01:31:29.130791 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 14 01:31:29.130902 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 14 01:31:29.344708 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 14 01:31:29.344936 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 14 01:31:29.348636 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 14 01:31:29.350423 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 14 01:31:29.350551 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 14 01:31:29.356342 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 14 01:31:29.391337 systemd[1]: Switching root.
May 14 01:31:29.428886 systemd-journald[184]: Journal stopped
May 14 01:31:30.993586 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
May 14 01:31:30.993647 kernel: SELinux: policy capability network_peer_controls=1
May 14 01:31:30.993667 kernel: SELinux: policy capability open_perms=1
May 14 01:31:30.993683 kernel: SELinux: policy capability extended_socket_class=1
May 14 01:31:30.993699 kernel: SELinux: policy capability always_check_network=0
May 14 01:31:30.993710 kernel: SELinux: policy capability cgroup_seclabel=1
May 14 01:31:30.993723 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 14 01:31:30.993734 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 14 01:31:30.993746 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 14 01:31:30.993759 kernel: audit: type=1403 audit(1747186289.848:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 14 01:31:30.993777 systemd[1]: Successfully loaded SELinux policy in 81.356ms.
May 14 01:31:30.993795 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 27.005ms.
May 14 01:31:30.993809 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 01:31:30.993822 systemd[1]: Detected virtualization kvm.
May 14 01:31:30.993835 systemd[1]: Detected architecture x86-64.
May 14 01:31:30.993847 systemd[1]: Detected first boot.
May 14 01:31:30.993859 systemd[1]: Hostname set to .
May 14 01:31:30.993872 systemd[1]: Initializing machine ID from VM UUID.
May 14 01:31:30.993884 zram_generator::config[1000]: No configuration found.
May 14 01:31:30.993900 kernel: Guest personality initialized and is inactive
May 14 01:31:30.993912 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 14 01:31:30.993923 kernel: Initialized host personality
May 14 01:31:30.993935 kernel: NET: Registered PF_VSOCK protocol family
May 14 01:31:30.993947 systemd[1]: Populated /etc with preset unit settings.
May 14 01:31:30.993959 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 14 01:31:30.993972 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 14 01:31:30.993984 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 14 01:31:30.993998 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
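The journal prefixes make it easy to measure how long the root switch took: about 1.6 s elapse between "Switching root." and the first messages from the new system manager. A minimal sketch of that arithmetic (the year is an assumption, since the prefixes carry none, and there is no date-rollover handling):

```python
# Compute the gap between two journal-prefix timestamps from the log above.
from datetime import datetime

def ts(stamp: str) -> datetime:
    # Parses a prefix like "May 14 01:31:29.391337"; year is assumed.
    return datetime.strptime("2025 " + stamp, "%Y %b %d %H:%M:%S.%f")

gap = ts("May 14 01:31:30.993586") - ts("May 14 01:31:29.391337")
print(gap.total_seconds())  # ~1.60 s from "Switching root." to the new journal
```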
May 14 01:31:30.994011 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 14 01:31:30.994023 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 14 01:31:30.994036 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 14 01:31:30.994049 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 14 01:31:30.995022 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 14 01:31:30.995042 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 14 01:31:30.995055 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 14 01:31:30.995132 systemd[1]: Created slice user.slice - User and Session Slice.
May 14 01:31:30.995152 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 01:31:30.995165 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 01:31:30.995177 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 14 01:31:30.995189 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 14 01:31:30.995202 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 14 01:31:30.995214 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 01:31:30.995228 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 14 01:31:30.995241 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 01:31:30.995253 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 14 01:31:30.995265 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 14 01:31:30.995278 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 14 01:31:30.995290 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 14 01:31:30.995302 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 01:31:30.995314 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 01:31:30.995326 systemd[1]: Reached target slices.target - Slice Units.
May 14 01:31:30.995340 systemd[1]: Reached target swap.target - Swaps.
May 14 01:31:30.995351 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 14 01:31:30.995364 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 14 01:31:30.995376 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 14 01:31:30.995388 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 01:31:30.995401 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 01:31:30.995412 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 01:31:30.995425 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 14 01:31:30.995437 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 14 01:31:30.995452 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 14 01:31:30.995464 systemd[1]: Mounting media.mount - External Media Directory...
May 14 01:31:30.995476 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 01:31:30.995488 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 14 01:31:30.995500 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 14 01:31:30.995512 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 14 01:31:30.995526 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 14 01:31:30.995538 systemd[1]: Reached target machines.target - Containers.
May 14 01:31:30.995554 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 14 01:31:30.995566 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 01:31:30.995579 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 01:31:30.995591 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 14 01:31:30.995603 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 01:31:30.995616 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 01:31:30.995628 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 01:31:30.995641 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 14 01:31:30.995653 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 01:31:30.995669 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 14 01:31:30.995682 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 14 01:31:30.995694 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 14 01:31:30.995707 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 14 01:31:30.995719 systemd[1]: Stopped systemd-fsck-usr.service.
May 14 01:31:30.995732 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 01:31:30.995745 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 01:31:30.995757 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 01:31:30.995771 kernel: fuse: init (API version 7.39)
May 14 01:31:30.995783 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 01:31:30.995795 kernel: loop: module loaded
May 14 01:31:30.995808 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 14 01:31:30.995821 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 14 01:31:30.995833 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 01:31:30.995845 systemd[1]: verity-setup.service: Deactivated successfully.
May 14 01:31:30.995857 systemd[1]: Stopped verity-setup.service.
May 14 01:31:30.995872 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
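Several units above are skipped, not failed, because a Condition* check was unmet (e.g. ConditionVirtualization=xen on this KVM guest, or a negated ConditionPathExists). A toy sketch of that evaluation logic, covering only the two condition types seen here:

```python
# Toy model of systemd's Condition* handling: an unmet condition skips the
# unit rather than failing it. Real systemd supports many more conditions.
import os

def condition_met(kind: str, value: str, virt: str = "kvm") -> bool:
    negate = value.startswith("!")
    value = value.lstrip("!")
    if kind == "ConditionPathExists":
        result = os.path.exists(value)
    elif kind == "ConditionVirtualization":
        result = virt == value
    else:
        raise NotImplementedError(kind)
    return result != negate

# proc-xen.mount on this KVM guest -> False -> skipped, as logged
print(condition_met("ConditionVirtualization", "xen"))
# var-lib-machines.mount is skipped unless /var/lib/machines.raw exists
print(condition_met("ConditionPathExists", "/var/lib/machines.raw"))
```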
May 14 01:31:30.995885 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 14 01:31:30.995897 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 14 01:31:30.995910 systemd[1]: Mounted media.mount - External Media Directory.
May 14 01:31:30.995923 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 14 01:31:30.995937 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 14 01:31:30.995950 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 14 01:31:30.995962 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 01:31:30.995974 kernel: ACPI: bus type drm_connector registered
May 14 01:31:30.995986 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 14 01:31:30.995998 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 14 01:31:30.996012 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 01:31:30.996025 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 01:31:30.996038 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 01:31:30.996050 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 01:31:30.996078 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 14 01:31:30.996091 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 01:31:30.996104 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 01:31:30.996116 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 01:31:30.996128 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 14 01:31:30.996143 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 01:31:30.996155 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 01:31:30.996168 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 01:31:30.996198 systemd-journald[1097]: Collecting audit messages is disabled.
May 14 01:31:30.996226 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 01:31:30.996255 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 14 01:31:30.996269 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 14 01:31:30.996282 systemd-journald[1097]: Journal started
May 14 01:31:30.996314 systemd-journald[1097]: Runtime Journal (/run/log/journal/20645771f0ea477aa99f18433c4291cd) is 8M, max 78.2M, 70.2M free.
May 14 01:31:30.585256 systemd[1]: Queued start job for default target multi-user.target.
May 14 01:31:30.598278 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 14 01:31:30.598781 systemd[1]: systemd-journald.service: Deactivated successfully.
May 14 01:31:30.999165 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 01:31:31.014559 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 01:31:31.019151 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 01:31:31.024154 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 01:31:31.024745 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 01:31:31.024788 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 01:31:31.026489 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 14 01:31:31.030027 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 14 01:31:31.033934 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 01:31:31.034587 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 01:31:31.037203 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 01:31:31.039287 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 01:31:31.039902 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 01:31:31.041483 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 01:31:31.043815 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 01:31:31.048052 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 01:31:31.050570 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 01:31:31.054184 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 01:31:31.055790 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 01:31:31.058343 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 01:31:31.061452 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 01:31:31.062397 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 14 01:31:31.070343 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 14 01:31:31.073924 systemd-journald[1097]: Time spent on flushing to /var/log/journal/20645771f0ea477aa99f18433c4291cd is 69.755ms for 957 entries.
May 14 01:31:31.073924 systemd-journald[1097]: System Journal (/var/log/journal/20645771f0ea477aa99f18433c4291cd) is 8M, max 584.8M, 576.8M free.
May 14 01:31:31.162018 systemd-journald[1097]: Received client request to flush runtime journal.
May 14 01:31:31.162104 kernel: loop0: detected capacity change from 0 to 8
May 14 01:31:31.162131 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 01:31:31.086023 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 01:31:31.089602 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 01:31:31.092841 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 14 01:31:31.128745 udevadm[1145]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 14 01:31:31.129886 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 01:31:31.164193 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 01:31:31.178098 kernel: loop1: detected capacity change from 0 to 109808
May 14 01:31:31.221574 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
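The journald flush report above works out to roughly 73 µs per flushed entry; straightforward arithmetic on the two numbers it printed:

```python
# "69.755ms for 957 entries" from the flush report above.
flush_ms, entries = 69.755, 957
print(round(flush_ms / entries * 1000, 1), "µs/entry")  # 72.9 µs/entry
```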
May 14 01:31:31.249914 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 01:31:31.255330 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 01:31:31.260427 kernel: loop2: detected capacity change from 0 to 205544
May 14 01:31:31.309429 systemd-tmpfiles[1160]: ACLs are not supported, ignoring.
May 14 01:31:31.309450 systemd-tmpfiles[1160]: ACLs are not supported, ignoring.
May 14 01:31:31.320131 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 01:31:31.337121 kernel: loop3: detected capacity change from 0 to 151640
May 14 01:31:31.398607 kernel: loop4: detected capacity change from 0 to 8
May 14 01:31:31.404164 kernel: loop5: detected capacity change from 0 to 109808
May 14 01:31:31.438099 kernel: loop6: detected capacity change from 0 to 205544
May 14 01:31:31.498104 kernel: loop7: detected capacity change from 0 to 151640
May 14 01:31:31.555237 (sd-merge)[1165]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
May 14 01:31:31.556481 (sd-merge)[1165]: Merged extensions into '/usr'.
May 14 01:31:31.569829 systemd[1]: Reload requested from client PID 1139 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 01:31:31.569851 systemd[1]: Reloading...
May 14 01:31:31.682110 zram_generator::config[1193]: No configuration found.
May 14 01:31:31.953215 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 01:31:32.039206 ldconfig[1134]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 14 01:31:32.042395 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 01:31:32.043205 systemd[1]: Reloading finished in 472 ms.
May 14 01:31:32.064905 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 14 01:31:32.065888 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 01:31:32.066747 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 01:31:32.077451 systemd[1]: Starting ensure-sysext.service...
May 14 01:31:32.081352 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 01:31:32.085438 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 01:31:32.105285 systemd[1]: Reload requested from client PID 1250 ('systemctl') (unit ensure-sysext.service)...
May 14 01:31:32.105408 systemd[1]: Reloading...
May 14 01:31:32.130262 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 01:31:32.130513 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 01:31:32.131690 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 01:31:32.132040 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
May 14 01:31:32.133981 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
May 14 01:31:32.139666 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot.
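The (sd-merge) lines above are systemd-sysext discovering and overlaying the four extension images, including the kubernetes.raw link Ignition created earlier. A rough sketch of the discovery step; the directory list here is abbreviated and an assumption, since real systemd-sysext checks additional locations and also accepts plain directory trees:

```python
# Sketch of extension-image discovery behind "Merged extensions into '/usr'":
# scan well-known directories for *.raw images whose /usr and /opt trees get
# overlaid onto the host.
from pathlib import Path

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def discover_extensions() -> list[Path]:
    found: list[Path] = []
    for d in SEARCH_DIRS:
        p = Path(d)
        if p.is_dir():
            found.extend(sorted(p.glob("*.raw")))
    return found

for image in discover_extensions():
    print(image)  # e.g. /etc/extensions/kubernetes.raw (the Ignition-made link)
```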
May 14 01:31:32.139774 systemd-tmpfiles[1251]: Skipping /boot
May 14 01:31:32.151875 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot.
May 14 01:31:32.152713 systemd-udevd[1252]: Using default interface naming scheme 'v255'.
May 14 01:31:32.154157 systemd-tmpfiles[1251]: Skipping /boot
May 14 01:31:32.219099 zram_generator::config[1284]: No configuration found.
May 14 01:31:32.264095 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1288)
May 14 01:31:32.400090 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 14 01:31:32.410083 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 14 01:31:32.434108 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 14 01:31:32.490098 kernel: ACPI: button: Power Button [PWRF]
May 14 01:31:32.499099 kernel: mousedev: PS/2 mouse device common for all mice
May 14 01:31:32.497998 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 01:31:32.529129 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
May 14 01:31:32.529218 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
May 14 01:31:32.533835 kernel: Console: switching to colour dummy device 80x25
May 14 01:31:32.533923 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 14 01:31:32.533942 kernel: [drm] features: -context_init
May 14 01:31:32.533961 kernel: [drm] number of scanouts: 1
May 14 01:31:32.533987 kernel: [drm] number of cap sets: 0
May 14 01:31:32.539110 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
May 14 01:31:32.542203 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
May 14 01:31:32.542290 kernel: Console: switching to colour frame buffer device 160x50
May 14 01:31:32.557109 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
May 14 01:31:32.617155 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 01:31:32.620033 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 14 01:31:32.620429 systemd[1]: Reloading finished in 514 ms.
May 14 01:31:32.634625 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 01:31:32.640984 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 01:31:32.694150 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 14 01:31:32.700082 systemd[1]: Finished ensure-sysext.service.
May 14 01:31:32.703724 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 01:31:32.706432 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 01:31:32.720185 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 01:31:32.720775 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 01:31:32.725326 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 14 01:31:32.729012 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 01:31:32.739238 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 01:31:32.745337 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 01:31:32.754474 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 01:31:32.758564 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 01:31:32.777262 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 14 01:31:32.777691 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 01:31:32.784267 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 01:31:32.787796 lvm[1373]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 14 01:31:32.791499 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 01:31:32.802660 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 01:31:32.811184 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 14 01:31:32.815146 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 01:31:32.825015 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 01:31:32.825303 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 01:31:32.827043 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 14 01:31:32.828652 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 01:31:32.828918 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 01:31:32.831113 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 01:31:32.832305 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 01:31:32.833412 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 01:31:32.833923 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 01:31:32.836957 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 01:31:32.842310 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 01:31:32.857310 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 01:31:32.861666 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 14 01:31:32.874309 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 01:31:32.878711 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 14 01:31:32.884775 augenrules[1412]: No rules
May 14 01:31:32.882955 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 01:31:32.883246 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 01:31:32.886106 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 14 01:31:32.902461 lvm[1416]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 14 01:31:32.895311 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 01:31:32.898261 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 01:31:32.898555 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 01:31:32.905148 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 01:31:32.910979 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 01:31:32.931801 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 14 01:31:32.955022 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 01:31:32.960434 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 01:31:32.969328 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 01:31:32.982430 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 01:31:33.053895 systemd-networkd[1387]: lo: Link UP
May 14 01:31:33.053904 systemd-networkd[1387]: lo: Gained carrier
May 14 01:31:33.055199 systemd-networkd[1387]: Enumeration completed
May 14 01:31:33.055296 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 01:31:33.055538 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 01:31:33.055543 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 01:31:33.056702 systemd-networkd[1387]: eth0: Link UP
May 14 01:31:33.056706 systemd-networkd[1387]: eth0: Gained carrier
May 14 01:31:33.056719 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 01:31:33.061198 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 14 01:31:33.065854 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 14 01:31:33.070178 systemd-networkd[1387]: eth0: DHCPv4 address 172.24.4.47/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 14 01:31:33.099131 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 14 01:31:33.105375 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 14 01:31:33.106138 systemd[1]: Reached target time-set.target - System Time Set.
May 14 01:31:33.114972 systemd-resolved[1388]: Positive Trust Anchors:
May 14 01:31:33.114996 systemd-resolved[1388]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 01:31:33.115043 systemd-resolved[1388]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 01:31:33.121153 systemd-resolved[1388]: Using system hostname 'ci-4284-0-0-n-af44d751a9.novalocal'.
May 14 01:31:33.122862 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 01:31:33.124016 systemd[1]: Reached target network.target - Network.
May 14 01:31:33.124549 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 01:31:33.124990 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 01:31:33.127511 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 14 01:31:33.128045 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 14 01:31:33.128763 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 14 01:31:33.131083 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 14 01:31:33.133678 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 14 01:31:33.136388 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 14 01:31:33.136618 systemd[1]: Reached target paths.target - Path Units.
May 14 01:31:33.139319 systemd[1]: Reached target timers.target - Timer Units.
May 14 01:31:33.143316 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 14 01:31:33.147856 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 14 01:31:33.154309 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 14 01:31:33.160332 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 14 01:31:33.162180 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 14 01:31:33.170819 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 14 01:31:33.172030 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 14 01:31:33.174903 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 14 01:31:33.176918 systemd[1]: Reached target sockets.target - Socket Units.
May 14 01:31:33.177462 systemd[1]: Reached target basic.target - Basic System.
May 14 01:31:33.178049 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 14 01:31:33.180157 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 14 01:31:33.181623 systemd[1]: Starting containerd.service - containerd container runtime...
May 14 01:31:33.187481 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
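The positive trust anchor logged above is the DNS root zone's DS record (RFC 4034), which systemd-resolved ships built in. Splitting it into its fields:

```python
# The root DS record from the log, broken into its RFC 4034 fields.
record = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
name, _class, _rtype, key_tag, algorithm, digest_type, digest = record.split()
print(key_tag)                  # 20326 -> key tag of the root KSK (KSK-2017)
print(algorithm)                # 8     -> RSA/SHA-256
print(digest_type)              # 2     -> SHA-256 digest of the root DNSKEY
print(len(digest) * 4, "bits")  # 256-bit digest
```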
May 14 01:31:33.192215 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 14 01:31:33.199287 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 14 01:31:33.210585 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 14 01:31:33.216683 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 14 01:31:33.225750 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 14 01:31:33.233242 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 14 01:31:33.234328 jq[1448]: false
May 14 01:31:33.237254 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 14 01:31:33.237273 systemd-timesyncd[1391]: Contacted time server 23.141.40.124:123 (0.flatcar.pool.ntp.org).
May 14 01:31:33.237335 systemd-timesyncd[1391]: Initial clock synchronization to Wed 2025-05-14 01:31:33.031865 UTC.
May 14 01:31:33.245946 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 14 01:31:33.256280 systemd[1]: Starting systemd-logind.service - User Login Management...
May 14 01:31:33.259174 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 14 01:31:33.260281 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 14 01:31:33.263977 systemd[1]: Starting update-engine.service - Update Engine...
May 14 01:31:33.273969 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 14 01:31:33.280759 dbus-daemon[1447]: [system] SELinux support is enabled
May 14 01:31:33.285285 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 14 01:31:33.295337 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 14 01:31:33.295573 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 14 01:31:33.307148 jq[1459]: true
May 14 01:31:33.307606 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 14 01:31:33.307855 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
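The two timesyncd lines above print a journal stamp of 01:31:33.237335 for a synchronization to 01:31:33.031865, a difference of about 0.21 s. Reading that difference as the approximate initial clock step is an assumption, since when journald stamps the entry relative to the step is not visible here; the arithmetic itself is just:

```python
# Difference between the entry's journal stamp and the synchronized time.
from datetime import datetime

logged = datetime.fromisoformat("2025-05-14 01:31:33.237335")
synced = datetime.fromisoformat("2025-05-14 01:31:33.031865")
print((logged - synced).total_seconds())  # ~0.205 s
```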
May 14 01:31:33.324713 extend-filesystems[1449]: Found loop4
May 14 01:31:33.338923 extend-filesystems[1449]: Found loop5
May 14 01:31:33.338923 extend-filesystems[1449]: Found loop6
May 14 01:31:33.338923 extend-filesystems[1449]: Found loop7
May 14 01:31:33.338923 extend-filesystems[1449]: Found vda
May 14 01:31:33.338923 extend-filesystems[1449]: Found vda1
May 14 01:31:33.338923 extend-filesystems[1449]: Found vda2
May 14 01:31:33.338923 extend-filesystems[1449]: Found vda3
May 14 01:31:33.338923 extend-filesystems[1449]: Found usr
May 14 01:31:33.338923 extend-filesystems[1449]: Found vda4
May 14 01:31:33.338923 extend-filesystems[1449]: Found vda6
May 14 01:31:33.338923 extend-filesystems[1449]: Found vda7
May 14 01:31:33.338923 extend-filesystems[1449]: Found vda9
May 14 01:31:33.338923 extend-filesystems[1449]: Checking size of /dev/vda9
May 14 01:31:33.475844 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
May 14 01:31:33.475908 kernel: EXT4-fs (vda9): resized filesystem to 2014203
May 14 01:31:33.475942 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1305)
May 14 01:31:33.476804 extend-filesystems[1449]: Resized partition /dev/vda9
May 14 01:31:33.342714 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 14 01:31:33.482346 update_engine[1458]: I20250514 01:31:33.339753 1458 main.cc:92] Flatcar Update Engine starting
May 14 01:31:33.482346 update_engine[1458]: I20250514 01:31:33.363906 1458 update_check_scheduler.cc:74] Next update check in 7m56s
May 14 01:31:33.482663 tar[1466]: linux-amd64/helm
May 14 01:31:33.488176 extend-filesystems[1486]: resize2fs 1.47.2 (1-Jan-2025)
May 14 01:31:33.488176 extend-filesystems[1486]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 14 01:31:33.488176 extend-filesystems[1486]: old_desc_blocks = 1, new_desc_blocks = 1
May 14 01:31:33.488176 extend-filesystems[1486]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
May 14 01:31:33.354155 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 14 01:31:33.517428 extend-filesystems[1449]: Resized filesystem in /dev/vda9
May 14 01:31:33.354187 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 14 01:31:33.525183 jq[1471]: true
May 14 01:31:33.354720 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 14 01:31:33.354737 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 14 01:31:33.363484 systemd[1]: motdgen.service: Deactivated successfully.
May 14 01:31:33.363696 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 14 01:31:33.377496 systemd[1]: Started update-engine.service - Update Engine.
May 14 01:31:33.406132 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 14 01:31:33.419972 (ntainerd)[1483]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 14 01:31:33.462114 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 14 01:31:33.462359 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
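In bytes, the EXT4 resize above grows /dev/vda9 from roughly 6.2 GiB to 7.7 GiB, given the 4 KiB block size resize2fs reports:

```python
# Block counts from "resizing filesystem from 1617920 to 2014203 blocks".
BLOCK = 4096  # "(4k) blocks" per the resize2fs output
for blocks in (1617920, 2014203):
    print(blocks, "blocks =", round(blocks * BLOCK / 2**30, 2), "GiB")
# 1617920 blocks = 6.17 GiB
# 2014203 blocks = 7.68 GiB
```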
May 14 01:31:33.535491 systemd-logind[1457]: New seat seat0.
May 14 01:31:33.579682 systemd-logind[1457]: Watching system buttons on /dev/input/event2 (Power Button)
May 14 01:31:33.581142 systemd-logind[1457]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 14 01:31:33.581382 systemd[1]: Started systemd-logind.service - User Login Management.
May 14 01:31:33.588750 bash[1505]: Updated "/home/core/.ssh/authorized_keys"
May 14 01:31:33.591106 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 14 01:31:33.597302 systemd[1]: Starting sshkeys.service...
May 14 01:31:33.660642 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 14 01:31:33.670528 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 14 01:31:33.771331 sshd_keygen[1464]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 14 01:31:33.808704 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 14 01:31:33.834522 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 14 01:31:33.846356 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 14 01:31:33.853669 systemd[1]: Started sshd@0-172.24.4.47:22-172.24.4.1:49796.service - OpenSSH per-connection server daemon (172.24.4.1:49796).
May 14 01:31:33.890606 systemd[1]: issuegen.service: Deactivated successfully.
May 14 01:31:33.890815 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 14 01:31:33.902876 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 14 01:31:33.913215 containerd[1483]: time="2025-05-14T01:31:33Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 14 01:31:33.914106 containerd[1483]: time="2025-05-14T01:31:33.914037141Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
May 14 01:31:33.940309 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 14 01:31:33.945604 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 14 01:31:33.953689 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 14 01:31:33.954659 systemd[1]: Reached target getty.target - Login Prompts.
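The per-connection unit name sshd@0-172.24.4.47:22-172.24.4.1:49796.service above encodes a connection counter plus the local and peer address:port pairs; the local address is the DHCP lease acquired earlier. A parsing sketch, valid for IPv4 only (IPv6 instance names contain extra separators and would need different handling):

```python
# Decode a socket-activated sshd instance name of the IPv4 form seen above.
instance = "0-172.24.4.47:22-172.24.4.1:49796"
counter, local, peer = instance.split("-")
print(counter)  # 0                -> connection counter
print(local)    # 172.24.4.47:22   -> local address:port (the DHCP lease)
print(peer)     # 172.24.4.1:49796 -> peer address:port
```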
May 14 01:31:33.965007 containerd[1483]: time="2025-05-14T01:31:33.964815596Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.223µs"
May 14 01:31:33.965007 containerd[1483]: time="2025-05-14T01:31:33.964862765Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 14 01:31:33.965007 containerd[1483]: time="2025-05-14T01:31:33.964884906Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 14 01:31:33.965193 containerd[1483]: time="2025-05-14T01:31:33.965055166Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 14 01:31:33.965193 containerd[1483]: time="2025-05-14T01:31:33.965097996Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 14 01:31:33.965193 containerd[1483]: time="2025-05-14T01:31:33.965127221Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 14 01:31:33.965269 containerd[1483]: time="2025-05-14T01:31:33.965191812Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 14 01:31:33.965269 containerd[1483]: time="2025-05-14T01:31:33.965207962Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 14 01:31:33.965792 containerd[1483]: time="2025-05-14T01:31:33.965433405Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 14 01:31:33.965792 containerd[1483]: time="2025-05-14T01:31:33.965456248Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 14 01:31:33.965792 containerd[1483]: time="2025-05-14T01:31:33.965468391Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 14 01:31:33.965792 containerd[1483]: time="2025-05-14T01:31:33.965478169Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 14 01:31:33.965792 containerd[1483]: time="2025-05-14T01:31:33.965554101Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 14 01:31:33.965792 containerd[1483]: time="2025-05-14T01:31:33.965748606Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 14 01:31:33.965792 containerd[1483]: time="2025-05-14T01:31:33.965779594Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 14 01:31:33.965792 containerd[1483]: time="2025-05-14T01:31:33.965798299Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 14 01:31:33.966003 containerd[1483]: time="2025-05-14T01:31:33.965833415Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 14 01:31:33.966726 containerd[1483]: time="2025-05-14T01:31:33.966488113Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 14 01:31:33.966726 containerd[1483]: time="2025-05-14T01:31:33.966558936Z" level=info msg="metadata content store policy set" policy=shared
May 14 01:31:33.975091 containerd[1483]: time="2025-05-14T01:31:33.973567941Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 14 01:31:33.975091 containerd[1483]: time="2025-05-14T01:31:33.973614509Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 14 01:31:33.975091 containerd[1483]: time="2025-05-14T01:31:33.973631470Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 14 01:31:33.975091 containerd[1483]: time="2025-05-14T01:31:33.973646939Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 14 01:31:33.975091 containerd[1483]: time="2025-05-14T01:31:33.973659343Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 14 01:31:33.975091 containerd[1483]: time="2025-05-14T01:31:33.973670724Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 14 01:31:33.975091 containerd[1483]: time="2025-05-14T01:31:33.973683408Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 14 01:31:33.975091 containerd[1483]: time="2025-05-14T01:31:33.973696723Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 14 01:31:33.975091 containerd[1483]: time="2025-05-14T01:31:33.973707864Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 14 01:31:33.975091 containerd[1483]: time="2025-05-14T01:31:33.973718774Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 14 01:31:33.975091 containerd[1483]: time="2025-05-14T01:31:33.973735786Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 14 01:31:33.975091 containerd[1483]: time="2025-05-14T01:31:33.973751646Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 14 01:31:33.975091 containerd[1483]: time="2025-05-14T01:31:33.973853647Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 14 01:31:33.975091 containerd[1483]: time="2025-05-14T01:31:33.973875879Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 14 01:31:33.975436 containerd[1483]: time="2025-05-14T01:31:33.973888853Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 14 01:31:33.975436 containerd[1483]: time="2025-05-14T01:31:33.973901216Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 14 01:31:33.975436 containerd[1483]: time="2025-05-14T01:31:33.973912878Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 14 01:31:33.975436 containerd[1483]: time="2025-05-14T01:31:33.973923668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 14 01:31:33.975436 containerd[1483]: time="2025-05-14T01:31:33.973935631Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 14 01:31:33.975436 containerd[1483]: time="2025-05-14T01:31:33.973946551Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 14 01:31:33.975436 containerd[1483]: time="2025-05-14T01:31:33.973958504Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 14 01:31:33.975436 containerd[1483]: time="2025-05-14T01:31:33.973970907Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 14 01:31:33.975436 containerd[1483]: time="2025-05-14T01:31:33.973987819Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 14 01:31:33.975436 containerd[1483]: time="2025-05-14T01:31:33.974046359Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 14 01:31:33.975436 containerd[1483]: time="2025-05-14T01:31:33.974090251Z" level=info msg="Start snapshots syncer"
May 14 01:31:33.975436 containerd[1483]: time="2025-05-14T01:31:33.974122592Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 14 01:31:33.975722 containerd[1483]: time="2025-05-14T01:31:33.974373392Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 14 01:31:33.975722 containerd[1483]: time="2025-05-14T01:31:33.974433435Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 14 01:31:33.975879 containerd[1483]: time="2025-05-14T01:31:33.974499058Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 14 01:31:33.975879 containerd[1483]: time="2025-05-14T01:31:33.974592262Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 14 01:31:33.975879 containerd[1483]: time="2025-05-14T01:31:33.974622890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 14 01:31:33.975879 containerd[1483]: time="2025-05-14T01:31:33.974641094Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 14 01:31:33.975879 containerd[1483]: time="2025-05-14T01:31:33.974654569Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 14 01:31:33.975879 containerd[1483]: time="2025-05-14T01:31:33.974668405Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 14 01:31:33.975879 containerd[1483]: time="2025-05-14T01:31:33.974682492Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 14 01:31:33.975879 containerd[1483]: time="2025-05-14T01:31:33.974695536Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 14 01:31:33.975879 containerd[1483]: time="2025-05-14T01:31:33.974722336Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 14 01:31:33.975879 containerd[1483]: time="2025-05-14T01:31:33.974737224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 14 01:31:33.975879 containerd[1483]: time="2025-05-14T01:31:33.974750659Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 14 01:31:33.975879 containerd[1483]: time="2025-05-14T01:31:33.974789232Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 14 01:31:33.975879 containerd[1483]: time="2025-05-14T01:31:33.974804691Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 14 01:31:33.975879 containerd[1483]: time="2025-05-14T01:31:33.974815641Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 14 01:31:33.976199 containerd[1483]: time="2025-05-14T01:31:33.974826822Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 14 01:31:33.976199 containerd[1483]: time="2025-05-14T01:31:33.974836881Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 14 01:31:33.976199 containerd[1483]: time="2025-05-14T01:31:33.974848553Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 14 01:31:33.976199 containerd[1483]: time="2025-05-14T01:31:33.974860035Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 14 01:31:33.976199 containerd[1483]: time="2025-05-14T01:31:33.974880603Z" level=info msg="runtime interface created"
May 14 01:31:33.976199 containerd[1483]: time="2025-05-14T01:31:33.974889830Z" level=info msg="created NRI interface"
May 14 01:31:33.976199 containerd[1483]: time="2025-05-14T01:31:33.974899509Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 14 01:31:33.976199 containerd[1483]: time="2025-05-14T01:31:33.974911712Z" level=info msg="Connect containerd service"
May 14 01:31:33.976199 containerd[1483]: time="2025-05-14T01:31:33.974939003Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 14 01:31:33.976960 containerd[1483]: time="2025-05-14T01:31:33.976938182Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 14 01:31:34.135602 containerd[1483]: time="2025-05-14T01:31:34.135503060Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 14 01:31:34.135854 containerd[1483]: time="2025-05-14T01:31:34.135809012Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 14 01:31:34.135969 containerd[1483]: time="2025-05-14T01:31:34.135732607Z" level=info msg="Start subscribing containerd event"
May 14 01:31:34.138908 containerd[1483]: time="2025-05-14T01:31:34.138877962Z" level=info msg="Start recovering state"
May 14 01:31:34.139730 containerd[1483]: time="2025-05-14T01:31:34.139713673Z" level=info msg="Start event monitor"
May 14 01:31:34.139810 containerd[1483]: time="2025-05-14T01:31:34.139796775Z" level=info msg="Start cni network conf syncer for default"
May 14 01:31:34.139872 containerd[1483]: time="2025-05-14T01:31:34.139849966Z" level=info msg="Start streaming server"
May 14 01:31:34.139943 containerd[1483]: time="2025-05-14T01:31:34.139931252Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 14 01:31:34.140026 containerd[1483]: time="2025-05-14T01:31:34.140012323Z" level=info msg="runtime interface starting up..."
May 14 01:31:34.140107 containerd[1483]: time="2025-05-14T01:31:34.140089987Z" level=info msg="starting plugins..."
May 14 01:31:34.140178 containerd[1483]: time="2025-05-14T01:31:34.140165270Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 14 01:31:34.141002 systemd[1]: Started containerd.service - containerd container runtime.
May 14 01:31:34.147337 containerd[1483]: time="2025-05-14T01:31:34.147297910Z" level=info msg="containerd successfully booted in 0.234512s"
May 14 01:31:34.236383 tar[1466]: linux-amd64/LICENSE
May 14 01:31:34.237625 tar[1466]: linux-amd64/README.md
May 14 01:31:34.253734 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 14 01:31:34.407325 systemd-networkd[1387]: eth0: Gained IPv6LL
May 14 01:31:34.410846 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 14 01:31:34.417398 systemd[1]: Reached target network-online.target - Network is Online.
May 14 01:31:34.429699 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 01:31:34.438666 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 14 01:31:34.505354 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 14 01:31:34.881513 sshd[1531]: Accepted publickey for core from 172.24.4.1 port 49796 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:31:34.912113 sshd-session[1531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:31:34.947156 systemd-logind[1457]: New session 1 of user core.
May 14 01:31:34.951632 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
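containerd's "failed to load cni during init" error above is expected at this stage: /etc/cni/net.d is empty until a network plugin (or the cluster's bootstrap tooling) installs a config, and the "cni network conf syncer" will pick one up as soon as it appears. Purely for illustration, a conflist in the shape that directory expects could be written like this; the network name and subnet here are made up, a real cluster gets this file from its CNI plugin:

    import json
    import pathlib

    # Illustrative only: a minimal bridge conflist in CNI's documented format.
    conflist = {
        "cniVersion": "1.0.0",
        "name": "examplenet",  # hypothetical network name
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {"type": "host-local", "subnet": "10.88.0.0/16"},  # made-up subnet
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    path = pathlib.Path("/etc/cni/net.d/10-examplenet.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conflist, indent=2))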
May 14 01:31:34.957591 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 14 01:31:35.003184 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 14 01:31:35.014693 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 14 01:31:35.054666 (systemd)[1573]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 14 01:31:35.061667 systemd-logind[1457]: New session c1 of user core.
May 14 01:31:35.447102 systemd[1573]: Queued start job for default target default.target.
May 14 01:31:35.451373 systemd[1573]: Created slice app.slice - User Application Slice.
May 14 01:31:35.451644 systemd[1573]: Reached target paths.target - Paths.
May 14 01:31:35.451755 systemd[1573]: Reached target timers.target - Timers.
May 14 01:31:35.453181 systemd[1573]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 14 01:31:35.464235 systemd[1573]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 14 01:31:35.464475 systemd[1573]: Reached target sockets.target - Sockets.
May 14 01:31:35.464587 systemd[1573]: Reached target basic.target - Basic System.
May 14 01:31:35.464625 systemd[1573]: Reached target default.target - Main User Target.
May 14 01:31:35.464649 systemd[1573]: Startup finished in 389ms.
May 14 01:31:35.465403 systemd[1]: Started user@500.service - User Manager for UID 500.
May 14 01:31:35.481666 systemd[1]: Started session-1.scope - Session 1 of User core.
May 14 01:31:35.967756 systemd[1]: Started sshd@1-172.24.4.47:22-172.24.4.1:60242.service - OpenSSH per-connection server daemon (172.24.4.1:60242).
May 14 01:31:36.862814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 01:31:36.883014 (kubelet)[1593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 01:31:37.511585 sshd[1584]: Accepted publickey for core from 172.24.4.1 port 60242 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:31:37.513607 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:31:37.526009 systemd-logind[1457]: New session 2 of user core.
May 14 01:31:37.535347 systemd[1]: Started session-2.scope - Session 2 of User core.
May 14 01:31:38.152943 sshd[1598]: Connection closed by 172.24.4.1 port 60242
May 14 01:31:38.155679 sshd-session[1584]: pam_unix(sshd:session): session closed for user core
May 14 01:31:38.172445 systemd[1]: sshd@1-172.24.4.47:22-172.24.4.1:60242.service: Deactivated successfully.
May 14 01:31:38.176253 systemd[1]: session-2.scope: Deactivated successfully.
May 14 01:31:38.179516 systemd-logind[1457]: Session 2 logged out. Waiting for processes to exit.
May 14 01:31:38.185236 systemd[1]: Started sshd@2-172.24.4.47:22-172.24.4.1:60248.service - OpenSSH per-connection server daemon (172.24.4.1:60248).
May 14 01:31:38.194165 systemd-logind[1457]: Removed session 2.
May 14 01:31:39.028906 login[1541]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 14 01:31:39.049869 systemd-logind[1457]: New session 3 of user core.
May 14 01:31:39.053440 login[1540]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 14 01:31:39.055307 systemd[1]: Started session-3.scope - Session 3 of User core.
May 14 01:31:39.067762 systemd-logind[1457]: New session 4 of user core.
May 14 01:31:39.075605 systemd[1]: Started session-4.scope - Session 4 of User core.
May 14 01:31:39.123193 kubelet[1593]: E0514 01:31:39.123142 1593 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 01:31:39.126920 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 01:31:39.127134 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 01:31:39.127681 systemd[1]: kubelet.service: Consumed 2.231s CPU time, 238M memory peak.
May 14 01:31:39.367216 sshd[1603]: Accepted publickey for core from 172.24.4.1 port 60248 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:31:39.369737 sshd-session[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:31:39.380449 systemd-logind[1457]: New session 5 of user core.
May 14 01:31:39.394673 systemd[1]: Started session-5.scope - Session 5 of User core.
May 14 01:31:40.006988 sshd[1633]: Connection closed by 172.24.4.1 port 60248
May 14 01:31:40.008238 sshd-session[1603]: pam_unix(sshd:session): session closed for user core
May 14 01:31:40.016165 systemd[1]: sshd@2-172.24.4.47:22-172.24.4.1:60248.service: Deactivated successfully.
May 14 01:31:40.020585 systemd[1]: session-5.scope: Deactivated successfully.
May 14 01:31:40.022506 systemd-logind[1457]: Session 5 logged out. Waiting for processes to exit.
May 14 01:31:40.025757 systemd-logind[1457]: Removed session 5.
May 14 01:31:40.344629 coreos-metadata[1446]: May 14 01:31:40.344 WARN failed to locate config-drive, using the metadata service API instead
May 14 01:31:40.393399 coreos-metadata[1446]: May 14 01:31:40.393 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
May 14 01:31:40.651689 coreos-metadata[1446]: May 14 01:31:40.651 INFO Fetch successful
May 14 01:31:40.651689 coreos-metadata[1446]: May 14 01:31:40.651 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
May 14 01:31:40.666360 coreos-metadata[1446]: May 14 01:31:40.666 INFO Fetch successful
May 14 01:31:40.666360 coreos-metadata[1446]: May 14 01:31:40.666 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
May 14 01:31:40.679620 coreos-metadata[1446]: May 14 01:31:40.679 INFO Fetch successful
May 14 01:31:40.679620 coreos-metadata[1446]: May 14 01:31:40.679 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
May 14 01:31:40.693403 coreos-metadata[1446]: May 14 01:31:40.693 INFO Fetch successful
May 14 01:31:40.693537 coreos-metadata[1446]: May 14 01:31:40.693 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
May 14 01:31:40.708331 coreos-metadata[1446]: May 14 01:31:40.708 INFO Fetch successful
May 14 01:31:40.708331 coreos-metadata[1446]: May 14 01:31:40.708 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
May 14 01:31:40.724104 coreos-metadata[1446]: May 14 01:31:40.723 INFO Fetch successful
May 14 01:31:40.774806 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 14 01:31:40.780315 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
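Every kubelet start in this log dies the same way: /var/lib/kubelet/config.yaml does not exist, because the node has not yet been joined to a cluster (kubeadm init/join is what normally writes that file), so systemd just keeps restarting the unit. Purely to show the shape of the file the error message is asking for, a sketch that writes a minimal KubeletConfiguration; the two field values here are illustrative, not what kubeadm would generate for this host:

    import pathlib
    import textwrap

    # Minimal KubeletConfiguration; normally generated by kubeadm at join time.
    config = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: systemd        # matches SystemdCgroup=true in the CRI config above
        staticPodPath: /etc/kubernetes/manifests
    """)

    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(config)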
May 14 01:31:40.789383 coreos-metadata[1511]: May 14 01:31:40.789 WARN failed to locate config-drive, using the metadata service API instead
May 14 01:31:40.830612 coreos-metadata[1511]: May 14 01:31:40.830 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
May 14 01:31:40.848300 coreos-metadata[1511]: May 14 01:31:40.848 INFO Fetch successful
May 14 01:31:40.848300 coreos-metadata[1511]: May 14 01:31:40.848 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
May 14 01:31:40.865164 coreos-metadata[1511]: May 14 01:31:40.865 INFO Fetch successful
May 14 01:31:40.871033 unknown[1511]: wrote ssh authorized keys file for user: core
May 14 01:31:40.940257 update-ssh-keys[1648]: Updated "/home/core/.ssh/authorized_keys"
May 14 01:31:40.942980 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 14 01:31:40.946516 systemd[1]: Finished sshkeys.service.
May 14 01:31:40.952909 systemd[1]: Reached target multi-user.target - Multi-User System.
May 14 01:31:40.953592 systemd[1]: Startup finished in 1.268s (kernel) + 15.015s (initrd) + 11.186s (userspace) = 27.470s.
May 14 01:31:49.378668 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 14 01:31:49.381861 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 01:31:49.723269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 01:31:49.729376 (kubelet)[1658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 01:31:49.867323 kubelet[1658]: E0514 01:31:49.867199 1658 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 01:31:49.875111 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 01:31:49.875454 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 01:31:49.876266 systemd[1]: kubelet.service: Consumed 284ms CPU time, 96.4M memory peak.
May 14 01:31:49.959508 systemd[1]: Started sshd@3-172.24.4.47:22-172.24.4.1:43404.service - OpenSSH per-connection server daemon (172.24.4.1:43404).
May 14 01:31:51.102176 sshd[1667]: Accepted publickey for core from 172.24.4.1 port 43404 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:31:51.104846 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:31:51.115880 systemd-logind[1457]: New session 6 of user core.
May 14 01:31:51.128561 systemd[1]: Started session-6.scope - Session 6 of User core.
May 14 01:31:51.743671 sshd[1669]: Connection closed by 172.24.4.1 port 43404
May 14 01:31:51.744693 sshd-session[1667]: pam_unix(sshd:session): session closed for user core
May 14 01:31:51.764502 systemd[1]: sshd@3-172.24.4.47:22-172.24.4.1:43404.service: Deactivated successfully.
May 14 01:31:51.768520 systemd[1]: session-6.scope: Deactivated successfully.
May 14 01:31:51.770270 systemd-logind[1457]: Session 6 logged out. Waiting for processes to exit.
May 14 01:31:51.774697 systemd[1]: Started sshd@4-172.24.4.47:22-172.24.4.1:43418.service - OpenSSH per-connection server daemon (172.24.4.1:43418).
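Both coreos-metadata runs above fall back from the config drive to the EC2-compatible metadata service at 169.254.169.254 and log each endpoint as "Attempt #1", i.e. a fetch with retries. A stdlib-only Python sketch of that same fetch-with-retry pattern against the endpoints seen in the log; the retry policy here is invented for illustration:

    import time
    import urllib.request

    BASE = "http://169.254.169.254/latest/meta-data"

    def fetch(path: str, attempts: int = 3, delay: float = 1.0) -> str:
        """GET a metadata endpoint, retrying on failure (illustrative policy)."""
        url = f"{BASE}/{path}"
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    return resp.read().decode()
            except OSError:
                if attempt == attempts:
                    raise
                time.sleep(delay)
        raise RuntimeError("unreachable")

    # Endpoints fetched in the log above:
    # for ep in ("hostname", "instance-id", "instance-type", "local-ipv4", "public-ipv4"):
    #     print(ep, "=", fetch(ep))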
May 14 01:31:51.777192 systemd-logind[1457]: Removed session 6.
May 14 01:31:52.906839 sshd[1674]: Accepted publickey for core from 172.24.4.1 port 43418 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:31:52.909581 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:31:52.922190 systemd-logind[1457]: New session 7 of user core.
May 14 01:31:52.929395 systemd[1]: Started session-7.scope - Session 7 of User core.
May 14 01:31:53.550109 sshd[1677]: Connection closed by 172.24.4.1 port 43418
May 14 01:31:53.552238 sshd-session[1674]: pam_unix(sshd:session): session closed for user core
May 14 01:31:53.563583 systemd[1]: sshd@4-172.24.4.47:22-172.24.4.1:43418.service: Deactivated successfully.
May 14 01:31:53.567828 systemd[1]: session-7.scope: Deactivated successfully.
May 14 01:31:53.569870 systemd-logind[1457]: Session 7 logged out. Waiting for processes to exit.
May 14 01:31:53.574794 systemd[1]: Started sshd@5-172.24.4.47:22-172.24.4.1:37886.service - OpenSSH per-connection server daemon (172.24.4.1:37886).
May 14 01:31:53.578251 systemd-logind[1457]: Removed session 7.
May 14 01:31:54.866816 sshd[1682]: Accepted publickey for core from 172.24.4.1 port 37886 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:31:54.869636 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:31:54.880185 systemd-logind[1457]: New session 8 of user core.
May 14 01:31:54.889372 systemd[1]: Started session-8.scope - Session 8 of User core.
May 14 01:31:55.509226 sshd[1685]: Connection closed by 172.24.4.1 port 37886
May 14 01:31:55.509000 sshd-session[1682]: pam_unix(sshd:session): session closed for user core
May 14 01:31:55.531771 systemd[1]: sshd@5-172.24.4.47:22-172.24.4.1:37886.service: Deactivated successfully.
May 14 01:31:55.535557 systemd[1]: session-8.scope: Deactivated successfully.
May 14 01:31:55.537620 systemd-logind[1457]: Session 8 logged out. Waiting for processes to exit.
May 14 01:31:55.542699 systemd[1]: Started sshd@6-172.24.4.47:22-172.24.4.1:37894.service - OpenSSH per-connection server daemon (172.24.4.1:37894).
May 14 01:31:55.545117 systemd-logind[1457]: Removed session 8.
May 14 01:31:56.789442 sshd[1690]: Accepted publickey for core from 172.24.4.1 port 37894 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:31:56.792037 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:31:56.804162 systemd-logind[1457]: New session 9 of user core.
May 14 01:31:56.812418 systemd[1]: Started session-9.scope - Session 9 of User core.
May 14 01:31:57.291976 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 14 01:31:57.292661 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 01:31:57.315160 sudo[1694]: pam_unix(sudo:session): session closed for user root
May 14 01:31:57.520851 sshd[1693]: Connection closed by 172.24.4.1 port 37894
May 14 01:31:57.521893 sshd-session[1690]: pam_unix(sshd:session): session closed for user core
May 14 01:31:57.538966 systemd[1]: sshd@6-172.24.4.47:22-172.24.4.1:37894.service: Deactivated successfully.
May 14 01:31:57.542520 systemd[1]: session-9.scope: Deactivated successfully.
May 14 01:31:57.544851 systemd-logind[1457]: Session 9 logged out. Waiting for processes to exit.
May 14 01:31:57.549723 systemd[1]: Started sshd@7-172.24.4.47:22-172.24.4.1:37900.service - OpenSSH per-connection server daemon (172.24.4.1:37900).
May 14 01:31:57.552995 systemd-logind[1457]: Removed session 9.
May 14 01:31:58.962844 sshd[1699]: Accepted publickey for core from 172.24.4.1 port 37900 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:31:58.966003 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:31:58.978906 systemd-logind[1457]: New session 10 of user core.
May 14 01:31:58.989494 systemd[1]: Started session-10.scope - Session 10 of User core.
May 14 01:31:59.523962 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 14 01:31:59.524995 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 01:31:59.533943 sudo[1704]: pam_unix(sudo:session): session closed for user root
May 14 01:31:59.545665 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 14 01:31:59.546334 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 01:31:59.567395 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 01:31:59.633372 augenrules[1726]: No rules
May 14 01:31:59.635468 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 01:31:59.635720 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 01:31:59.637984 sudo[1703]: pam_unix(sudo:session): session closed for user root
May 14 01:31:59.833942 sshd[1702]: Connection closed by 172.24.4.1 port 37900
May 14 01:31:59.834800 sshd-session[1699]: pam_unix(sshd:session): session closed for user core
May 14 01:31:59.853667 systemd[1]: sshd@7-172.24.4.47:22-172.24.4.1:37900.service: Deactivated successfully.
May 14 01:31:59.857467 systemd[1]: session-10.scope: Deactivated successfully.
May 14 01:31:59.859830 systemd-logind[1457]: Session 10 logged out. Waiting for processes to exit.
May 14 01:31:59.864539 systemd[1]: Started sshd@8-172.24.4.47:22-172.24.4.1:37904.service - OpenSSH per-connection server daemon (172.24.4.1:37904).
May 14 01:31:59.867418 systemd-logind[1457]: Removed session 10.
May 14 01:31:59.886262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 14 01:31:59.890661 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 01:32:00.133681 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 01:32:00.148498 (kubelet)[1744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 01:32:00.328010 kubelet[1744]: E0514 01:32:00.327929 1744 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 01:32:00.332933 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 01:32:00.333312 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 01:32:00.333954 systemd[1]: kubelet.service: Consumed 255ms CPU time, 97.8M memory peak.
May 14 01:32:01.136035 sshd[1734]: Accepted publickey for core from 172.24.4.1 port 37904 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:32:01.138952 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:32:01.151610 systemd-logind[1457]: New session 11 of user core.
May 14 01:32:01.162374 systemd[1]: Started session-11.scope - Session 11 of User core.
May 14 01:32:01.561770 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 14 01:32:01.562457 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 01:32:02.425471 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 14 01:32:02.440698 (dockerd)[1772]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 14 01:32:03.075561 dockerd[1772]: time="2025-05-14T01:32:03.075315743Z" level=info msg="Starting up"
May 14 01:32:03.078622 dockerd[1772]: time="2025-05-14T01:32:03.078309772Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 14 01:32:03.190768 dockerd[1772]: time="2025-05-14T01:32:03.190699745Z" level=info msg="Loading containers: start."
May 14 01:32:03.375115 kernel: Initializing XFRM netlink socket
May 14 01:32:03.482243 systemd-networkd[1387]: docker0: Link UP
May 14 01:32:03.565734 dockerd[1772]: time="2025-05-14T01:32:03.565592009Z" level=info msg="Loading containers: done."
May 14 01:32:03.596139 dockerd[1772]: time="2025-05-14T01:32:03.592926511Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 14 01:32:03.596139 dockerd[1772]: time="2025-05-14T01:32:03.593119390Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1
May 14 01:32:03.596139 dockerd[1772]: time="2025-05-14T01:32:03.593308454Z" level=info msg="Daemon has completed initialization"
May 14 01:32:03.597369 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1822925636-merged.mount: Deactivated successfully.
May 14 01:32:03.664249 dockerd[1772]: time="2025-05-14T01:32:03.663695415Z" level=info msg="API listen on /run/docker.sock"
May 14 01:32:03.663835 systemd[1]: Started docker.service - Docker Application Container Engine.
May 14 01:32:05.419713 containerd[1483]: time="2025-05-14T01:32:05.419016283Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
May 14 01:32:06.204759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount425118770.mount: Deactivated successfully.
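Once dockerd logs "API listen on /run/docker.sock" above, the daemon can be probed with a plain HTTP request over that Unix socket; /_ping is Docker's standard health endpoint. A stdlib-only sketch of such a probe:

    import socket

    def docker_ping(sock_path: str = "/run/docker.sock") -> str:
        """Send GET /_ping to the Docker API over its Unix socket and
        return the raw HTTP response (body is 'OK' on a healthy daemon)."""
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
            chunks = []
            while data := s.recv(4096):
                chunks.append(data)
        return b"".join(chunks).decode()

    # print(docker_ping())  # expect "HTTP/1.0 200 OK ... OK"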
May 14 01:32:07.913813 containerd[1483]: time="2025-05-14T01:32:07.913684225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 01:32:07.915961 containerd[1483]: time="2025-05-14T01:32:07.915705262Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960995"
May 14 01:32:07.917500 containerd[1483]: time="2025-05-14T01:32:07.917382863Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 01:32:07.923151 containerd[1483]: time="2025-05-14T01:32:07.922950558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 01:32:07.924282 containerd[1483]: time="2025-05-14T01:32:07.924082849Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 2.504917816s"
May 14 01:32:07.924282 containerd[1483]: time="2025-05-14T01:32:07.924135411Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\""
May 14 01:32:07.928154 containerd[1483]: time="2025-05-14T01:32:07.927943115Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
May 14 01:32:10.237142 containerd[1483]: time="2025-05-14T01:32:10.236113575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 01:32:10.242732 containerd[1483]: time="2025-05-14T01:32:10.238653161Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713784"
May 14 01:32:10.242732 containerd[1483]: time="2025-05-14T01:32:10.242326493Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 01:32:10.246350 containerd[1483]: time="2025-05-14T01:32:10.246254613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 01:32:10.247282 containerd[1483]: time="2025-05-14T01:32:10.247143026Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 2.319167608s"
May 14 01:32:10.247282 containerd[1483]: time="2025-05-14T01:32:10.247176227Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\""
May 14 01:32:10.264266 containerd[1483]: time="2025-05-14T01:32:10.263908981Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 14 01:32:10.372980 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 14 01:32:10.380622 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 01:32:10.821871 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 01:32:10.835979 (kubelet)[2037]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 01:32:10.928619 kubelet[2037]: E0514 01:32:10.928483 2037 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 01:32:10.930163 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 01:32:10.930309 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 01:32:10.930628 systemd[1]: kubelet.service: Consumed 260ms CPU time, 94.2M memory peak.
May 14 01:32:12.475142 containerd[1483]: time="2025-05-14T01:32:12.474955570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 01:32:12.476647 containerd[1483]: time="2025-05-14T01:32:12.476552489Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780394"
May 14 01:32:12.477657 containerd[1483]: time="2025-05-14T01:32:12.477622745Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 01:32:12.481204 containerd[1483]: time="2025-05-14T01:32:12.481132624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 01:32:12.483161 containerd[1483]: time="2025-05-14T01:32:12.483104598Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 2.219087966s"
May 14 01:32:12.483655 containerd[1483]: time="2025-05-14T01:32:12.483333154Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\""
May 14 01:32:12.484256 containerd[1483]: time="2025-05-14T01:32:12.484136251Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 14 01:32:13.818547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3599960189.mount: Deactivated successfully.
May 14 01:32:14.383115 containerd[1483]: time="2025-05-14T01:32:14.383034025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 01:32:14.384110 containerd[1483]: time="2025-05-14T01:32:14.383988638Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354633"
May 14 01:32:14.385181 containerd[1483]: time="2025-05-14T01:32:14.385114933Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 01:32:14.387184 containerd[1483]: time="2025-05-14T01:32:14.387136268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 01:32:14.387756 containerd[1483]: time="2025-05-14T01:32:14.387708288Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 1.903465977s"
May 14 01:32:14.387756 containerd[1483]: time="2025-05-14T01:32:14.387756862Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\""
May 14 01:32:14.389294 containerd[1483]: time="2025-05-14T01:32:14.389260437Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 14 01:32:15.442416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1336323343.mount: Deactivated successfully.
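The "Pulled image ... in Ns" entries above carry enough data to estimate registry throughput; the kube-apiserver image, for instance, is 27957787 bytes pulled in about 2.50 s, roughly 10.6 MiB/s. A throwaway sketch of that calculation over the four control-plane pulls logged so far:

    # (size in bytes, pull duration in seconds), taken from the log above
    pulls = {
        "kube-apiserver:v1.31.8":          (27_957_787, 2.504917816),
        "kube-controller-manager:v1.31.8": (26_202_149, 2.319167608),
        "kube-scheduler:v1.31.8":          (20_268_777, 2.219087966),
        "kube-proxy:v1.31.8":              (30_353_644, 1.903465977),
    }

    for image, (size, secs) in pulls.items():
        mib_per_s = size / secs / 2**20
        print(f"{image:35s} {mib_per_s:5.1f} MiB/s")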
May 14 01:32:17.140092 containerd[1483]: time="2025-05-14T01:32:17.139896719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 01:32:17.142832 containerd[1483]: time="2025-05-14T01:32:17.142696352Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
May 14 01:32:17.144125 containerd[1483]: time="2025-05-14T01:32:17.143968383Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 01:32:17.152364 containerd[1483]: time="2025-05-14T01:32:17.152209993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 01:32:17.154676 containerd[1483]: time="2025-05-14T01:32:17.154357726Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.765043984s"
May 14 01:32:17.154676 containerd[1483]: time="2025-05-14T01:32:17.154461211Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 14 01:32:17.165331 containerd[1483]: time="2025-05-14T01:32:17.165226001Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 14 01:32:17.715366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount716965555.mount: Deactivated successfully.
May 14 01:32:17.723122 containerd[1483]: time="2025-05-14T01:32:17.722943019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 01:32:17.725562 containerd[1483]: time="2025-05-14T01:32:17.725365389Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
May 14 01:32:17.727440 containerd[1483]: time="2025-05-14T01:32:17.727337389Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 01:32:17.732187 containerd[1483]: time="2025-05-14T01:32:17.731997455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 01:32:17.734133 containerd[1483]: time="2025-05-14T01:32:17.733849177Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 568.537094ms"
May 14 01:32:17.734133 containerd[1483]: time="2025-05-14T01:32:17.733917496Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 14 01:32:17.735499 containerd[1483]: time="2025-05-14T01:32:17.735013493Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 14 01:32:18.287799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3273970494.mount: Deactivated successfully.
May 14 01:32:18.499219 update_engine[1458]: I20250514 01:32:18.498528 1458 update_attempter.cc:509] Updating boot flags...
May 14 01:32:18.609140 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2119)
May 14 01:32:18.721110 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2123)
May 14 01:32:18.814148 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2123)
May 14 01:32:21.115222 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 14 01:32:21.119264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 01:32:21.262344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 01:32:21.270382 (kubelet)[2183]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 01:32:21.323091 kubelet[2183]: E0514 01:32:21.322695 2183 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 01:32:21.325139 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 01:32:21.325292 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 01:32:21.325711 systemd[1]: kubelet.service: Consumed 149ms CPU time, 97.4M memory peak.
May 14 01:32:21.805239 containerd[1483]: time="2025-05-14T01:32:21.805024601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 01:32:21.813207 containerd[1483]: time="2025-05-14T01:32:21.813017977Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021"
May 14 01:32:21.817409 containerd[1483]: time="2025-05-14T01:32:21.817305040Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 01:32:21.832271 containerd[1483]: time="2025-05-14T01:32:21.829621606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 01:32:21.836891 containerd[1483]: time="2025-05-14T01:32:21.836826410Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.101746076s"
May 14 01:32:21.837849 containerd[1483]: time="2025-05-14T01:32:21.837803848Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
May 14 01:32:25.408676 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 01:32:25.409163 systemd[1]: kubelet.service: Consumed 149ms CPU time, 97.4M memory peak.
May 14 01:32:25.414168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 01:32:25.460701 systemd[1]: Reload requested from client PID 2215 ('systemctl') (unit session-11.scope)...
May 14 01:32:25.460717 systemd[1]: Reloading...
May 14 01:32:25.580102 zram_generator::config[2264]: No configuration found.
May 14 01:32:25.889025 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 01:32:26.010868 systemd[1]: Reloading finished in 549 ms.
May 14 01:32:26.061115 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 14 01:32:26.061345 systemd[1]: kubelet.service: Failed with result 'signal'.
May 14 01:32:26.061575 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 01:32:26.061616 systemd[1]: kubelet.service: Consumed 101ms CPU time, 83.6M memory peak.
May 14 01:32:26.063482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 01:32:26.185731 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 01:32:26.196537 (kubelet)[2327]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 14 01:32:26.258995 kubelet[2327]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 01:32:26.258995 kubelet[2327]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 14 01:32:26.258995 kubelet[2327]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 01:32:26.259440 kubelet[2327]: I0514 01:32:26.259166 2327 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 14 01:32:27.082322 kubelet[2327]: I0514 01:32:27.082238 2327 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 14 01:32:27.082322 kubelet[2327]: I0514 01:32:27.082299 2327 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 14 01:32:27.082873 kubelet[2327]: I0514 01:32:27.082830 2327 server.go:929] "Client rotation is on, will bootstrap in background"
May 14 01:32:27.119004 kubelet[2327]: E0514 01:32:27.118957 2327 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.47:6443: connect: connection refused" logger="UnhandledError"
May 14 01:32:27.125104 kubelet[2327]: I0514 01:32:27.124409 2327 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 14 01:32:27.143521 kubelet[2327]: I0514 01:32:27.143492 2327 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 14 01:32:27.152683 kubelet[2327]: I0514 01:32:27.152646 2327 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 14 01:32:27.153671 kubelet[2327]: I0514 01:32:27.152932 2327 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 14 01:32:27.153671 kubelet[2327]: I0514 01:32:27.153131 2327 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 14 01:32:27.153671 kubelet[2327]: I0514 01:32:27.153164 2327 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284-0-0-n-af44d751a9.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 14 01:32:27.153671 kubelet[2327]: I0514 01:32:27.153374 2327 topology_manager.go:138] "Creating topology manager with none policy"
May 14 01:32:27.153891 kubelet[2327]: I0514 01:32:27.153385 2327 container_manager_linux.go:300] "Creating device plugin manager"
May 14 01:32:27.153891 kubelet[2327]: I0514 01:32:27.153497 2327 state_mem.go:36] "Initialized new in-memory state store"
May 14 01:32:27.157129 kubelet[2327]: I0514 01:32:27.156938 2327 kubelet.go:408] "Attempting to sync node with API server"
May 14 01:32:27.157129 kubelet[2327]: I0514 01:32:27.156966 2327 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 14 01:32:27.157129 kubelet[2327]: I0514 01:32:27.157001 2327 kubelet.go:314] "Adding apiserver pod source"
May 14 01:32:27.157129 kubelet[2327]: I0514 01:32:27.157020 2327 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 14 01:32:27.177136 kubelet[2327]: W0514 01:32:27.176882 2327 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-af44d751a9.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.47:6443: connect: connection refused
May 14 01:32:27.177136 kubelet[2327]: E0514 01:32:27.177013 2327 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-af44d751a9.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.47:6443: connect: connection refused" logger="UnhandledError"
May 14 01:32:27.177313 kubelet[2327]: I0514 01:32:27.177224 2327 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 14 01:32:27.181674 kubelet[2327]: I0514 01:32:27.181411 2327 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 14 01:32:27.182029 kubelet[2327]: W0514 01:32:27.181724 2327 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.47:6443: connect: connection refused
May 14 01:32:27.182029 kubelet[2327]: E0514 01:32:27.181782 2327 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.47:6443: connect: connection refused" logger="UnhandledError"
May 14 01:32:27.183115 kubelet[2327]: W0514 01:32:27.182948 2327 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 14 01:32:27.184755 kubelet[2327]: I0514 01:32:27.184411 2327 server.go:1269] "Started kubelet"
May 14 01:32:27.187734 kubelet[2327]: I0514 01:32:27.187688 2327 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 14 01:32:27.194092 kubelet[2327]: E0514 01:32:27.189555 2327 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.47:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.47:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284-0-0-n-af44d751a9.novalocal.183f40b9cc55ef50 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284-0-0-n-af44d751a9.novalocal,UID:ci-4284-0-0-n-af44d751a9.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284-0-0-n-af44d751a9.novalocal,},FirstTimestamp:2025-05-14 01:32:27.184353104 +0000 UTC m=+0.983384854,LastTimestamp:2025-05-14 01:32:27.184353104 +0000 UTC m=+0.983384854,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284-0-0-n-af44d751a9.novalocal,}"
May 14 01:32:27.195233 kubelet[2327]: I0514 01:32:27.195197 2327 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 14 01:32:27.196437 kubelet[2327]: I0514 01:32:27.196420 2327 server.go:460] "Adding debug handlers to kubelet server"
May 14 01:32:27.197986 kubelet[2327]: I0514 01:32:27.197934 2327 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 14 01:32:27.198253 kubelet[2327]: I0514 01:32:27.198215 2327 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 14 01:32:27.198411 kubelet[2327]: I0514 01:32:27.198397 2327 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 14 01:32:27.198672 kubelet[2327]: E0514 01:32:27.198629 2327 kubelet_node_status.go:453] "Error getting the current 
node from lister" err="node \"ci-4284-0-0-n-af44d751a9.novalocal\" not found" May 14 01:32:27.198775 kubelet[2327]: I0514 01:32:27.198761 2327 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 01:32:27.199271 kubelet[2327]: I0514 01:32:27.199235 2327 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 01:32:27.199376 kubelet[2327]: I0514 01:32:27.199351 2327 reconciler.go:26] "Reconciler: start to sync state" May 14 01:32:27.201453 kubelet[2327]: I0514 01:32:27.201408 2327 factory.go:221] Registration of the systemd container factory successfully May 14 01:32:27.201632 kubelet[2327]: I0514 01:32:27.201590 2327 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 01:32:27.203109 kubelet[2327]: W0514 01:32:27.202248 2327 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.47:6443: connect: connection refused May 14 01:32:27.203109 kubelet[2327]: E0514 01:32:27.202361 2327 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.47:6443: connect: connection refused" logger="UnhandledError" May 14 01:32:27.203109 kubelet[2327]: E0514 01:32:27.202515 2327 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-af44d751a9.novalocal?timeout=10s\": dial tcp 172.24.4.47:6443: connect: connection refused" interval="200ms" May 14 01:32:27.204806 kubelet[2327]: I0514 01:32:27.204767 2327 factory.go:221] Registration of the containerd container factory successfully May 14 01:32:27.211122 kubelet[2327]: E0514 01:32:27.211091 2327 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 01:32:27.231640 kubelet[2327]: I0514 01:32:27.231616 2327 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 01:32:27.231640 kubelet[2327]: I0514 01:32:27.231635 2327 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 01:32:27.231778 kubelet[2327]: I0514 01:32:27.231653 2327 state_mem.go:36] "Initialized new in-memory state store" May 14 01:32:27.247097 kubelet[2327]: I0514 01:32:27.246016 2327 policy_none.go:49] "None policy: Start" May 14 01:32:27.248124 kubelet[2327]: I0514 01:32:27.247803 2327 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 01:32:27.248124 kubelet[2327]: I0514 01:32:27.247829 2327 state_mem.go:35] "Initializing new in-memory state store" May 14 01:32:27.248124 kubelet[2327]: I0514 01:32:27.248022 2327 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 01:32:27.250050 kubelet[2327]: I0514 01:32:27.250026 2327 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 01:32:27.250399 kubelet[2327]: I0514 01:32:27.250134 2327 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 01:32:27.250399 kubelet[2327]: I0514 01:32:27.250158 2327 kubelet.go:2321] "Starting kubelet main sync loop" May 14 01:32:27.250399 kubelet[2327]: E0514 01:32:27.250201 2327 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 01:32:27.256751 kubelet[2327]: W0514 01:32:27.256722 2327 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.47:6443: connect: connection refused May 14 01:32:27.256909 kubelet[2327]: E0514 01:32:27.256888 2327 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.47:6443: connect: connection refused" logger="UnhandledError" May 14 01:32:27.261893 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 01:32:27.273832 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 01:32:27.278301 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 01:32:27.289876 kubelet[2327]: I0514 01:32:27.289853 2327 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 01:32:27.291508 kubelet[2327]: I0514 01:32:27.290747 2327 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 01:32:27.291508 kubelet[2327]: I0514 01:32:27.290773 2327 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 01:32:27.291508 kubelet[2327]: I0514 01:32:27.291320 2327 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 01:32:27.293511 kubelet[2327]: E0514 01:32:27.293483 2327 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284-0-0-n-af44d751a9.novalocal\" not found" May 14 01:32:27.378391 systemd[1]: Created slice kubepods-burstable-pod3a7cff6640286c79e470cf3c7b9f1e03.slice - libcontainer container kubepods-burstable-pod3a7cff6640286c79e470cf3c7b9f1e03.slice. May 14 01:32:27.395448 kubelet[2327]: I0514 01:32:27.394338 2327 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:27.395448 kubelet[2327]: E0514 01:32:27.395314 2327 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.47:6443/api/v1/nodes\": dial tcp 172.24.4.47:6443: connect: connection refused" node="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:27.400522 systemd[1]: Created slice kubepods-burstable-pod6e9105c0b545fb48e9547921803f738d.slice - libcontainer container kubepods-burstable-pod6e9105c0b545fb48e9547921803f738d.slice. 
May 14 01:32:27.404661 kubelet[2327]: E0514 01:32:27.403901 2327 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-af44d751a9.novalocal?timeout=10s\": dial tcp 172.24.4.47:6443: connect: connection refused" interval="400ms" May 14 01:32:27.410423 systemd[1]: Created slice kubepods-burstable-pod9f89cd22c8ef289dfc3ff5e34723df1c.slice - libcontainer container kubepods-burstable-pod9f89cd22c8ef289dfc3ff5e34723df1c.slice. May 14 01:32:27.501328 kubelet[2327]: I0514 01:32:27.501260 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3a7cff6640286c79e470cf3c7b9f1e03-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-n-af44d751a9.novalocal\" (UID: \"3a7cff6640286c79e470cf3c7b9f1e03\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:27.501328 kubelet[2327]: I0514 01:32:27.501341 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f89cd22c8ef289dfc3ff5e34723df1c-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal\" (UID: \"9f89cd22c8ef289dfc3ff5e34723df1c\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:27.501901 kubelet[2327]: I0514 01:32:27.501417 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f89cd22c8ef289dfc3ff5e34723df1c-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal\" (UID: \"9f89cd22c8ef289dfc3ff5e34723df1c\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:27.501901 kubelet[2327]: I0514 01:32:27.501491 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e9105c0b545fb48e9547921803f738d-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-n-af44d751a9.novalocal\" (UID: \"6e9105c0b545fb48e9547921803f738d\") " pod="kube-system/kube-scheduler-ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:27.501901 kubelet[2327]: I0514 01:32:27.501557 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3a7cff6640286c79e470cf3c7b9f1e03-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-n-af44d751a9.novalocal\" (UID: \"3a7cff6640286c79e470cf3c7b9f1e03\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:27.501901 kubelet[2327]: I0514 01:32:27.501649 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3a7cff6640286c79e470cf3c7b9f1e03-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-n-af44d751a9.novalocal\" (UID: \"3a7cff6640286c79e470cf3c7b9f1e03\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:27.502220 kubelet[2327]: I0514 01:32:27.501709 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9f89cd22c8ef289dfc3ff5e34723df1c-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal\" (UID: 
\"9f89cd22c8ef289dfc3ff5e34723df1c\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:27.502220 kubelet[2327]: I0514 01:32:27.501750 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f89cd22c8ef289dfc3ff5e34723df1c-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal\" (UID: \"9f89cd22c8ef289dfc3ff5e34723df1c\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:27.502220 kubelet[2327]: I0514 01:32:27.501791 2327 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f89cd22c8ef289dfc3ff5e34723df1c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal\" (UID: \"9f89cd22c8ef289dfc3ff5e34723df1c\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:27.599041 kubelet[2327]: I0514 01:32:27.598973 2327 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:27.599906 kubelet[2327]: E0514 01:32:27.599842 2327 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.47:6443/api/v1/nodes\": dial tcp 172.24.4.47:6443: connect: connection refused" node="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:27.699152 containerd[1483]: time="2025-05-14T01:32:27.698902832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-n-af44d751a9.novalocal,Uid:3a7cff6640286c79e470cf3c7b9f1e03,Namespace:kube-system,Attempt:0,}" May 14 01:32:27.707055 containerd[1483]: time="2025-05-14T01:32:27.706716376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-n-af44d751a9.novalocal,Uid:6e9105c0b545fb48e9547921803f738d,Namespace:kube-system,Attempt:0,}" May 14 01:32:27.717724 containerd[1483]: time="2025-05-14T01:32:27.717656708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal,Uid:9f89cd22c8ef289dfc3ff5e34723df1c,Namespace:kube-system,Attempt:0,}" May 14 01:32:27.795931 containerd[1483]: time="2025-05-14T01:32:27.795737902Z" level=info msg="connecting to shim 5aac7fc27b4233bbc8bc9d050bedda6d5a64e2d3f55494d3092b05b3daeaae41" address="unix:///run/containerd/s/b0c6873dac99ea55605a13480a8f5323d7dc5e3b1b5da73afd6bff13b630e153" namespace=k8s.io protocol=ttrpc version=3 May 14 01:32:27.806016 containerd[1483]: time="2025-05-14T01:32:27.805514753Z" level=info msg="connecting to shim dc865d2ce7f6471fb633eb17d453374682224801ac0f494911d1aa0ff4578c35" address="unix:///run/containerd/s/6806c38ce1d6be6b9b81e51d36866026b993d5504e8fd6cab7c651979b361489" namespace=k8s.io protocol=ttrpc version=3 May 14 01:32:27.806297 kubelet[2327]: E0514 01:32:27.805963 2327 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-af44d751a9.novalocal?timeout=10s\": dial tcp 172.24.4.47:6443: connect: connection refused" interval="800ms" May 14 01:32:27.843310 systemd[1]: Started cri-containerd-5aac7fc27b4233bbc8bc9d050bedda6d5a64e2d3f55494d3092b05b3daeaae41.scope - libcontainer container 5aac7fc27b4233bbc8bc9d050bedda6d5a64e2d3f55494d3092b05b3daeaae41. 
May 14 01:32:27.848942 systemd[1]: Started cri-containerd-dc865d2ce7f6471fb633eb17d453374682224801ac0f494911d1aa0ff4578c35.scope - libcontainer container dc865d2ce7f6471fb633eb17d453374682224801ac0f494911d1aa0ff4578c35. May 14 01:32:28.003664 kubelet[2327]: I0514 01:32:28.003513 2327 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:28.004419 kubelet[2327]: E0514 01:32:28.004335 2327 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.47:6443/api/v1/nodes\": dial tcp 172.24.4.47:6443: connect: connection refused" node="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:28.297141 kubelet[2327]: W0514 01:32:28.296721 2327 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.47:6443: connect: connection refused May 14 01:32:28.297141 kubelet[2327]: E0514 01:32:28.296819 2327 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.47:6443: connect: connection refused" logger="UnhandledError" May 14 01:32:28.307428 kubelet[2327]: W0514 01:32:28.307314 2327 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-af44d751a9.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.47:6443: connect: connection refused May 14 01:32:28.307563 kubelet[2327]: E0514 01:32:28.307446 2327 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-af44d751a9.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.47:6443: connect: connection refused" logger="UnhandledError" May 14 01:32:28.512685 kubelet[2327]: W0514 01:32:28.512509 2327 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.47:6443: connect: connection refused May 14 01:32:28.512685 kubelet[2327]: E0514 01:32:28.512629 2327 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.47:6443: connect: connection refused" logger="UnhandledError" May 14 01:32:28.607980 kubelet[2327]: E0514 01:32:28.607195 2327 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-af44d751a9.novalocal?timeout=10s\": dial tcp 172.24.4.47:6443: connect: connection refused" interval="1.6s" May 14 01:32:28.777988 kubelet[2327]: W0514 01:32:28.777798 2327 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.47:6443: connect: connection refused May 14 01:32:28.777988 kubelet[2327]: 
E0514 01:32:28.777934 2327 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.47:6443: connect: connection refused" logger="UnhandledError" May 14 01:32:28.807658 kubelet[2327]: I0514 01:32:28.807578 2327 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:28.808253 kubelet[2327]: E0514 01:32:28.808177 2327 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.47:6443/api/v1/nodes\": dial tcp 172.24.4.47:6443: connect: connection refused" node="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:28.962480 containerd[1483]: time="2025-05-14T01:32:28.962296953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-n-af44d751a9.novalocal,Uid:6e9105c0b545fb48e9547921803f738d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5aac7fc27b4233bbc8bc9d050bedda6d5a64e2d3f55494d3092b05b3daeaae41\"" May 14 01:32:28.970821 containerd[1483]: time="2025-05-14T01:32:28.970606001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-n-af44d751a9.novalocal,Uid:3a7cff6640286c79e470cf3c7b9f1e03,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc865d2ce7f6471fb633eb17d453374682224801ac0f494911d1aa0ff4578c35\"" May 14 01:32:28.973911 containerd[1483]: time="2025-05-14T01:32:28.973777720Z" level=info msg="CreateContainer within sandbox \"5aac7fc27b4233bbc8bc9d050bedda6d5a64e2d3f55494d3092b05b3daeaae41\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 01:32:28.981407 containerd[1483]: time="2025-05-14T01:32:28.981319908Z" level=info msg="CreateContainer within sandbox \"dc865d2ce7f6471fb633eb17d453374682224801ac0f494911d1aa0ff4578c35\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 01:32:29.010904 containerd[1483]: time="2025-05-14T01:32:29.010157494Z" level=info msg="connecting to shim 896ce54f4bcc319aca84ad68557d4854a008fb3687e17f50b93c962942b8184a" address="unix:///run/containerd/s/77c698591a18acae80a55611985043d0c0b0eea15560daae2acc007baf082a0b" namespace=k8s.io protocol=ttrpc version=3 May 14 01:32:29.036147 containerd[1483]: time="2025-05-14T01:32:29.035961376Z" level=info msg="Container 67eb9e487799a7f1966b9cacbbe8cb01dc31d6656bdb91e75cb38a575a6e0a92: CDI devices from CRI Config.CDIDevices: []" May 14 01:32:29.049086 containerd[1483]: time="2025-05-14T01:32:29.048150414Z" level=info msg="Container 747778ea398e7698bc452f37d241e8c048ca60045320ef3ba47cb64966fdbd66: CDI devices from CRI Config.CDIDevices: []" May 14 01:32:29.066254 systemd[1]: Started cri-containerd-896ce54f4bcc319aca84ad68557d4854a008fb3687e17f50b93c962942b8184a.scope - libcontainer container 896ce54f4bcc319aca84ad68557d4854a008fb3687e17f50b93c962942b8184a. 
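The "connecting to shim … protocol=ttrpc version=3" lines show containerd 2.x starting a shim per pod sandbox and reaching it over a unix socket under /run/containerd/s/, while each "Started cri-containerd-<id>.scope" line is systemd creating the matching cgroup scope, since the cgroup driver is systemd. On a node in this state the same picture can be read back interactively (assuming crictl is configured to use the containerd socket):

    crictl pods     # the control-plane sandboxes created above
    crictl ps -a    # containers, with ids matching the log
    systemd-cgls    # kubepods.slice/kubepods-burstable-pod<uid>.slice/cri-containerd-<id>.scope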
May 14 01:32:29.071703 containerd[1483]: time="2025-05-14T01:32:29.071639911Z" level=info msg="CreateContainer within sandbox \"dc865d2ce7f6471fb633eb17d453374682224801ac0f494911d1aa0ff4578c35\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"747778ea398e7698bc452f37d241e8c048ca60045320ef3ba47cb64966fdbd66\"" May 14 01:32:29.073854 containerd[1483]: time="2025-05-14T01:32:29.073778868Z" level=info msg="CreateContainer within sandbox \"5aac7fc27b4233bbc8bc9d050bedda6d5a64e2d3f55494d3092b05b3daeaae41\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"67eb9e487799a7f1966b9cacbbe8cb01dc31d6656bdb91e75cb38a575a6e0a92\"" May 14 01:32:29.074668 containerd[1483]: time="2025-05-14T01:32:29.074640856Z" level=info msg="StartContainer for \"67eb9e487799a7f1966b9cacbbe8cb01dc31d6656bdb91e75cb38a575a6e0a92\"" May 14 01:32:29.074976 containerd[1483]: time="2025-05-14T01:32:29.074937844Z" level=info msg="StartContainer for \"747778ea398e7698bc452f37d241e8c048ca60045320ef3ba47cb64966fdbd66\"" May 14 01:32:29.076316 containerd[1483]: time="2025-05-14T01:32:29.076278803Z" level=info msg="connecting to shim 747778ea398e7698bc452f37d241e8c048ca60045320ef3ba47cb64966fdbd66" address="unix:///run/containerd/s/6806c38ce1d6be6b9b81e51d36866026b993d5504e8fd6cab7c651979b361489" protocol=ttrpc version=3 May 14 01:32:29.077014 containerd[1483]: time="2025-05-14T01:32:29.076954839Z" level=info msg="connecting to shim 67eb9e487799a7f1966b9cacbbe8cb01dc31d6656bdb91e75cb38a575a6e0a92" address="unix:///run/containerd/s/b0c6873dac99ea55605a13480a8f5323d7dc5e3b1b5da73afd6bff13b630e153" protocol=ttrpc version=3 May 14 01:32:29.104272 systemd[1]: Started cri-containerd-67eb9e487799a7f1966b9cacbbe8cb01dc31d6656bdb91e75cb38a575a6e0a92.scope - libcontainer container 67eb9e487799a7f1966b9cacbbe8cb01dc31d6656bdb91e75cb38a575a6e0a92. May 14 01:32:29.113324 systemd[1]: Started cri-containerd-747778ea398e7698bc452f37d241e8c048ca60045320ef3ba47cb64966fdbd66.scope - libcontainer container 747778ea398e7698bc452f37d241e8c048ca60045320ef3ba47cb64966fdbd66. 
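Note the strict CRI call order playing out per pod: RunPodSandbox returns a sandbox id, CreateContainer builds each container inside it (reusing the sandbox's shim socket, visible in the repeated address= values), and only then does StartContainer run it. The same three steps, expressed as their crictl equivalents (the JSON config file names here are placeholders):

    POD=$(crictl runp pod-config.json)                                  # RunPodSandbox
    CTR=$(crictl create "$POD" container-config.json pod-config.json)   # CreateContainer
    crictl start "$CTR"                                                 # StartContainer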
May 14 01:32:29.168942 containerd[1483]: time="2025-05-14T01:32:29.168668741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal,Uid:9f89cd22c8ef289dfc3ff5e34723df1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"896ce54f4bcc319aca84ad68557d4854a008fb3687e17f50b93c962942b8184a\"" May 14 01:32:29.173854 containerd[1483]: time="2025-05-14T01:32:29.173806866Z" level=info msg="CreateContainer within sandbox \"896ce54f4bcc319aca84ad68557d4854a008fb3687e17f50b93c962942b8184a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 01:32:29.192816 containerd[1483]: time="2025-05-14T01:32:29.192749257Z" level=info msg="Container d5389e43cb38e5935d657263bc05101b3723aeca87ec916032b4817031acfb92: CDI devices from CRI Config.CDIDevices: []" May 14 01:32:29.196028 kubelet[2327]: E0514 01:32:29.195891 2327 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.47:6443: connect: connection refused" logger="UnhandledError" May 14 01:32:29.211419 containerd[1483]: time="2025-05-14T01:32:29.211205520Z" level=info msg="StartContainer for \"67eb9e487799a7f1966b9cacbbe8cb01dc31d6656bdb91e75cb38a575a6e0a92\" returns successfully" May 14 01:32:29.212139 containerd[1483]: time="2025-05-14T01:32:29.211547627Z" level=info msg="StartContainer for \"747778ea398e7698bc452f37d241e8c048ca60045320ef3ba47cb64966fdbd66\" returns successfully" May 14 01:32:29.225416 containerd[1483]: time="2025-05-14T01:32:29.224035356Z" level=info msg="CreateContainer within sandbox \"896ce54f4bcc319aca84ad68557d4854a008fb3687e17f50b93c962942b8184a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d5389e43cb38e5935d657263bc05101b3723aeca87ec916032b4817031acfb92\"" May 14 01:32:29.226694 containerd[1483]: time="2025-05-14T01:32:29.226631715Z" level=info msg="StartContainer for \"d5389e43cb38e5935d657263bc05101b3723aeca87ec916032b4817031acfb92\"" May 14 01:32:29.228648 containerd[1483]: time="2025-05-14T01:32:29.228588558Z" level=info msg="connecting to shim d5389e43cb38e5935d657263bc05101b3723aeca87ec916032b4817031acfb92" address="unix:///run/containerd/s/77c698591a18acae80a55611985043d0c0b0eea15560daae2acc007baf082a0b" protocol=ttrpc version=3 May 14 01:32:29.254288 systemd[1]: Started cri-containerd-d5389e43cb38e5935d657263bc05101b3723aeca87ec916032b4817031acfb92.scope - libcontainer container d5389e43cb38e5935d657263bc05101b3723aeca87ec916032b4817031acfb92. 
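The recurring certificate_manager error is the kubelet's TLS bootstrap loop: with client rotation on, it keeps POSTing a CertificateSigningRequest to the certificates.k8s.io endpoint and simply succeeds on a later retry once the kube-apiserver container started here begins serving. After that point the request is visible from any working kubeconfig, for example:

    kubectl get csr
    # Illustrative output; the name and bootstrap-token requestor are placeholders:
    # NAME        SIGNERNAME                                    REQUESTOR                  CONDITION
    # csr-xxxxx   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:<token>   Approved,Issued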
May 14 01:32:29.357809 containerd[1483]: time="2025-05-14T01:32:29.357318696Z" level=info msg="StartContainer for \"d5389e43cb38e5935d657263bc05101b3723aeca87ec916032b4817031acfb92\" returns successfully" May 14 01:32:30.412798 kubelet[2327]: I0514 01:32:30.412250 2327 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:31.637179 kubelet[2327]: I0514 01:32:31.634590 2327 kubelet_node_status.go:75] "Successfully registered node" node="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:32.184121 kubelet[2327]: I0514 01:32:32.183977 2327 apiserver.go:52] "Watching apiserver" May 14 01:32:32.199944 kubelet[2327]: I0514 01:32:32.199853 2327 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 01:32:32.641268 kubelet[2327]: W0514 01:32:32.641216 2327 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 01:32:33.995974 systemd[1]: Reload requested from client PID 2593 ('systemctl') (unit session-11.scope)... May 14 01:32:33.996011 systemd[1]: Reloading... May 14 01:32:34.136257 zram_generator::config[2642]: No configuration found. May 14 01:32:34.283463 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 01:32:34.430332 systemd[1]: Reloading finished in 433 ms. May 14 01:32:34.464512 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 01:32:34.472548 systemd[1]: kubelet.service: Deactivated successfully. May 14 01:32:34.472895 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:32:34.472945 systemd[1]: kubelet.service: Consumed 1.512s CPU time, 118.2M memory peak. May 14 01:32:34.475225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 01:32:34.678296 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:32:34.695423 (kubelet)[2703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 01:32:34.771170 kubelet[2703]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 01:32:34.771170 kubelet[2703]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 01:32:34.771170 kubelet[2703]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
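Registration finally succeeding at 01:32:31, followed by a clean stop and a fresh start as PID 2703, is the post-bootstrap restart, and the three flag-deprecation warnings reappear because those flags are still passed on the command line (via the systemd drop-in that also references KUBELET_EXTRA_ARGS) rather than through the config file. Per the kubelet-config-file documentation the warnings link to, the first and third flags map onto KubeletConfiguration fields; the socket path below is the stock containerd endpoint and is an assumption, while the volume plugin dir is the one probe.go logged above. --pod-infra-container-image has no config-file counterpart: as its warning says, the sandbox image is taken from the CRI runtime's own configuration instead.

    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/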
May 14 01:32:34.772276 kubelet[2703]: I0514 01:32:34.771262 2703 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 01:32:34.779171 kubelet[2703]: I0514 01:32:34.779109 2703 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 01:32:34.779171 kubelet[2703]: I0514 01:32:34.779149 2703 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 01:32:34.779451 kubelet[2703]: I0514 01:32:34.779428 2703 server.go:929] "Client rotation is on, will bootstrap in background" May 14 01:32:34.782293 kubelet[2703]: I0514 01:32:34.782018 2703 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 01:32:34.784926 kubelet[2703]: I0514 01:32:34.784766 2703 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 01:32:34.790101 kubelet[2703]: I0514 01:32:34.789931 2703 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 01:32:34.795077 kubelet[2703]: I0514 01:32:34.793924 2703 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 01:32:34.795077 kubelet[2703]: I0514 01:32:34.794042 2703 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 01:32:34.795077 kubelet[2703]: I0514 01:32:34.794160 2703 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 01:32:34.795239 kubelet[2703]: I0514 01:32:34.794194 2703 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284-0-0-n-af44d751a9.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 01:32:34.795239 kubelet[2703]: I0514 01:32:34.794489 2703 topology_manager.go:138] "Creating topology manager with none policy" May 14 01:32:34.795239 kubelet[2703]: I0514 01:32:34.794507 2703 container_manager_linux.go:300] "Creating 
device plugin manager" May 14 01:32:34.795239 kubelet[2703]: I0514 01:32:34.794540 2703 state_mem.go:36] "Initialized new in-memory state store" May 14 01:32:34.795239 kubelet[2703]: I0514 01:32:34.794631 2703 kubelet.go:408] "Attempting to sync node with API server" May 14 01:32:34.795239 kubelet[2703]: I0514 01:32:34.794651 2703 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 01:32:34.795239 kubelet[2703]: I0514 01:32:34.794679 2703 kubelet.go:314] "Adding apiserver pod source" May 14 01:32:34.795239 kubelet[2703]: I0514 01:32:34.794694 2703 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 01:32:34.805027 kubelet[2703]: I0514 01:32:34.804979 2703 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 14 01:32:34.805537 kubelet[2703]: I0514 01:32:34.805516 2703 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 01:32:34.806050 kubelet[2703]: I0514 01:32:34.806032 2703 server.go:1269] "Started kubelet" May 14 01:32:34.812474 kubelet[2703]: I0514 01:32:34.812448 2703 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 01:32:34.821560 kubelet[2703]: I0514 01:32:34.821503 2703 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 01:32:34.822714 kubelet[2703]: I0514 01:32:34.822698 2703 server.go:460] "Adding debug handlers to kubelet server" May 14 01:32:34.823719 kubelet[2703]: I0514 01:32:34.821573 2703 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 01:32:34.823851 kubelet[2703]: I0514 01:32:34.823802 2703 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 01:32:34.824115 kubelet[2703]: I0514 01:32:34.824100 2703 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 01:32:34.824389 kubelet[2703]: I0514 01:32:34.824374 2703 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 01:32:34.824561 kubelet[2703]: I0514 01:32:34.821554 2703 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 01:32:34.824995 kubelet[2703]: I0514 01:32:34.824982 2703 reconciler.go:26] "Reconciler: start to sync state" May 14 01:32:34.825266 kubelet[2703]: I0514 01:32:34.825251 2703 factory.go:221] Registration of the systemd container factory successfully May 14 01:32:34.825420 kubelet[2703]: I0514 01:32:34.825402 2703 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 01:32:34.826817 kubelet[2703]: I0514 01:32:34.826738 2703 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 01:32:34.827995 kubelet[2703]: E0514 01:32:34.827964 2703 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 01:32:34.828876 kubelet[2703]: I0514 01:32:34.828859 2703 factory.go:221] Registration of the containerd container factory successfully May 14 01:32:34.828957 kubelet[2703]: I0514 01:32:34.828931 2703 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 01:32:34.828997 kubelet[2703]: I0514 01:32:34.828964 2703 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 01:32:34.828997 kubelet[2703]: I0514 01:32:34.828988 2703 kubelet.go:2321] "Starting kubelet main sync loop" May 14 01:32:34.829054 kubelet[2703]: E0514 01:32:34.829029 2703 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 01:32:34.903987 kubelet[2703]: I0514 01:32:34.903927 2703 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 01:32:34.904229 kubelet[2703]: I0514 01:32:34.904215 2703 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 01:32:34.904299 kubelet[2703]: I0514 01:32:34.904290 2703 state_mem.go:36] "Initialized new in-memory state store" May 14 01:32:34.904516 kubelet[2703]: I0514 01:32:34.904501 2703 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 01:32:34.904640 kubelet[2703]: I0514 01:32:34.904572 2703 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 01:32:34.904640 kubelet[2703]: I0514 01:32:34.904600 2703 policy_none.go:49] "None policy: Start" May 14 01:32:34.906143 kubelet[2703]: I0514 01:32:34.905446 2703 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 01:32:34.906143 kubelet[2703]: I0514 01:32:34.905466 2703 state_mem.go:35] "Initializing new in-memory state store" May 14 01:32:34.906143 kubelet[2703]: I0514 01:32:34.905653 2703 state_mem.go:75] "Updated machine memory state" May 14 01:32:34.913710 kubelet[2703]: I0514 01:32:34.913690 2703 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 01:32:34.914418 kubelet[2703]: I0514 01:32:34.914408 2703 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 01:32:34.914683 kubelet[2703]: I0514 01:32:34.914646 2703 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 01:32:34.915008 kubelet[2703]: I0514 01:32:34.914994 2703 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 01:32:34.947703 kubelet[2703]: W0514 01:32:34.947375 2703 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 01:32:34.949908 kubelet[2703]: W0514 01:32:34.949658 2703 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 01:32:34.957054 kubelet[2703]: W0514 01:32:34.956873 2703 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 01:32:34.957054 kubelet[2703]: E0514 01:32:34.956965 2703 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:35.025556 kubelet[2703]: I0514 01:32:35.025518 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e9105c0b545fb48e9547921803f738d-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-n-af44d751a9.novalocal\" (UID: \"6e9105c0b545fb48e9547921803f738d\") " 
pod="kube-system/kube-scheduler-ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:35.025556 kubelet[2703]: I0514 01:32:35.025559 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f89cd22c8ef289dfc3ff5e34723df1c-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal\" (UID: \"9f89cd22c8ef289dfc3ff5e34723df1c\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:35.025790 kubelet[2703]: I0514 01:32:35.025584 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f89cd22c8ef289dfc3ff5e34723df1c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal\" (UID: \"9f89cd22c8ef289dfc3ff5e34723df1c\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:35.025790 kubelet[2703]: I0514 01:32:35.025612 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3a7cff6640286c79e470cf3c7b9f1e03-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-n-af44d751a9.novalocal\" (UID: \"3a7cff6640286c79e470cf3c7b9f1e03\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:35.025790 kubelet[2703]: I0514 01:32:35.025636 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3a7cff6640286c79e470cf3c7b9f1e03-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-n-af44d751a9.novalocal\" (UID: \"3a7cff6640286c79e470cf3c7b9f1e03\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:35.025790 kubelet[2703]: I0514 01:32:35.025656 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3a7cff6640286c79e470cf3c7b9f1e03-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-n-af44d751a9.novalocal\" (UID: \"3a7cff6640286c79e470cf3c7b9f1e03\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:35.025790 kubelet[2703]: I0514 01:32:35.025686 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f89cd22c8ef289dfc3ff5e34723df1c-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal\" (UID: \"9f89cd22c8ef289dfc3ff5e34723df1c\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:35.025790 kubelet[2703]: I0514 01:32:35.025707 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9f89cd22c8ef289dfc3ff5e34723df1c-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal\" (UID: \"9f89cd22c8ef289dfc3ff5e34723df1c\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:35.025790 kubelet[2703]: I0514 01:32:35.025726 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f89cd22c8ef289dfc3ff5e34723df1c-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal\" 
(UID: \"9f89cd22c8ef289dfc3ff5e34723df1c\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:35.029405 kubelet[2703]: I0514 01:32:35.029205 2703 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:35.044611 kubelet[2703]: I0514 01:32:35.044500 2703 kubelet_node_status.go:111] "Node was previously registered" node="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:35.044611 kubelet[2703]: I0514 01:32:35.044584 2703 kubelet_node_status.go:75] "Successfully registered node" node="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:35.802386 kubelet[2703]: I0514 01:32:35.802313 2703 apiserver.go:52] "Watching apiserver" May 14 01:32:35.824581 kubelet[2703]: I0514 01:32:35.824529 2703 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 01:32:35.895131 kubelet[2703]: W0514 01:32:35.894472 2703 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 01:32:35.895131 kubelet[2703]: E0514 01:32:35.894621 2703 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:35.897585 kubelet[2703]: W0514 01:32:35.897006 2703 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 01:32:35.900956 kubelet[2703]: E0514 01:32:35.899273 2703 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4284-0-0-n-af44d751a9.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:32:35.954670 kubelet[2703]: I0514 01:32:35.954385 2703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284-0-0-n-af44d751a9.novalocal" podStartSLOduration=1.9543705249999999 podStartE2EDuration="1.954370525s" podCreationTimestamp="2025-05-14 01:32:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 01:32:35.954091427 +0000 UTC m=+1.252277220" watchObservedRunningTime="2025-05-14 01:32:35.954370525 +0000 UTC m=+1.252556318" May 14 01:32:36.019654 kubelet[2703]: I0514 01:32:36.019571 2703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284-0-0-n-af44d751a9.novalocal" podStartSLOduration=2.01955431 podStartE2EDuration="2.01955431s" podCreationTimestamp="2025-05-14 01:32:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 01:32:36.019272859 +0000 UTC m=+1.317458652" watchObservedRunningTime="2025-05-14 01:32:36.01955431 +0000 UTC m=+1.317740103" May 14 01:32:36.112553 kubelet[2703]: I0514 01:32:36.112366 2703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-af44d751a9.novalocal" podStartSLOduration=4.112349767 podStartE2EDuration="4.112349767s" podCreationTimestamp="2025-05-14 01:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 01:32:36.079694435 +0000 UTC 
m=+1.377880238" watchObservedRunningTime="2025-05-14 01:32:36.112349767 +0000 UTC m=+1.410535571" May 14 01:32:39.462332 kubelet[2703]: I0514 01:32:39.462188 2703 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 01:32:39.463479 containerd[1483]: time="2025-05-14T01:32:39.462891782Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 01:32:39.463733 kubelet[2703]: I0514 01:32:39.463187 2703 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 01:32:40.157234 systemd[1]: Created slice kubepods-besteffort-poda53e373e_07bd_49c7_adf0_7a231af99aac.slice - libcontainer container kubepods-besteffort-poda53e373e_07bd_49c7_adf0_7a231af99aac.slice. May 14 01:32:40.172427 kubelet[2703]: I0514 01:32:40.172218 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a53e373e-07bd-49c7-adf0-7a231af99aac-lib-modules\") pod \"kube-proxy-8smtq\" (UID: \"a53e373e-07bd-49c7-adf0-7a231af99aac\") " pod="kube-system/kube-proxy-8smtq" May 14 01:32:40.172427 kubelet[2703]: I0514 01:32:40.172283 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n46zg\" (UniqueName: \"kubernetes.io/projected/a53e373e-07bd-49c7-adf0-7a231af99aac-kube-api-access-n46zg\") pod \"kube-proxy-8smtq\" (UID: \"a53e373e-07bd-49c7-adf0-7a231af99aac\") " pod="kube-system/kube-proxy-8smtq" May 14 01:32:40.172427 kubelet[2703]: I0514 01:32:40.172311 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a53e373e-07bd-49c7-adf0-7a231af99aac-kube-proxy\") pod \"kube-proxy-8smtq\" (UID: \"a53e373e-07bd-49c7-adf0-7a231af99aac\") " pod="kube-system/kube-proxy-8smtq" May 14 01:32:40.172427 kubelet[2703]: I0514 01:32:40.172376 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a53e373e-07bd-49c7-adf0-7a231af99aac-xtables-lock\") pod \"kube-proxy-8smtq\" (UID: \"a53e373e-07bd-49c7-adf0-7a231af99aac\") " pod="kube-system/kube-proxy-8smtq" May 14 01:32:40.292025 kubelet[2703]: E0514 01:32:40.290240 2703 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 14 01:32:40.292025 kubelet[2703]: E0514 01:32:40.290265 2703 projected.go:194] Error preparing data for projected volume kube-api-access-n46zg for pod kube-system/kube-proxy-8smtq: configmap "kube-root-ca.crt" not found May 14 01:32:40.292025 kubelet[2703]: E0514 01:32:40.290339 2703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a53e373e-07bd-49c7-adf0-7a231af99aac-kube-api-access-n46zg podName:a53e373e-07bd-49c7-adf0-7a231af99aac nodeName:}" failed. No retries permitted until 2025-05-14 01:32:40.790319148 +0000 UTC m=+6.088504941 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n46zg" (UniqueName: "kubernetes.io/projected/a53e373e-07bd-49c7-adf0-7a231af99aac-kube-api-access-n46zg") pod "kube-proxy-8smtq" (UID: "a53e373e-07bd-49c7-adf0-7a231af99aac") : configmap "kube-root-ca.crt" not found May 14 01:32:40.664543 systemd[1]: Created slice kubepods-besteffort-podc425151f_d3ce_41cf_8fa1_764639cbe9dc.slice - libcontainer container kubepods-besteffort-podc425151f_d3ce_41cf_8fa1_764639cbe9dc.slice. May 14 01:32:40.677666 kubelet[2703]: I0514 01:32:40.677588 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c425151f-d3ce-41cf-8fa1-764639cbe9dc-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-tstd6\" (UID: \"c425151f-d3ce-41cf-8fa1-764639cbe9dc\") " pod="tigera-operator/tigera-operator-6f6897fdc5-tstd6" May 14 01:32:40.677666 kubelet[2703]: I0514 01:32:40.677623 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zrm9\" (UniqueName: \"kubernetes.io/projected/c425151f-d3ce-41cf-8fa1-764639cbe9dc-kube-api-access-4zrm9\") pod \"tigera-operator-6f6897fdc5-tstd6\" (UID: \"c425151f-d3ce-41cf-8fa1-764639cbe9dc\") " pod="tigera-operator/tigera-operator-6f6897fdc5-tstd6" May 14 01:32:40.969150 containerd[1483]: time="2025-05-14T01:32:40.968609361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-tstd6,Uid:c425151f-d3ce-41cf-8fa1-764639cbe9dc,Namespace:tigera-operator,Attempt:0,}" May 14 01:32:41.071518 containerd[1483]: time="2025-05-14T01:32:41.071389137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8smtq,Uid:a53e373e-07bd-49c7-adf0-7a231af99aac,Namespace:kube-system,Attempt:0,}" May 14 01:32:41.371828 containerd[1483]: time="2025-05-14T01:32:41.371745084Z" level=info msg="connecting to shim 1906bf41ede8a2e7c0450d1196d2458a17632539dcf769b7aaa189d1e7f75a08" address="unix:///run/containerd/s/6b6163c020b849b4bcb4bc6f04ba0adeb14f27e95084a05751e3316715eaa67f" namespace=k8s.io protocol=ttrpc version=3 May 14 01:32:41.382951 containerd[1483]: time="2025-05-14T01:32:41.382867352Z" level=info msg="connecting to shim 2a1b0cd64e89498c5cb316d802f3e6de8673153ec01bb416c02e2d304afd3d00" address="unix:///run/containerd/s/89227311e4b134dfb7c2591bd27a9e99cd5b50f1f803ec95ad169741a1d86858" namespace=k8s.io protocol=ttrpc version=3 May 14 01:32:41.425270 systemd[1]: Started cri-containerd-1906bf41ede8a2e7c0450d1196d2458a17632539dcf769b7aaa189d1e7f75a08.scope - libcontainer container 1906bf41ede8a2e7c0450d1196d2458a17632539dcf769b7aaa189d1e7f75a08. May 14 01:32:41.430041 systemd[1]: Started cri-containerd-2a1b0cd64e89498c5cb316d802f3e6de8673153ec01bb416c02e2d304afd3d00.scope - libcontainer container 2a1b0cd64e89498c5cb316d802f3e6de8673153ec01bb416c02e2d304afd3d00. 
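The MountVolume.SetUp failure is an ordinary startup race rather than a kube-proxy problem: kube-api-access-* volumes are projected volumes combining the service-account token with the kube-root-ca.crt ConfigMap, and that ConfigMap only exists once the controller-manager's root-CA publisher has written it into the namespace, so the kubelet backs off the logged 500ms and mounts successfully on a later pass. Once published it can be confirmed with:

    kubectl -n kube-system get configmap kube-root-ca.crt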
May 14 01:32:41.465320 containerd[1483]: time="2025-05-14T01:32:41.465281480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8smtq,Uid:a53e373e-07bd-49c7-adf0-7a231af99aac,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a1b0cd64e89498c5cb316d802f3e6de8673153ec01bb416c02e2d304afd3d00\"" May 14 01:32:41.476535 containerd[1483]: time="2025-05-14T01:32:41.476502044Z" level=info msg="CreateContainer within sandbox \"2a1b0cd64e89498c5cb316d802f3e6de8673153ec01bb416c02e2d304afd3d00\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 01:32:41.493733 containerd[1483]: time="2025-05-14T01:32:41.493700350Z" level=info msg="Container 92ece7435bed4899f0ded86906c6773cabac66e6881cb54e60b651beb943646d: CDI devices from CRI Config.CDIDevices: []" May 14 01:32:41.495166 containerd[1483]: time="2025-05-14T01:32:41.494983550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-tstd6,Uid:c425151f-d3ce-41cf-8fa1-764639cbe9dc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1906bf41ede8a2e7c0450d1196d2458a17632539dcf769b7aaa189d1e7f75a08\"" May 14 01:32:41.496995 containerd[1483]: time="2025-05-14T01:32:41.496831414Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 14 01:32:41.507011 containerd[1483]: time="2025-05-14T01:32:41.506978186Z" level=info msg="CreateContainer within sandbox \"2a1b0cd64e89498c5cb316d802f3e6de8673153ec01bb416c02e2d304afd3d00\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"92ece7435bed4899f0ded86906c6773cabac66e6881cb54e60b651beb943646d\"" May 14 01:32:41.508793 containerd[1483]: time="2025-05-14T01:32:41.507705080Z" level=info msg="StartContainer for \"92ece7435bed4899f0ded86906c6773cabac66e6881cb54e60b651beb943646d\"" May 14 01:32:41.509325 containerd[1483]: time="2025-05-14T01:32:41.509266103Z" level=info msg="connecting to shim 92ece7435bed4899f0ded86906c6773cabac66e6881cb54e60b651beb943646d" address="unix:///run/containerd/s/89227311e4b134dfb7c2591bd27a9e99cd5b50f1f803ec95ad169741a1d86858" protocol=ttrpc version=3 May 14 01:32:41.529224 systemd[1]: Started cri-containerd-92ece7435bed4899f0ded86906c6773cabac66e6881cb54e60b651beb943646d.scope - libcontainer container 92ece7435bed4899f0ded86906c6773cabac66e6881cb54e60b651beb943646d. May 14 01:32:41.575055 containerd[1483]: time="2025-05-14T01:32:41.575005404Z" level=info msg="StartContainer for \"92ece7435bed4899f0ded86906c6773cabac66e6881cb54e60b651beb943646d\" returns successfully" May 14 01:32:42.597777 sudo[1754]: pam_unix(sudo:session): session closed for user root May 14 01:32:42.792275 sshd[1753]: Connection closed by 172.24.4.1 port 37904 May 14 01:32:42.793770 sshd-session[1734]: pam_unix(sshd:session): session closed for user core May 14 01:32:42.796900 systemd-logind[1457]: Session 11 logged out. Waiting for processes to exit. May 14 01:32:42.798737 systemd[1]: sshd@8-172.24.4.47:22-172.24.4.1:37904.service: Deactivated successfully. May 14 01:32:42.802825 systemd[1]: session-11.scope: Deactivated successfully. May 14 01:32:42.803289 systemd[1]: session-11.scope: Consumed 6.575s CPU time, 222.4M memory peak. May 14 01:32:42.809030 systemd-logind[1457]: Removed session 11. May 14 01:32:43.539979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2457421283.mount: Deactivated successfully. 
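[editor's note] The "connecting to shim" lines above show containerd's CRI flow: each RunPodSandbox starts a dedicated shim, the client talks to it over a per-sandbox ttrpc socket under /run/containerd/s/, and CreateContainer/StartContainer then run inside that sandbox. A read-only Go sketch for inspecting the result, hypothetical and assuming the conventional containerd management socket and the v1 Go client import path; the k8s.io namespace itself comes from the namespace=k8s.io field in the log.

package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd management socket (assumption; distro-dependent).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-created sandboxes and containers live in the k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		// On this node the IDs would include the sandbox and kube-proxy
		// container IDs seen above (2a1b0cd6..., 92ece743..., ...).
		log.Println(c.ID())
	}
}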
May 14 01:32:44.334696 kubelet[2703]: I0514 01:32:44.333468 2703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8smtq" podStartSLOduration=4.333449151 podStartE2EDuration="4.333449151s" podCreationTimestamp="2025-05-14 01:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 01:32:41.923247994 +0000 UTC m=+7.221433837" watchObservedRunningTime="2025-05-14 01:32:44.333449151 +0000 UTC m=+9.631634954" May 14 01:32:44.531614 containerd[1483]: time="2025-05-14T01:32:44.531555679Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:32:44.533123 containerd[1483]: time="2025-05-14T01:32:44.532964989Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 14 01:32:44.534798 containerd[1483]: time="2025-05-14T01:32:44.534700316Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:32:44.537921 containerd[1483]: time="2025-05-14T01:32:44.537843459Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:32:44.538912 containerd[1483]: time="2025-05-14T01:32:44.538743460Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 3.041878146s" May 14 01:32:44.538912 containerd[1483]: time="2025-05-14T01:32:44.538787722Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 14 01:32:44.542760 containerd[1483]: time="2025-05-14T01:32:44.542654681Z" level=info msg="CreateContainer within sandbox \"1906bf41ede8a2e7c0450d1196d2458a17632539dcf769b7aaa189d1e7f75a08\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 14 01:32:44.558705 containerd[1483]: time="2025-05-14T01:32:44.558645832Z" level=info msg="Container e8357cc6c0bca3915520ca574c548745fa547535d1fdc3b59fec1787b626e5f0: CDI devices from CRI Config.CDIDevices: []" May 14 01:32:44.570409 containerd[1483]: time="2025-05-14T01:32:44.570352197Z" level=info msg="CreateContainer within sandbox \"1906bf41ede8a2e7c0450d1196d2458a17632539dcf769b7aaa189d1e7f75a08\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e8357cc6c0bca3915520ca574c548745fa547535d1fdc3b59fec1787b626e5f0\"" May 14 01:32:44.572259 containerd[1483]: time="2025-05-14T01:32:44.572221323Z" level=info msg="StartContainer for \"e8357cc6c0bca3915520ca574c548745fa547535d1fdc3b59fec1787b626e5f0\"" May 14 01:32:44.573561 containerd[1483]: time="2025-05-14T01:32:44.573515283Z" level=info msg="connecting to shim e8357cc6c0bca3915520ca574c548745fa547535d1fdc3b59fec1787b626e5f0" address="unix:///run/containerd/s/6b6163c020b849b4bcb4bc6f04ba0adeb14f27e95084a05751e3316715eaa67f" protocol=ttrpc version=3 May 14 01:32:44.602224 systemd[1]: Started 
cri-containerd-e8357cc6c0bca3915520ca574c548745fa547535d1fdc3b59fec1787b626e5f0.scope - libcontainer container e8357cc6c0bca3915520ca574c548745fa547535d1fdc3b59fec1787b626e5f0. May 14 01:32:44.636700 containerd[1483]: time="2025-05-14T01:32:44.636645496Z" level=info msg="StartContainer for \"e8357cc6c0bca3915520ca574c548745fa547535d1fdc3b59fec1787b626e5f0\" returns successfully" May 14 01:32:44.946120 kubelet[2703]: I0514 01:32:44.945663 2703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-tstd6" podStartSLOduration=1.9014939229999999 podStartE2EDuration="4.945634311s" podCreationTimestamp="2025-05-14 01:32:40 +0000 UTC" firstStartedPulling="2025-05-14 01:32:41.49628603 +0000 UTC m=+6.794471834" lastFinishedPulling="2025-05-14 01:32:44.540426419 +0000 UTC m=+9.838612222" observedRunningTime="2025-05-14 01:32:44.943945681 +0000 UTC m=+10.242131524" watchObservedRunningTime="2025-05-14 01:32:44.945634311 +0000 UTC m=+10.243820154" May 14 01:32:48.107711 systemd[1]: Created slice kubepods-besteffort-pod45a8cbc9_d615_4021_8703_48c55f1863b5.slice - libcontainer container kubepods-besteffort-pod45a8cbc9_d615_4021_8703_48c55f1863b5.slice. May 14 01:32:48.131405 kubelet[2703]: I0514 01:32:48.131347 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vdcm\" (UniqueName: \"kubernetes.io/projected/45a8cbc9-d615-4021-8703-48c55f1863b5-kube-api-access-6vdcm\") pod \"calico-typha-7db5446874-zrr6j\" (UID: \"45a8cbc9-d615-4021-8703-48c55f1863b5\") " pod="calico-system/calico-typha-7db5446874-zrr6j" May 14 01:32:48.131405 kubelet[2703]: I0514 01:32:48.131397 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/45a8cbc9-d615-4021-8703-48c55f1863b5-typha-certs\") pod \"calico-typha-7db5446874-zrr6j\" (UID: \"45a8cbc9-d615-4021-8703-48c55f1863b5\") " pod="calico-system/calico-typha-7db5446874-zrr6j" May 14 01:32:48.131872 kubelet[2703]: I0514 01:32:48.131427 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45a8cbc9-d615-4021-8703-48c55f1863b5-tigera-ca-bundle\") pod \"calico-typha-7db5446874-zrr6j\" (UID: \"45a8cbc9-d615-4021-8703-48c55f1863b5\") " pod="calico-system/calico-typha-7db5446874-zrr6j" May 14 01:32:48.222087 systemd[1]: Created slice kubepods-besteffort-pod1fbbbf9e_2f60_4f11_80f4_007ebc302f1a.slice - libcontainer container kubepods-besteffort-pod1fbbbf9e_2f60_4f11_80f4_007ebc302f1a.slice. 
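[editor's note] The pod_startup_latency_tracker lines are worth decoding: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling); for kube-proxy, which pulled nothing, the two are therefore equal. A small Go check against the tigera-operator numbers above, using the timestamps exactly as the kubelet prints them; only the time layout constant is an assumption (it is Go's default time.Time formatting).

package main

import (
	"fmt"
	"time"
)

// Layout matching Go's default time.Time formatting used in the kubelet log.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-05-14 01:32:40 +0000 UTC")
	running := mustParse("2025-05-14 01:32:44.945634311 +0000 UTC")
	firstPull := mustParse("2025-05-14 01:32:41.49628603 +0000 UTC")
	lastPull := mustParse("2025-05-14 01:32:44.540426419 +0000 UTC")

	// E2E: creation -> observed running = 4.945634311s, as logged.
	fmt.Println("podStartE2EDuration:", running.Sub(created))
	// SLO: E2E minus the ~3.044s pull window = ~1.901493923s, as logged.
	fmt.Println("podStartSLOduration:", running.Sub(created)-lastPull.Sub(firstPull))
}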
May 14 01:32:48.232169 kubelet[2703]: I0514 01:32:48.232136 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1fbbbf9e-2f60-4f11-80f4-007ebc302f1a-var-run-calico\") pod \"calico-node-2kmgw\" (UID: \"1fbbbf9e-2f60-4f11-80f4-007ebc302f1a\") " pod="calico-system/calico-node-2kmgw" May 14 01:32:48.232450 kubelet[2703]: I0514 01:32:48.232397 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1fbbbf9e-2f60-4f11-80f4-007ebc302f1a-cni-net-dir\") pod \"calico-node-2kmgw\" (UID: \"1fbbbf9e-2f60-4f11-80f4-007ebc302f1a\") " pod="calico-system/calico-node-2kmgw" May 14 01:32:48.232568 kubelet[2703]: I0514 01:32:48.232554 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1fbbbf9e-2f60-4f11-80f4-007ebc302f1a-node-certs\") pod \"calico-node-2kmgw\" (UID: \"1fbbbf9e-2f60-4f11-80f4-007ebc302f1a\") " pod="calico-system/calico-node-2kmgw" May 14 01:32:48.233071 kubelet[2703]: I0514 01:32:48.233008 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1fbbbf9e-2f60-4f11-80f4-007ebc302f1a-xtables-lock\") pod \"calico-node-2kmgw\" (UID: \"1fbbbf9e-2f60-4f11-80f4-007ebc302f1a\") " pod="calico-system/calico-node-2kmgw" May 14 01:32:48.233071 kubelet[2703]: I0514 01:32:48.233038 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fbbbf9e-2f60-4f11-80f4-007ebc302f1a-tigera-ca-bundle\") pod \"calico-node-2kmgw\" (UID: \"1fbbbf9e-2f60-4f11-80f4-007ebc302f1a\") " pod="calico-system/calico-node-2kmgw" May 14 01:32:48.233353 kubelet[2703]: I0514 01:32:48.233242 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h4tp\" (UniqueName: \"kubernetes.io/projected/1fbbbf9e-2f60-4f11-80f4-007ebc302f1a-kube-api-access-4h4tp\") pod \"calico-node-2kmgw\" (UID: \"1fbbbf9e-2f60-4f11-80f4-007ebc302f1a\") " pod="calico-system/calico-node-2kmgw" May 14 01:32:48.233580 kubelet[2703]: I0514 01:32:48.233432 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1fbbbf9e-2f60-4f11-80f4-007ebc302f1a-cni-bin-dir\") pod \"calico-node-2kmgw\" (UID: \"1fbbbf9e-2f60-4f11-80f4-007ebc302f1a\") " pod="calico-system/calico-node-2kmgw" May 14 01:32:48.233673 kubelet[2703]: I0514 01:32:48.233659 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1fbbbf9e-2f60-4f11-80f4-007ebc302f1a-lib-modules\") pod \"calico-node-2kmgw\" (UID: \"1fbbbf9e-2f60-4f11-80f4-007ebc302f1a\") " pod="calico-system/calico-node-2kmgw" May 14 01:32:48.233823 kubelet[2703]: I0514 01:32:48.233770 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1fbbbf9e-2f60-4f11-80f4-007ebc302f1a-policysync\") pod \"calico-node-2kmgw\" (UID: \"1fbbbf9e-2f60-4f11-80f4-007ebc302f1a\") " pod="calico-system/calico-node-2kmgw" May 14 01:32:48.233966 kubelet[2703]: I0514 01:32:48.233890 2703 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1fbbbf9e-2f60-4f11-80f4-007ebc302f1a-var-lib-calico\") pod \"calico-node-2kmgw\" (UID: \"1fbbbf9e-2f60-4f11-80f4-007ebc302f1a\") " pod="calico-system/calico-node-2kmgw" May 14 01:32:48.233966 kubelet[2703]: I0514 01:32:48.233921 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1fbbbf9e-2f60-4f11-80f4-007ebc302f1a-cni-log-dir\") pod \"calico-node-2kmgw\" (UID: \"1fbbbf9e-2f60-4f11-80f4-007ebc302f1a\") " pod="calico-system/calico-node-2kmgw" May 14 01:32:48.235105 kubelet[2703]: I0514 01:32:48.233939 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1fbbbf9e-2f60-4f11-80f4-007ebc302f1a-flexvol-driver-host\") pod \"calico-node-2kmgw\" (UID: \"1fbbbf9e-2f60-4f11-80f4-007ebc302f1a\") " pod="calico-system/calico-node-2kmgw" May 14 01:32:48.335950 kubelet[2703]: E0514 01:32:48.335904 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.336222 kubelet[2703]: W0514 01:32:48.336201 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.336423 kubelet[2703]: E0514 01:32:48.336392 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.337210 kubelet[2703]: E0514 01:32:48.337195 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.337315 kubelet[2703]: W0514 01:32:48.337302 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.337477 kubelet[2703]: E0514 01:32:48.337463 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.337867 kubelet[2703]: E0514 01:32:48.337824 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.337925 kubelet[2703]: W0514 01:32:48.337867 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.337925 kubelet[2703]: E0514 01:32:48.337895 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:32:48.338145 kubelet[2703]: E0514 01:32:48.338127 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.338145 kubelet[2703]: W0514 01:32:48.338141 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.338213 kubelet[2703]: E0514 01:32:48.338152 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.338496 kubelet[2703]: E0514 01:32:48.338378 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.338496 kubelet[2703]: W0514 01:32:48.338392 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.338496 kubelet[2703]: E0514 01:32:48.338403 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.339007 kubelet[2703]: E0514 01:32:48.338810 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.339007 kubelet[2703]: W0514 01:32:48.338826 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.339007 kubelet[2703]: E0514 01:32:48.338842 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.341468 kubelet[2703]: E0514 01:32:48.341226 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.341468 kubelet[2703]: W0514 01:32:48.341251 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.341468 kubelet[2703]: E0514 01:32:48.341276 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.344114 kubelet[2703]: E0514 01:32:48.342037 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.344114 kubelet[2703]: W0514 01:32:48.342085 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.344114 kubelet[2703]: E0514 01:32:48.342131 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:32:48.344114 kubelet[2703]: E0514 01:32:48.343254 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.344114 kubelet[2703]: W0514 01:32:48.343265 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.344114 kubelet[2703]: E0514 01:32:48.343275 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.348377 kubelet[2703]: E0514 01:32:48.347884 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nj48f" podUID="0c9eb405-0467-45ae-98a3-2e9aaf13c0d6" May 14 01:32:48.349885 kubelet[2703]: E0514 01:32:48.349736 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.349885 kubelet[2703]: W0514 01:32:48.349765 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.349885 kubelet[2703]: E0514 01:32:48.349780 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.387373 kubelet[2703]: E0514 01:32:48.384275 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.387373 kubelet[2703]: W0514 01:32:48.384296 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.387373 kubelet[2703]: E0514 01:32:48.384316 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.414324 containerd[1483]: time="2025-05-14T01:32:48.414274619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7db5446874-zrr6j,Uid:45a8cbc9-d615-4021-8703-48c55f1863b5,Namespace:calico-system,Attempt:0,}" May 14 01:32:48.419830 kubelet[2703]: E0514 01:32:48.419800 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.419830 kubelet[2703]: W0514 01:32:48.419825 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.419830 kubelet[2703]: E0514 01:32:48.419844 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:32:48.420636 kubelet[2703]: E0514 01:32:48.420572 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.420636 kubelet[2703]: W0514 01:32:48.420583 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.420636 kubelet[2703]: E0514 01:32:48.420594 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.420852 kubelet[2703]: E0514 01:32:48.420803 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.420852 kubelet[2703]: W0514 01:32:48.420816 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.420852 kubelet[2703]: E0514 01:32:48.420845 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.421251 kubelet[2703]: E0514 01:32:48.421010 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.421251 kubelet[2703]: W0514 01:32:48.421023 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.421251 kubelet[2703]: E0514 01:32:48.421035 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.421251 kubelet[2703]: E0514 01:32:48.421223 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.421251 kubelet[2703]: W0514 01:32:48.421244 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.421251 kubelet[2703]: E0514 01:32:48.421254 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.423335 kubelet[2703]: E0514 01:32:48.421514 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.423335 kubelet[2703]: W0514 01:32:48.421525 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.423335 kubelet[2703]: E0514 01:32:48.421535 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:32:48.423335 kubelet[2703]: E0514 01:32:48.422127 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.423335 kubelet[2703]: W0514 01:32:48.422137 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.423335 kubelet[2703]: E0514 01:32:48.422148 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.423335 kubelet[2703]: E0514 01:32:48.422393 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.423335 kubelet[2703]: W0514 01:32:48.422402 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.423335 kubelet[2703]: E0514 01:32:48.422412 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.423335 kubelet[2703]: E0514 01:32:48.422655 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.423335 kubelet[2703]: W0514 01:32:48.422664 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.423335 kubelet[2703]: E0514 01:32:48.422674 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.423335 kubelet[2703]: E0514 01:32:48.423221 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.423335 kubelet[2703]: W0514 01:32:48.423231 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.423335 kubelet[2703]: E0514 01:32:48.423241 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.423716 kubelet[2703]: E0514 01:32:48.423423 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.423716 kubelet[2703]: W0514 01:32:48.423466 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.423716 kubelet[2703]: E0514 01:32:48.423479 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:32:48.424644 kubelet[2703]: E0514 01:32:48.424614 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.424644 kubelet[2703]: W0514 01:32:48.424629 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.424644 kubelet[2703]: E0514 01:32:48.424640 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.424874 kubelet[2703]: E0514 01:32:48.424842 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.424874 kubelet[2703]: W0514 01:32:48.424854 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.424874 kubelet[2703]: E0514 01:32:48.424864 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.425287 kubelet[2703]: E0514 01:32:48.425243 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.425287 kubelet[2703]: W0514 01:32:48.425266 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.425287 kubelet[2703]: E0514 01:32:48.425277 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.425476 kubelet[2703]: E0514 01:32:48.425459 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.425476 kubelet[2703]: W0514 01:32:48.425472 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.425713 kubelet[2703]: E0514 01:32:48.425483 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.425752 kubelet[2703]: E0514 01:32:48.425733 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.425752 kubelet[2703]: W0514 01:32:48.425743 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.425802 kubelet[2703]: E0514 01:32:48.425753 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:32:48.425963 kubelet[2703]: E0514 01:32:48.425947 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.425963 kubelet[2703]: W0514 01:32:48.425961 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.426221 kubelet[2703]: E0514 01:32:48.425970 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.426307 kubelet[2703]: E0514 01:32:48.426289 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.426307 kubelet[2703]: W0514 01:32:48.426299 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.426526 kubelet[2703]: E0514 01:32:48.426309 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.426693 kubelet[2703]: E0514 01:32:48.426675 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.426693 kubelet[2703]: W0514 01:32:48.426691 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.427153 kubelet[2703]: E0514 01:32:48.426706 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.427153 kubelet[2703]: E0514 01:32:48.426880 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.427153 kubelet[2703]: W0514 01:32:48.426888 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.427153 kubelet[2703]: E0514 01:32:48.426898 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.435512 kubelet[2703]: E0514 01:32:48.435473 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.435512 kubelet[2703]: W0514 01:32:48.435511 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.435681 kubelet[2703]: E0514 01:32:48.435536 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:32:48.435681 kubelet[2703]: I0514 01:32:48.435568 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0c9eb405-0467-45ae-98a3-2e9aaf13c0d6-registration-dir\") pod \"csi-node-driver-nj48f\" (UID: \"0c9eb405-0467-45ae-98a3-2e9aaf13c0d6\") " pod="calico-system/csi-node-driver-nj48f" May 14 01:32:48.436247 kubelet[2703]: E0514 01:32:48.435839 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.436247 kubelet[2703]: W0514 01:32:48.435856 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.436247 kubelet[2703]: E0514 01:32:48.435867 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.436247 kubelet[2703]: I0514 01:32:48.435884 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c9eb405-0467-45ae-98a3-2e9aaf13c0d6-kubelet-dir\") pod \"csi-node-driver-nj48f\" (UID: \"0c9eb405-0467-45ae-98a3-2e9aaf13c0d6\") " pod="calico-system/csi-node-driver-nj48f" May 14 01:32:48.436247 kubelet[2703]: E0514 01:32:48.436041 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.436247 kubelet[2703]: W0514 01:32:48.436053 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.436247 kubelet[2703]: E0514 01:32:48.436091 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.436247 kubelet[2703]: I0514 01:32:48.436116 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rlvp\" (UniqueName: \"kubernetes.io/projected/0c9eb405-0467-45ae-98a3-2e9aaf13c0d6-kube-api-access-6rlvp\") pod \"csi-node-driver-nj48f\" (UID: \"0c9eb405-0467-45ae-98a3-2e9aaf13c0d6\") " pod="calico-system/csi-node-driver-nj48f" May 14 01:32:48.437231 kubelet[2703]: E0514 01:32:48.437022 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.437231 kubelet[2703]: W0514 01:32:48.437102 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.437231 kubelet[2703]: E0514 01:32:48.437136 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:32:48.437717 kubelet[2703]: E0514 01:32:48.437521 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.437717 kubelet[2703]: W0514 01:32:48.437747 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.437717 kubelet[2703]: E0514 01:32:48.437782 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.438740 kubelet[2703]: E0514 01:32:48.438638 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.438740 kubelet[2703]: W0514 01:32:48.438652 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.438740 kubelet[2703]: E0514 01:32:48.438728 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.439007 kubelet[2703]: E0514 01:32:48.438991 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.439232 kubelet[2703]: W0514 01:32:48.439106 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.439232 kubelet[2703]: E0514 01:32:48.439184 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.439232 kubelet[2703]: I0514 01:32:48.439216 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0c9eb405-0467-45ae-98a3-2e9aaf13c0d6-varrun\") pod \"csi-node-driver-nj48f\" (UID: \"0c9eb405-0467-45ae-98a3-2e9aaf13c0d6\") " pod="calico-system/csi-node-driver-nj48f" May 14 01:32:48.439536 kubelet[2703]: E0514 01:32:48.439523 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.441615 kubelet[2703]: W0514 01:32:48.439692 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.441615 kubelet[2703]: E0514 01:32:48.439727 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:32:48.445174 kubelet[2703]: E0514 01:32:48.445146 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.445817 kubelet[2703]: W0514 01:32:48.445602 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.445817 kubelet[2703]: E0514 01:32:48.445634 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.446390 kubelet[2703]: E0514 01:32:48.446274 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.446390 kubelet[2703]: W0514 01:32:48.446290 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.446390 kubelet[2703]: E0514 01:32:48.446308 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.448622 kubelet[2703]: E0514 01:32:48.448192 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.448622 kubelet[2703]: W0514 01:32:48.448213 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.448622 kubelet[2703]: E0514 01:32:48.448231 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.448622 kubelet[2703]: E0514 01:32:48.448479 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.448622 kubelet[2703]: W0514 01:32:48.448490 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.448622 kubelet[2703]: E0514 01:32:48.448501 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.449981 kubelet[2703]: E0514 01:32:48.449627 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.449981 kubelet[2703]: W0514 01:32:48.449641 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.449981 kubelet[2703]: E0514 01:32:48.449759 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:32:48.449981 kubelet[2703]: I0514 01:32:48.449801 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0c9eb405-0467-45ae-98a3-2e9aaf13c0d6-socket-dir\") pod \"csi-node-driver-nj48f\" (UID: \"0c9eb405-0467-45ae-98a3-2e9aaf13c0d6\") " pod="calico-system/csi-node-driver-nj48f" May 14 01:32:48.451120 kubelet[2703]: E0514 01:32:48.450259 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.451120 kubelet[2703]: W0514 01:32:48.450274 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.451120 kubelet[2703]: E0514 01:32:48.450286 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.451420 kubelet[2703]: E0514 01:32:48.451359 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.451420 kubelet[2703]: W0514 01:32:48.451375 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.451420 kubelet[2703]: E0514 01:32:48.451390 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.460047 containerd[1483]: time="2025-05-14T01:32:48.459795157Z" level=info msg="connecting to shim f9eade5d54a0b99fe737400499d690534f2e138f7f01552b5fb66f9868aa2144" address="unix:///run/containerd/s/85237b79660fde33513d6c6319ed9e64fd4d9ad21e27d211e3a1938cd1c62775" namespace=k8s.io protocol=ttrpc version=3 May 14 01:32:48.490733 systemd[1]: Started cri-containerd-f9eade5d54a0b99fe737400499d690534f2e138f7f01552b5fb66f9868aa2144.scope - libcontainer container f9eade5d54a0b99fe737400499d690534f2e138f7f01552b5fb66f9868aa2144. May 14 01:32:48.527829 containerd[1483]: time="2025-05-14T01:32:48.527703823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2kmgw,Uid:1fbbbf9e-2f60-4f11-80f4-007ebc302f1a,Namespace:calico-system,Attempt:0,}" May 14 01:32:48.552627 kubelet[2703]: E0514 01:32:48.552412 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.552627 kubelet[2703]: W0514 01:32:48.552432 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.552627 kubelet[2703]: E0514 01:32:48.552463 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:32:48.553835 kubelet[2703]: E0514 01:32:48.552814 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.553835 kubelet[2703]: W0514 01:32:48.552849 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.553835 kubelet[2703]: E0514 01:32:48.552861 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.553835 kubelet[2703]: E0514 01:32:48.553048 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.553835 kubelet[2703]: W0514 01:32:48.553057 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.553835 kubelet[2703]: E0514 01:32:48.553091 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.553835 kubelet[2703]: E0514 01:32:48.553616 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.553835 kubelet[2703]: W0514 01:32:48.553627 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.553835 kubelet[2703]: E0514 01:32:48.553690 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.554473 kubelet[2703]: E0514 01:32:48.553936 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.554473 kubelet[2703]: W0514 01:32:48.553945 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.554473 kubelet[2703]: E0514 01:32:48.553962 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:32:48.554473 kubelet[2703]: E0514 01:32:48.554216 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:48.554473 kubelet[2703]: W0514 01:32:48.554226 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:48.554717 kubelet[2703]: E0514 01:32:48.554644 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 14 01:32:48.555017 kubelet[2703]: E0514 01:32:48.554978 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 01:32:48.555017 kubelet[2703]: W0514 01:32:48.554992 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 01:32:48.555419 kubelet[2703]: E0514 01:32:48.555176 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
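The three kubelet records above are a single FlexVolume probe cycle, and the same triplet recurs on each probe pass during this boot (further bursts follow at 01:32:48.58, 01:32:51.95, 01:32:52.97 and 01:32:53.00 below): kubelet finds the plugin directory nodeagent~uds, execs its uds binary with the init argument, gets empty output because the binary has not been installed yet, and unmarshaling "" as JSON fails with "unexpected end of JSON input". For orientation, here is a minimal sketch of the driver call convention kubelet is exercising; the stub follows the documented FlexVolume contract but is illustrative only, not the real Calico uds driver.

#!/usr/bin/env python3
# Illustrative stub of the FlexVolume driver call convention seen in the
# kubelet records above (NOT the real Calico uds driver). kubelet execs
# the driver as `<driver> init` and parses its stdout as JSON; a missing
# binary yields empty output, hence "unexpected end of JSON input".
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # A well-formed init response per the FlexVolume contract.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Operations this stub does not implement.
    print(json.dumps({"status": "Not supported"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())

Calico normally installs the real driver via the flexvol-driver container that starts at 01:32:53 below, after which these probe errors stop.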
May 14 01:32:48.583488 containerd[1483]: time="2025-05-14T01:32:48.583321942Z" level=info msg="connecting to shim b8bc851be4e92a52acbfda6a4ad891593dfd8e3cd1088a50409b6b17c1dcacfa" address="unix:///run/containerd/s/04150baaf3b670a3752b057412bdbfb8ddf1b121c7e5977181c06ca34b6bdb6f" namespace=k8s.io protocol=ttrpc version=3
May 14 01:32:48.588206 containerd[1483]: time="2025-05-14T01:32:48.587256987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7db5446874-zrr6j,Uid:45a8cbc9-d615-4021-8703-48c55f1863b5,Namespace:calico-system,Attempt:0,} returns sandbox id \"f9eade5d54a0b99fe737400499d690534f2e138f7f01552b5fb66f9868aa2144\""
May 14 01:32:48.588322 kubelet[2703]: E0514 01:32:48.587905 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 01:32:48.588322 kubelet[2703]: W0514 01:32:48.587924 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 01:32:48.588322 kubelet[2703]: E0514 01:32:48.587946 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 01:32:48.593810 containerd[1483]: time="2025-05-14T01:32:48.593763450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\""
May 14 01:32:48.637292 systemd[1]: Started cri-containerd-b8bc851be4e92a52acbfda6a4ad891593dfd8e3cd1088a50409b6b17c1dcacfa.scope - libcontainer container b8bc851be4e92a52acbfda6a4ad891593dfd8e3cd1088a50409b6b17c1dcacfa.
May 14 01:32:48.683753 containerd[1483]: time="2025-05-14T01:32:48.683538489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2kmgw,Uid:1fbbbf9e-2f60-4f11-80f4-007ebc302f1a,Namespace:calico-system,Attempt:0,} returns sandbox id \"b8bc851be4e92a52acbfda6a4ad891593dfd8e3cd1088a50409b6b17c1dcacfa\""
May 14 01:32:49.831390 kubelet[2703]: E0514 01:32:49.830714 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nj48f" podUID="0c9eb405-0467-45ae-98a3-2e9aaf13c0d6"
May 14 01:32:51.715303 containerd[1483]: time="2025-05-14T01:32:51.714622064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 01:32:51.716698 containerd[1483]: time="2025-05-14T01:32:51.716637614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870"
May 14 01:32:51.718140 containerd[1483]: time="2025-05-14T01:32:51.718091642Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 01:32:51.720638 containerd[1483]: time="2025-05-14T01:32:51.720591785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 01:32:51.721637 containerd[1483]: time="2025-05-14T01:32:51.721229541Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 3.127418794s"
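The 3.127418794s pull time reported above is consistent with containerd's own log timestamps: the PullImage request was logged at 01:32:48.593763450Z and the Pulled record at 01:32:51.721229541Z, roughly 3.1275s apart. A quick cross-check, illustrative only and using just the two timestamps copied from the records above:

# Cross-check (illustrative): the reported pull duration should roughly
# match the gap between the PullImage and Pulled log timestamps above.
# Fractions are truncated to microseconds for datetime.fromisoformat.
from datetime import datetime

start = datetime.fromisoformat("2025-05-14T01:32:48.593763450"[:26])  # PullImage logged
done = datetime.fromisoformat("2025-05-14T01:32:51.721229541"[:26])   # Pulled logged
print((done - start).total_seconds())  # ~3.127466, vs. the reported 3.127418794s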
\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 3.127418794s" May 14 01:32:51.721637 containerd[1483]: time="2025-05-14T01:32:51.721268971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 14 01:32:51.722574 containerd[1483]: time="2025-05-14T01:32:51.722204267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 14 01:32:51.734783 containerd[1483]: time="2025-05-14T01:32:51.734742804Z" level=info msg="CreateContainer within sandbox \"f9eade5d54a0b99fe737400499d690534f2e138f7f01552b5fb66f9868aa2144\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 14 01:32:51.748847 containerd[1483]: time="2025-05-14T01:32:51.748815594Z" level=info msg="Container 7e42388305944b3f7fe620fb7fe1a9ec24cd3d1abd1f39ddebc36f88ae9d0c52: CDI devices from CRI Config.CDIDevices: []" May 14 01:32:51.765187 containerd[1483]: time="2025-05-14T01:32:51.765131540Z" level=info msg="CreateContainer within sandbox \"f9eade5d54a0b99fe737400499d690534f2e138f7f01552b5fb66f9868aa2144\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7e42388305944b3f7fe620fb7fe1a9ec24cd3d1abd1f39ddebc36f88ae9d0c52\"" May 14 01:32:51.765811 containerd[1483]: time="2025-05-14T01:32:51.765785790Z" level=info msg="StartContainer for \"7e42388305944b3f7fe620fb7fe1a9ec24cd3d1abd1f39ddebc36f88ae9d0c52\"" May 14 01:32:51.767300 containerd[1483]: time="2025-05-14T01:32:51.767262866Z" level=info msg="connecting to shim 7e42388305944b3f7fe620fb7fe1a9ec24cd3d1abd1f39ddebc36f88ae9d0c52" address="unix:///run/containerd/s/85237b79660fde33513d6c6319ed9e64fd4d9ad21e27d211e3a1938cd1c62775" protocol=ttrpc version=3 May 14 01:32:51.791226 systemd[1]: Started cri-containerd-7e42388305944b3f7fe620fb7fe1a9ec24cd3d1abd1f39ddebc36f88ae9d0c52.scope - libcontainer container 7e42388305944b3f7fe620fb7fe1a9ec24cd3d1abd1f39ddebc36f88ae9d0c52. May 14 01:32:51.831649 kubelet[2703]: E0514 01:32:51.829920 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nj48f" podUID="0c9eb405-0467-45ae-98a3-2e9aaf13c0d6" May 14 01:32:51.848516 containerd[1483]: time="2025-05-14T01:32:51.848463187Z" level=info msg="StartContainer for \"7e42388305944b3f7fe620fb7fe1a9ec24cd3d1abd1f39ddebc36f88ae9d0c52\" returns successfully" May 14 01:32:51.954832 kubelet[2703]: E0514 01:32:51.954726 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:32:51.954832 kubelet[2703]: W0514 01:32:51.954751 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:32:51.954832 kubelet[2703]: E0514 01:32:51.954769 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
May 14 01:32:52.951568 kubelet[2703]: I0514 01:32:52.951509 2703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 01:32:52.973233 kubelet[2703]: E0514 01:32:52.973152 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 01:32:52.973233 kubelet[2703]: W0514 01:32:52.973190 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 01:32:52.973233 kubelet[2703]: E0514 01:32:52.973226 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 01:32:53.830348 kubelet[2703]: E0514 01:32:53.830265 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nj48f" podUID="0c9eb405-0467-45ae-98a3-2e9aaf13c0d6"
May 14 01:32:53.925420 containerd[1483]: time="2025-05-14T01:32:53.925122833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 01:32:53.929010 containerd[1483]: time="2025-05-14T01:32:53.928834052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937"
May 14 01:32:53.931753 containerd[1483]: time="2025-05-14T01:32:53.931395634Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 01:32:53.934028 containerd[1483]: time="2025-05-14T01:32:53.933968418Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 01:32:53.934636 containerd[1483]: time="2025-05-14T01:32:53.934596150Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 2.212363475s"
May 14 01:32:53.934636 containerd[1483]: time="2025-05-14T01:32:53.934634077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\""
May 14 01:32:53.937459 containerd[1483]: time="2025-05-14T01:32:53.937404886Z" level=info msg="CreateContainer within sandbox \"b8bc851be4e92a52acbfda6a4ad891593dfd8e3cd1088a50409b6b17c1dcacfa\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
May 14 01:32:53.955104 containerd[1483]: time="2025-05-14T01:32:53.953365142Z" level=info msg="Container f53c8afee45e33d85e88cc393a587cea34ad980ac476b36381a260d73be0c040: CDI devices from CRI Config.CDIDevices: []"
May 14 01:32:53.984799 containerd[1483]: time="2025-05-14T01:32:53.984695235Z" level=info msg="CreateContainer within sandbox \"b8bc851be4e92a52acbfda6a4ad891593dfd8e3cd1088a50409b6b17c1dcacfa\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f53c8afee45e33d85e88cc393a587cea34ad980ac476b36381a260d73be0c040\""
returns container id \"f53c8afee45e33d85e88cc393a587cea34ad980ac476b36381a260d73be0c040\"" May 14 01:32:53.986566 containerd[1483]: time="2025-05-14T01:32:53.986478678Z" level=info msg="StartContainer for \"f53c8afee45e33d85e88cc393a587cea34ad980ac476b36381a260d73be0c040\"" May 14 01:32:53.994015 containerd[1483]: time="2025-05-14T01:32:53.993925005Z" level=info msg="connecting to shim f53c8afee45e33d85e88cc393a587cea34ad980ac476b36381a260d73be0c040" address="unix:///run/containerd/s/04150baaf3b670a3752b057412bdbfb8ddf1b121c7e5977181c06ca34b6bdb6f" protocol=ttrpc version=3 May 14 01:32:54.025350 systemd[1]: Started cri-containerd-f53c8afee45e33d85e88cc393a587cea34ad980ac476b36381a260d73be0c040.scope - libcontainer container f53c8afee45e33d85e88cc393a587cea34ad980ac476b36381a260d73be0c040. May 14 01:32:54.084289 containerd[1483]: time="2025-05-14T01:32:54.084039187Z" level=info msg="StartContainer for \"f53c8afee45e33d85e88cc393a587cea34ad980ac476b36381a260d73be0c040\" returns successfully" May 14 01:32:54.094973 systemd[1]: cri-containerd-f53c8afee45e33d85e88cc393a587cea34ad980ac476b36381a260d73be0c040.scope: Deactivated successfully. May 14 01:32:54.098717 containerd[1483]: time="2025-05-14T01:32:54.098550321Z" level=info msg="received exit event container_id:\"f53c8afee45e33d85e88cc393a587cea34ad980ac476b36381a260d73be0c040\" id:\"f53c8afee45e33d85e88cc393a587cea34ad980ac476b36381a260d73be0c040\" pid:3382 exited_at:{seconds:1747186374 nanos:98135445}" May 14 01:32:54.099970 containerd[1483]: time="2025-05-14T01:32:54.099899523Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f53c8afee45e33d85e88cc393a587cea34ad980ac476b36381a260d73be0c040\" id:\"f53c8afee45e33d85e88cc393a587cea34ad980ac476b36381a260d73be0c040\" pid:3382 exited_at:{seconds:1747186374 nanos:98135445}" May 14 01:32:54.130292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f53c8afee45e33d85e88cc393a587cea34ad980ac476b36381a260d73be0c040-rootfs.mount: Deactivated successfully. 
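
Annotation: the FlexVolume churn above is mechanical. On every plugin probe the kubelet execs each driver it finds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the argument "init" and expects a small JSON status object on stdout. The nodeagent~uds/uds binary does not exist on disk yet, so the exec fails, stdout stays empty, and unmarshalling the empty string yields "unexpected end of JSON input". The flexvol-driver container whose lifecycle is logged just above (Calico's pod2daemon-flexvol image) is what installs that binary, after which the probe errors stop. A minimal sketch of the handshake, assuming the driver path quoted in the log; this approximates what kubelet's driver-call.go does, it is not the actual source:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // DriverStatus mirrors the minimal JSON a FlexVolume driver must print.
    type DriverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    // callDriver execs the driver binary and decodes its JSON reply.
    func callDriver(driver string, args ...string) (*DriverStatus, error) {
        out, err := exec.Command(driver, args...).CombinedOutput()
        if err != nil {
            // "executable file not found in $PATH" surfaces here.
            return nil, fmt.Errorf("driver call failed: %w, output: %q", err, out)
        }
        var st DriverStatus
        // Empty stdout is exactly what produces "unexpected end of JSON input" above.
        if err := json.Unmarshal(out, &st); err != nil {
            return nil, fmt.Errorf("failed to unmarshal output for command %s: %w", args[0], err)
        }
        return &st, nil
    }

    func main() {
        st, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("driver status:", st.Status)
    }
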
May 14 01:32:54.972155 containerd[1483]: time="2025-05-14T01:32:54.971254964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 14 01:32:55.022391 kubelet[2703]: I0514 01:32:55.021586 2703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7db5446874-zrr6j" podStartSLOduration=3.892416236 podStartE2EDuration="7.021380353s" podCreationTimestamp="2025-05-14 01:32:48 +0000 UTC" firstStartedPulling="2025-05-14 01:32:48.593111006 +0000 UTC m=+13.891296809" lastFinishedPulling="2025-05-14 01:32:51.722075133 +0000 UTC m=+17.020260926" observedRunningTime="2025-05-14 01:32:51.990701031 +0000 UTC m=+17.288886824" watchObservedRunningTime="2025-05-14 01:32:55.021380353 +0000 UTC m=+20.319566196" May 14 01:32:55.830871 kubelet[2703]: E0514 01:32:55.830235 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nj48f" podUID="0c9eb405-0467-45ae-98a3-2e9aaf13c0d6" May 14 01:32:56.965134 kubelet[2703]: I0514 01:32:56.964669 2703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 01:32:57.830214 kubelet[2703]: E0514 01:32:57.830161 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nj48f" podUID="0c9eb405-0467-45ae-98a3-2e9aaf13c0d6" May 14 01:32:59.829765 kubelet[2703]: E0514 01:32:59.829673 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nj48f" podUID="0c9eb405-0467-45ae-98a3-2e9aaf13c0d6" May 14 01:33:01.365289 containerd[1483]: time="2025-05-14T01:33:01.365124572Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:33:01.367821 containerd[1483]: time="2025-05-14T01:33:01.366677163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 14 01:33:01.368597 containerd[1483]: time="2025-05-14T01:33:01.368537358Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:33:01.375047 containerd[1483]: time="2025-05-14T01:33:01.374970355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:33:01.375436 containerd[1483]: time="2025-05-14T01:33:01.375381898Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 6.401887307s" May 14 01:33:01.375436 containerd[1483]: time="2025-05-14T01:33:01.375420827Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 14 01:33:01.382824 containerd[1483]: time="2025-05-14T01:33:01.382757643Z" level=info msg="CreateContainer within sandbox \"b8bc851be4e92a52acbfda6a4ad891593dfd8e3cd1088a50409b6b17c1dcacfa\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 01:33:01.399983 containerd[1483]: time="2025-05-14T01:33:01.398167546Z" level=info msg="Container 508f966fb1de863515b9a44e83d4910bad873ef75d77f3766dd257e9c352bfc1: CDI devices from CRI Config.CDIDevices: []" May 14 01:33:01.418547 containerd[1483]: time="2025-05-14T01:33:01.418409277Z" level=info msg="CreateContainer within sandbox \"b8bc851be4e92a52acbfda6a4ad891593dfd8e3cd1088a50409b6b17c1dcacfa\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"508f966fb1de863515b9a44e83d4910bad873ef75d77f3766dd257e9c352bfc1\"" May 14 01:33:01.420167 containerd[1483]: time="2025-05-14T01:33:01.419323847Z" level=info msg="StartContainer for \"508f966fb1de863515b9a44e83d4910bad873ef75d77f3766dd257e9c352bfc1\"" May 14 01:33:01.422619 containerd[1483]: time="2025-05-14T01:33:01.422512259Z" level=info msg="connecting to shim 508f966fb1de863515b9a44e83d4910bad873ef75d77f3766dd257e9c352bfc1" address="unix:///run/containerd/s/04150baaf3b670a3752b057412bdbfb8ddf1b121c7e5977181c06ca34b6bdb6f" protocol=ttrpc version=3 May 14 01:33:01.451408 systemd[1]: Started cri-containerd-508f966fb1de863515b9a44e83d4910bad873ef75d77f3766dd257e9c352bfc1.scope - libcontainer container 508f966fb1de863515b9a44e83d4910bad873ef75d77f3766dd257e9c352bfc1. May 14 01:33:01.508193 containerd[1483]: time="2025-05-14T01:33:01.507933423Z" level=info msg="StartContainer for \"508f966fb1de863515b9a44e83d4910bad873ef75d77f3766dd257e9c352bfc1\" returns successfully" May 14 01:33:01.830638 kubelet[2703]: E0514 01:33:01.830310 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nj48f" podUID="0c9eb405-0467-45ae-98a3-2e9aaf13c0d6" May 14 01:33:02.692037 systemd[1]: cri-containerd-508f966fb1de863515b9a44e83d4910bad873ef75d77f3766dd257e9c352bfc1.scope: Deactivated successfully. May 14 01:33:02.692905 systemd[1]: cri-containerd-508f966fb1de863515b9a44e83d4910bad873ef75d77f3766dd257e9c352bfc1.scope: Consumed 692ms CPU time, 175.7M memory peak, 154M written to disk. May 14 01:33:02.696228 containerd[1483]: time="2025-05-14T01:33:02.695708285Z" level=info msg="received exit event container_id:\"508f966fb1de863515b9a44e83d4910bad873ef75d77f3766dd257e9c352bfc1\" id:\"508f966fb1de863515b9a44e83d4910bad873ef75d77f3766dd257e9c352bfc1\" pid:3441 exited_at:{seconds:1747186382 nanos:694775550}" May 14 01:33:02.697046 containerd[1483]: time="2025-05-14T01:33:02.696941840Z" level=info msg="TaskExit event in podsandbox handler container_id:\"508f966fb1de863515b9a44e83d4910bad873ef75d77f3766dd257e9c352bfc1\" id:\"508f966fb1de863515b9a44e83d4910bad873ef75d77f3766dd257e9c352bfc1\" pid:3441 exited_at:{seconds:1747186382 nanos:694775550}" May 14 01:33:02.729897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-508f966fb1de863515b9a44e83d4910bad873ef75d77f3766dd257e9c352bfc1-rootfs.mount: Deactivated successfully. 
May 14 01:33:02.760784 kubelet[2703]: I0514 01:33:02.760701 2703 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 14 01:33:03.143975 systemd[1]: Created slice kubepods-burstable-pod6543ad7a_eb37_4f39_80dc_d640e5e21906.slice - libcontainer container kubepods-burstable-pod6543ad7a_eb37_4f39_80dc_d640e5e21906.slice. May 14 01:33:03.171658 kubelet[2703]: I0514 01:33:03.171552 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97bl9\" (UniqueName: \"kubernetes.io/projected/6543ad7a-eb37-4f39-80dc-d640e5e21906-kube-api-access-97bl9\") pod \"coredns-6f6b679f8f-ft6xv\" (UID: \"6543ad7a-eb37-4f39-80dc-d640e5e21906\") " pod="kube-system/coredns-6f6b679f8f-ft6xv" May 14 01:33:03.171658 kubelet[2703]: I0514 01:33:03.171611 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6543ad7a-eb37-4f39-80dc-d640e5e21906-config-volume\") pod \"coredns-6f6b679f8f-ft6xv\" (UID: \"6543ad7a-eb37-4f39-80dc-d640e5e21906\") " pod="kube-system/coredns-6f6b679f8f-ft6xv" May 14 01:33:03.408998 systemd[1]: Created slice kubepods-besteffort-pod8cc13bda_6d3e_4d99_9e7c_99c5a21f3e9d.slice - libcontainer container kubepods-besteffort-pod8cc13bda_6d3e_4d99_9e7c_99c5a21f3e9d.slice. May 14 01:33:03.438756 systemd[1]: Created slice kubepods-burstable-pod699fc6c2_1efe_46a9_8707_c8d12e1267b3.slice - libcontainer container kubepods-burstable-pod699fc6c2_1efe_46a9_8707_c8d12e1267b3.slice. May 14 01:33:03.450996 systemd[1]: Created slice kubepods-besteffort-pod75f313c4_ceb1_461e_90dc_dfe96fb4abb2.slice - libcontainer container kubepods-besteffort-pod75f313c4_ceb1_461e_90dc_dfe96fb4abb2.slice. May 14 01:33:03.462199 systemd[1]: Created slice kubepods-besteffort-pod9f48ef50_86f3_4b80_bf67_25a2deb3788b.slice - libcontainer container kubepods-besteffort-pod9f48ef50_86f3_4b80_bf67_25a2deb3788b.slice. 
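
Annotation: with the node flipped to Ready, the pending pods get scheduled and the kubelet's systemd cgroup driver creates one transient slice per pod. The slice names above encode the pod's QoS class plus its UID with dashes mapped to underscores, because "-" denotes hierarchy in systemd slice unit names. A small sketch reproducing the naming visible in the log; this illustrates the convention, it is not kubelet code:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice rebuilds the transient slice name from QoS class and pod UID.
    func podSlice(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        // Matches the slices created above for coredns and calico-kube-controllers.
        fmt.Println(podSlice("burstable", "6543ad7a-eb37-4f39-80dc-d640e5e21906"))
        fmt.Println(podSlice("besteffort", "8cc13bda-6d3e-4d99-9e7c-99c5a21f3e9d"))
    }
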
May 14 01:33:03.474953 kubelet[2703]: I0514 01:33:03.474919 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml52g\" (UniqueName: \"kubernetes.io/projected/75f313c4-ceb1-461e-90dc-dfe96fb4abb2-kube-api-access-ml52g\") pod \"calico-apiserver-67c9fcd687-nm95d\" (UID: \"75f313c4-ceb1-461e-90dc-dfe96fb4abb2\") " pod="calico-apiserver/calico-apiserver-67c9fcd687-nm95d" May 14 01:33:03.475136 kubelet[2703]: I0514 01:33:03.475105 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m99xm\" (UniqueName: \"kubernetes.io/projected/699fc6c2-1efe-46a9-8707-c8d12e1267b3-kube-api-access-m99xm\") pod \"coredns-6f6b679f8f-6clwr\" (UID: \"699fc6c2-1efe-46a9-8707-c8d12e1267b3\") " pod="kube-system/coredns-6f6b679f8f-6clwr" May 14 01:33:03.475385 kubelet[2703]: I0514 01:33:03.475279 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/699fc6c2-1efe-46a9-8707-c8d12e1267b3-config-volume\") pod \"coredns-6f6b679f8f-6clwr\" (UID: \"699fc6c2-1efe-46a9-8707-c8d12e1267b3\") " pod="kube-system/coredns-6f6b679f8f-6clwr" May 14 01:33:03.475385 kubelet[2703]: I0514 01:33:03.475334 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cc13bda-6d3e-4d99-9e7c-99c5a21f3e9d-tigera-ca-bundle\") pod \"calico-kube-controllers-c8c97975-9bk6f\" (UID: \"8cc13bda-6d3e-4d99-9e7c-99c5a21f3e9d\") " pod="calico-system/calico-kube-controllers-c8c97975-9bk6f" May 14 01:33:03.475574 kubelet[2703]: I0514 01:33:03.475365 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ldsx\" (UniqueName: \"kubernetes.io/projected/8cc13bda-6d3e-4d99-9e7c-99c5a21f3e9d-kube-api-access-5ldsx\") pod \"calico-kube-controllers-c8c97975-9bk6f\" (UID: \"8cc13bda-6d3e-4d99-9e7c-99c5a21f3e9d\") " pod="calico-system/calico-kube-controllers-c8c97975-9bk6f" May 14 01:33:03.475574 kubelet[2703]: I0514 01:33:03.475538 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/75f313c4-ceb1-461e-90dc-dfe96fb4abb2-calico-apiserver-certs\") pod \"calico-apiserver-67c9fcd687-nm95d\" (UID: \"75f313c4-ceb1-461e-90dc-dfe96fb4abb2\") " pod="calico-apiserver/calico-apiserver-67c9fcd687-nm95d" May 14 01:33:03.475770 kubelet[2703]: I0514 01:33:03.475683 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9f48ef50-86f3-4b80-bf67-25a2deb3788b-calico-apiserver-certs\") pod \"calico-apiserver-67c9fcd687-z2wlk\" (UID: \"9f48ef50-86f3-4b80-bf67-25a2deb3788b\") " pod="calico-apiserver/calico-apiserver-67c9fcd687-z2wlk" May 14 01:33:03.475770 kubelet[2703]: I0514 01:33:03.475715 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xhhj\" (UniqueName: \"kubernetes.io/projected/9f48ef50-86f3-4b80-bf67-25a2deb3788b-kube-api-access-9xhhj\") pod \"calico-apiserver-67c9fcd687-z2wlk\" (UID: \"9f48ef50-86f3-4b80-bf67-25a2deb3788b\") " pod="calico-apiserver/calico-apiserver-67c9fcd687-z2wlk" May 14 01:33:03.720806 containerd[1483]: time="2025-05-14T01:33:03.720616542Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-c8c97975-9bk6f,Uid:8cc13bda-6d3e-4d99-9e7c-99c5a21f3e9d,Namespace:calico-system,Attempt:0,}" May 14 01:33:03.760863 containerd[1483]: time="2025-05-14T01:33:03.760758107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ft6xv,Uid:6543ad7a-eb37-4f39-80dc-d640e5e21906,Namespace:kube-system,Attempt:0,}" May 14 01:33:03.768749 containerd[1483]: time="2025-05-14T01:33:03.760819671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6clwr,Uid:699fc6c2-1efe-46a9-8707-c8d12e1267b3,Namespace:kube-system,Attempt:0,}" May 14 01:33:03.769780 containerd[1483]: time="2025-05-14T01:33:03.769747617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c9fcd687-z2wlk,Uid:9f48ef50-86f3-4b80-bf67-25a2deb3788b,Namespace:calico-apiserver,Attempt:0,}" May 14 01:33:03.839638 systemd[1]: Created slice kubepods-besteffort-pod0c9eb405_0467_45ae_98a3_2e9aaf13c0d6.slice - libcontainer container kubepods-besteffort-pod0c9eb405_0467_45ae_98a3_2e9aaf13c0d6.slice. May 14 01:33:03.848590 containerd[1483]: time="2025-05-14T01:33:03.848397596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nj48f,Uid:0c9eb405-0467-45ae-98a3-2e9aaf13c0d6,Namespace:calico-system,Attempt:0,}" May 14 01:33:03.933842 containerd[1483]: time="2025-05-14T01:33:03.933783813Z" level=error msg="Failed to destroy network for sandbox \"605ea0f26bda8d995e02f9dde55f60acb5300c644163a92c3d671470343cba95\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:33:03.939598 containerd[1483]: time="2025-05-14T01:33:03.939539601Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ft6xv,Uid:6543ad7a-eb37-4f39-80dc-d640e5e21906,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"605ea0f26bda8d995e02f9dde55f60acb5300c644163a92c3d671470343cba95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:33:03.940794 kubelet[2703]: E0514 01:33:03.940178 2703 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"605ea0f26bda8d995e02f9dde55f60acb5300c644163a92c3d671470343cba95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:33:03.940794 kubelet[2703]: E0514 01:33:03.940294 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"605ea0f26bda8d995e02f9dde55f60acb5300c644163a92c3d671470343cba95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-ft6xv" May 14 01:33:03.940794 kubelet[2703]: E0514 01:33:03.940367 2703 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"605ea0f26bda8d995e02f9dde55f60acb5300c644163a92c3d671470343cba95\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-ft6xv" May 14 01:33:03.940794 kubelet[2703]: E0514 01:33:03.940423 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-ft6xv_kube-system(6543ad7a-eb37-4f39-80dc-d640e5e21906)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-ft6xv_kube-system(6543ad7a-eb37-4f39-80dc-d640e5e21906)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"605ea0f26bda8d995e02f9dde55f60acb5300c644163a92c3d671470343cba95\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-ft6xv" podUID="6543ad7a-eb37-4f39-80dc-d640e5e21906" May 14 01:33:03.955244 containerd[1483]: time="2025-05-14T01:33:03.955177542Z" level=error msg="Failed to destroy network for sandbox \"0c7cb2e021a66eb25c0862e76f6ddcf9bbb73d1e534a8613f102088bc690a880\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:33:03.957528 containerd[1483]: time="2025-05-14T01:33:03.957479072Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c8c97975-9bk6f,Uid:8cc13bda-6d3e-4d99-9e7c-99c5a21f3e9d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c7cb2e021a66eb25c0862e76f6ddcf9bbb73d1e534a8613f102088bc690a880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:33:03.958272 kubelet[2703]: E0514 01:33:03.957876 2703 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c7cb2e021a66eb25c0862e76f6ddcf9bbb73d1e534a8613f102088bc690a880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:33:03.958272 kubelet[2703]: E0514 01:33:03.958054 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c7cb2e021a66eb25c0862e76f6ddcf9bbb73d1e534a8613f102088bc690a880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c8c97975-9bk6f" May 14 01:33:03.958272 kubelet[2703]: E0514 01:33:03.958108 2703 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c7cb2e021a66eb25c0862e76f6ddcf9bbb73d1e534a8613f102088bc690a880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c8c97975-9bk6f" May 14 01:33:03.958272 kubelet[2703]: E0514 01:33:03.958201 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-kube-controllers-c8c97975-9bk6f_calico-system(8cc13bda-6d3e-4d99-9e7c-99c5a21f3e9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c8c97975-9bk6f_calico-system(8cc13bda-6d3e-4d99-9e7c-99c5a21f3e9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c7cb2e021a66eb25c0862e76f6ddcf9bbb73d1e534a8613f102088bc690a880\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c8c97975-9bk6f" podUID="8cc13bda-6d3e-4d99-9e7c-99c5a21f3e9d" May 14 01:33:03.984044 containerd[1483]: time="2025-05-14T01:33:03.983885629Z" level=error msg="Failed to destroy network for sandbox \"04c51ed940d2311021ce43300fd82ed1cd77c496d2f4b4f9cea36af0180e5de5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:33:03.989313 containerd[1483]: time="2025-05-14T01:33:03.989265477Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c9fcd687-z2wlk,Uid:9f48ef50-86f3-4b80-bf67-25a2deb3788b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"04c51ed940d2311021ce43300fd82ed1cd77c496d2f4b4f9cea36af0180e5de5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:33:03.990743 containerd[1483]: time="2025-05-14T01:33:03.989994370Z" level=error msg="Failed to destroy network for sandbox \"725c2f0870d07daf1b70224ade550e903f53ae4a3d8e3e3224d0b52e576d0966\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:33:03.990795 kubelet[2703]: E0514 01:33:03.989501 2703 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04c51ed940d2311021ce43300fd82ed1cd77c496d2f4b4f9cea36af0180e5de5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:33:03.990795 kubelet[2703]: E0514 01:33:03.989563 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04c51ed940d2311021ce43300fd82ed1cd77c496d2f4b4f9cea36af0180e5de5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67c9fcd687-z2wlk" May 14 01:33:03.990795 kubelet[2703]: E0514 01:33:03.989591 2703 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04c51ed940d2311021ce43300fd82ed1cd77c496d2f4b4f9cea36af0180e5de5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67c9fcd687-z2wlk" May 14 
01:33:03.990795 kubelet[2703]: E0514 01:33:03.989646 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67c9fcd687-z2wlk_calico-apiserver(9f48ef50-86f3-4b80-bf67-25a2deb3788b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67c9fcd687-z2wlk_calico-apiserver(9f48ef50-86f3-4b80-bf67-25a2deb3788b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04c51ed940d2311021ce43300fd82ed1cd77c496d2f4b4f9cea36af0180e5de5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67c9fcd687-z2wlk" podUID="9f48ef50-86f3-4b80-bf67-25a2deb3788b" May 14 01:33:04.003906 containerd[1483]: time="2025-05-14T01:33:04.003829616Z" level=error msg="Failed to destroy network for sandbox \"b096c3cd8091e440958d5a0364e1cadd2ff373bebf107cc03f73a2869cf57f8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:33:04.009949 containerd[1483]: time="2025-05-14T01:33:04.009795428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6clwr,Uid:699fc6c2-1efe-46a9-8707-c8d12e1267b3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"725c2f0870d07daf1b70224ade550e903f53ae4a3d8e3e3224d0b52e576d0966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:33:04.010728 kubelet[2703]: E0514 01:33:04.010216 2703 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"725c2f0870d07daf1b70224ade550e903f53ae4a3d8e3e3224d0b52e576d0966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:33:04.010728 kubelet[2703]: E0514 01:33:04.010270 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"725c2f0870d07daf1b70224ade550e903f53ae4a3d8e3e3224d0b52e576d0966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6clwr" May 14 01:33:04.010728 kubelet[2703]: E0514 01:33:04.010399 2703 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"725c2f0870d07daf1b70224ade550e903f53ae4a3d8e3e3224d0b52e576d0966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6clwr" May 14 01:33:04.010728 kubelet[2703]: E0514 01:33:04.010564 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-6clwr_kube-system(699fc6c2-1efe-46a9-8707-c8d12e1267b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-6f6b679f8f-6clwr_kube-system(699fc6c2-1efe-46a9-8707-c8d12e1267b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"725c2f0870d07daf1b70224ade550e903f53ae4a3d8e3e3224d0b52e576d0966\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-6clwr" podUID="699fc6c2-1efe-46a9-8707-c8d12e1267b3" May 14 01:33:04.015054 containerd[1483]: time="2025-05-14T01:33:04.014938561Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nj48f,Uid:0c9eb405-0467-45ae-98a3-2e9aaf13c0d6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b096c3cd8091e440958d5a0364e1cadd2ff373bebf107cc03f73a2869cf57f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:33:04.015581 kubelet[2703]: E0514 01:33:04.015232 2703 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b096c3cd8091e440958d5a0364e1cadd2ff373bebf107cc03f73a2869cf57f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:33:04.015581 kubelet[2703]: E0514 01:33:04.015391 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b096c3cd8091e440958d5a0364e1cadd2ff373bebf107cc03f73a2869cf57f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nj48f" May 14 01:33:04.015581 kubelet[2703]: E0514 01:33:04.015417 2703 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b096c3cd8091e440958d5a0364e1cadd2ff373bebf107cc03f73a2869cf57f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nj48f" May 14 01:33:04.015581 kubelet[2703]: E0514 01:33:04.015490 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nj48f_calico-system(0c9eb405-0467-45ae-98a3-2e9aaf13c0d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nj48f_calico-system(0c9eb405-0467-45ae-98a3-2e9aaf13c0d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b096c3cd8091e440958d5a0364e1cadd2ff373bebf107cc03f73a2869cf57f8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nj48f" podUID="0c9eb405-0467-45ae-98a3-2e9aaf13c0d6" May 14 01:33:04.015932 containerd[1483]: time="2025-05-14T01:33:04.015384812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 14 01:33:04.054752 containerd[1483]: time="2025-05-14T01:33:04.054711048Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-67c9fcd687-nm95d,Uid:75f313c4-ceb1-461e-90dc-dfe96fb4abb2,Namespace:calico-apiserver,Attempt:0,}" May 14 01:33:04.111549 containerd[1483]: time="2025-05-14T01:33:04.111493084Z" level=error msg="Failed to destroy network for sandbox \"df953d0e5dd1f351e15f0f1a6506e012fe9bce071c9fbd2e6e72b127b43eb3aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:33:04.113128 containerd[1483]: time="2025-05-14T01:33:04.113078153Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c9fcd687-nm95d,Uid:75f313c4-ceb1-461e-90dc-dfe96fb4abb2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"df953d0e5dd1f351e15f0f1a6506e012fe9bce071c9fbd2e6e72b127b43eb3aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:33:04.113688 kubelet[2703]: E0514 01:33:04.113457 2703 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df953d0e5dd1f351e15f0f1a6506e012fe9bce071c9fbd2e6e72b127b43eb3aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:33:04.113688 kubelet[2703]: E0514 01:33:04.113551 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df953d0e5dd1f351e15f0f1a6506e012fe9bce071c9fbd2e6e72b127b43eb3aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67c9fcd687-nm95d" May 14 01:33:04.113688 kubelet[2703]: E0514 01:33:04.113574 2703 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df953d0e5dd1f351e15f0f1a6506e012fe9bce071c9fbd2e6e72b127b43eb3aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67c9fcd687-nm95d" May 14 01:33:04.113688 kubelet[2703]: E0514 01:33:04.113640 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67c9fcd687-nm95d_calico-apiserver(75f313c4-ceb1-461e-90dc-dfe96fb4abb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67c9fcd687-nm95d_calico-apiserver(75f313c4-ceb1-461e-90dc-dfe96fb4abb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df953d0e5dd1f351e15f0f1a6506e012fe9bce071c9fbd2e6e72b127b43eb3aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67c9fcd687-nm95d" podUID="75f313c4-ceb1-461e-90dc-dfe96fb4abb2" May 14 01:33:04.738187 systemd[1]: run-netns-cni\x2de53907e4\x2d1ce6\x2de997\x2dd97a\x2d8fe1afaf8d53.mount: Deactivated successfully. 
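
Annotation: every sandbox attempt above dies on the same gate. The Calico CNI plugin resolves its node identity from /var/lib/calico/nodename, a file the calico/node container writes once it is running, and that container's image is still being pulled (the PullImage line above). So each CNI ADD fails with the stat error, the compensating DEL hits the same stat, and the kubelet requeues the pods with backoff; the run-netns mounts being cleaned up are the leftovers of the failed attempts. A minimal sketch of the check, assuming only the file path quoted in the error message; it is not Calico's actual implementation:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const nodenameFile = "/var/lib/calico/nodename"

    // nodename mirrors the precondition the CNI plugin trips over above.
    func nodename() (string, error) {
        if _, err := os.Stat(nodenameFile); err != nil {
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        if name, err := nodename(); err != nil {
            fmt.Println("CNI setup would fail:", err)
        } else {
            fmt.Println("calico node:", name)
        }
    }
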
May 14 01:33:04.738428 systemd[1]: run-netns-cni\x2db1b86674\x2d32c3\x2dc162\x2d5f02\x2d700c2e4492f4.mount: Deactivated successfully. May 14 01:33:09.862269 systemd-timesyncd[1391]: Contacted time server 104.234.61.117:123 (0.flatcar.pool.ntp.org). May 14 01:33:14.832480 containerd[1483]: time="2025-05-14T01:33:14.832255776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c9fcd687-nm95d,Uid:75f313c4-ceb1-461e-90dc-dfe96fb4abb2,Namespace:calico-apiserver,Attempt:0,}" May 14 01:33:14.837329 containerd[1483]: time="2025-05-14T01:33:14.836170056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c9fcd687-z2wlk,Uid:9f48ef50-86f3-4b80-bf67-25a2deb3788b,Namespace:calico-apiserver,Attempt:0,}" May 14 01:33:14.986462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount384543547.mount: Deactivated successfully. May 14 01:33:15.104564 containerd[1483]: time="2025-05-14T01:33:15.104237270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:33:15.106361 containerd[1483]: time="2025-05-14T01:33:15.105976556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 14 01:33:15.130055 containerd[1483]: time="2025-05-14T01:33:15.126204566Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:33:15.170023 containerd[1483]: time="2025-05-14T01:33:15.169956810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:33:15.174427 containerd[1483]: time="2025-05-14T01:33:15.174376946Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 11.15894782s" May 14 01:33:15.174567 containerd[1483]: time="2025-05-14T01:33:15.174439419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 14 01:33:15.217557 containerd[1483]: time="2025-05-14T01:33:15.217493901Z" level=info msg="CreateContainer within sandbox \"b8bc851be4e92a52acbfda6a4ad891593dfd8e3cd1088a50409b6b17c1dcacfa\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 14 01:33:15.252101 containerd[1483]: time="2025-05-14T01:33:15.251825734Z" level=info msg="Container 78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46: CDI devices from CRI Config.CDIDevices: []" May 14 01:33:15.275858 containerd[1483]: time="2025-05-14T01:33:15.275803698Z" level=info msg="CreateContainer within sandbox \"b8bc851be4e92a52acbfda6a4ad891593dfd8e3cd1088a50409b6b17c1dcacfa\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\"" May 14 01:33:15.278563 containerd[1483]: time="2025-05-14T01:33:15.278438544Z" level=info msg="StartContainer for \"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\"" May 14 01:33:15.283126 containerd[1483]: 
time="2025-05-14T01:33:15.282268572Z" level=error msg="Failed to destroy network for sandbox \"3f2d143fc18d0597ea265fcd2937322da157d090af37568e8646771e346f975b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:33:15.284636 containerd[1483]: time="2025-05-14T01:33:15.284574874Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c9fcd687-nm95d,Uid:75f313c4-ceb1-461e-90dc-dfe96fb4abb2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f2d143fc18d0597ea265fcd2937322da157d090af37568e8646771e346f975b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:33:15.285015 containerd[1483]: time="2025-05-14T01:33:15.284597986Z" level=info msg="connecting to shim 78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46" address="unix:///run/containerd/s/04150baaf3b670a3752b057412bdbfb8ddf1b121c7e5977181c06ca34b6bdb6f" protocol=ttrpc version=3 May 14 01:33:15.286647 kubelet[2703]: E0514 01:33:15.286578 2703 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f2d143fc18d0597ea265fcd2937322da157d090af37568e8646771e346f975b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:33:15.287730 kubelet[2703]: E0514 01:33:15.286701 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f2d143fc18d0597ea265fcd2937322da157d090af37568e8646771e346f975b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67c9fcd687-nm95d" May 14 01:33:15.287730 kubelet[2703]: E0514 01:33:15.286746 2703 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f2d143fc18d0597ea265fcd2937322da157d090af37568e8646771e346f975b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67c9fcd687-nm95d" May 14 01:33:15.287730 kubelet[2703]: E0514 01:33:15.286833 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67c9fcd687-nm95d_calico-apiserver(75f313c4-ceb1-461e-90dc-dfe96fb4abb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67c9fcd687-nm95d_calico-apiserver(75f313c4-ceb1-461e-90dc-dfe96fb4abb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f2d143fc18d0597ea265fcd2937322da157d090af37568e8646771e346f975b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67c9fcd687-nm95d" podUID="75f313c4-ceb1-461e-90dc-dfe96fb4abb2" May 14 
01:33:15.301207 containerd[1483]: time="2025-05-14T01:33:15.301121576Z" level=error msg="Failed to destroy network for sandbox \"ba93507d2738f5a32d60c03496d8a66477e66fc8676557c79179742e792a06c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:33:15.302760 containerd[1483]: time="2025-05-14T01:33:15.302679925Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c9fcd687-z2wlk,Uid:9f48ef50-86f3-4b80-bf67-25a2deb3788b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba93507d2738f5a32d60c03496d8a66477e66fc8676557c79179742e792a06c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:33:15.303046 kubelet[2703]: E0514 01:33:15.302974 2703 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba93507d2738f5a32d60c03496d8a66477e66fc8676557c79179742e792a06c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:33:15.303144 kubelet[2703]: E0514 01:33:15.303043 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba93507d2738f5a32d60c03496d8a66477e66fc8676557c79179742e792a06c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67c9fcd687-z2wlk" May 14 01:33:15.303144 kubelet[2703]: E0514 01:33:15.303117 2703 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba93507d2738f5a32d60c03496d8a66477e66fc8676557c79179742e792a06c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67c9fcd687-z2wlk" May 14 01:33:15.303227 kubelet[2703]: E0514 01:33:15.303192 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67c9fcd687-z2wlk_calico-apiserver(9f48ef50-86f3-4b80-bf67-25a2deb3788b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67c9fcd687-z2wlk_calico-apiserver(9f48ef50-86f3-4b80-bf67-25a2deb3788b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba93507d2738f5a32d60c03496d8a66477e66fc8676557c79179742e792a06c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67c9fcd687-z2wlk" podUID="9f48ef50-86f3-4b80-bf67-25a2deb3788b" May 14 01:33:15.331525 systemd[1]: Started cri-containerd-78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46.scope - libcontainer container 78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46. 
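
Annotation: calico-node is now starting (the ~137 MiB node image took about 11.2 s to pull, roughly 12 MiB/s), and once it writes the nodename file, sandbox setup begins to succeed. In the IPAM trace that follows, the plugin takes the host-wide IPAM lock, confirms this host's affinity to the 192.168.54.0/26 block, and claims the first workload address, which the pod receives as a /32. A sketch of the block arithmetic, an illustration of the /26-block model rather than Calico's code:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Calico carves the pool into /26 blocks with per-host affinity.
        block := netip.MustParsePrefix("192.168.54.0/26")
        fmt.Printf("block %s holds %d addresses\n", block, 1<<(32-block.Bits()))

        // Workload IPs are handed out of the block as /32s: .1, .2, ...
        addr := block.Addr().Next() // skip the .0 block base
        for i := 0; i < 2; i++ {
            fmt.Printf("claim %s/32\n", addr)
            addr = addr.Next()
        }
    }
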
May 14 01:33:15.394775 containerd[1483]: time="2025-05-14T01:33:15.394731789Z" level=info msg="StartContainer for \"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" returns successfully" May 14 01:33:15.473671 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 14 01:33:15.473855 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 14 01:33:15.830586 containerd[1483]: time="2025-05-14T01:33:15.830209089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c8c97975-9bk6f,Uid:8cc13bda-6d3e-4d99-9e7c-99c5a21f3e9d,Namespace:calico-system,Attempt:0,}" May 14 01:33:15.980503 systemd-networkd[1387]: cali796fcecd394: Link UP May 14 01:33:15.980782 systemd-networkd[1387]: cali796fcecd394: Gained carrier May 14 01:33:15.989380 systemd[1]: run-netns-cni\x2d9a4d2be4\x2d2bad\x2d2875\x2dccf3\x2d854a1553e868.mount: Deactivated successfully. May 14 01:33:15.989521 systemd[1]: run-netns-cni\x2d4d6b4325\x2d4ace\x2da4e9\x2d70bb\x2d41583fb7b6da.mount: Deactivated successfully. May 14 01:33:16.011026 containerd[1483]: 2025-05-14 01:33:15.861 [INFO][3789] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 01:33:16.011026 containerd[1483]: 2025-05-14 01:33:15.877 [INFO][3789] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--n--af44d751a9.novalocal-k8s-calico--kube--controllers--c8c97975--9bk6f-eth0 calico-kube-controllers-c8c97975- calico-system 8cc13bda-6d3e-4d99-9e7c-99c5a21f3e9d 718 0 2025-05-14 01:32:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c8c97975 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4284-0-0-n-af44d751a9.novalocal calico-kube-controllers-c8c97975-9bk6f eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali796fcecd394 [] []}} ContainerID="b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d" Namespace="calico-system" Pod="calico-kube-controllers-c8c97975-9bk6f" WorkloadEndpoint="ci--4284--0--0--n--af44d751a9.novalocal-k8s-calico--kube--controllers--c8c97975--9bk6f-" May 14 01:33:16.011026 containerd[1483]: 2025-05-14 01:33:15.877 [INFO][3789] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d" Namespace="calico-system" Pod="calico-kube-controllers-c8c97975-9bk6f" WorkloadEndpoint="ci--4284--0--0--n--af44d751a9.novalocal-k8s-calico--kube--controllers--c8c97975--9bk6f-eth0" May 14 01:33:16.011026 containerd[1483]: 2025-05-14 01:33:15.913 [INFO][3800] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d" HandleID="k8s-pod-network.b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d" Workload="ci--4284--0--0--n--af44d751a9.novalocal-k8s-calico--kube--controllers--c8c97975--9bk6f-eth0" May 14 01:33:16.011698 containerd[1483]: 2025-05-14 01:33:15.925 [INFO][3800] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d" HandleID="k8s-pod-network.b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d"
Workload="ci--4284--0--0--n--af44d751a9.novalocal-k8s-calico--kube--controllers--c8c97975--9bk6f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000265220), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284-0-0-n-af44d751a9.novalocal", "pod":"calico-kube-controllers-c8c97975-9bk6f", "timestamp":"2025-05-14 01:33:15.913908374 +0000 UTC"}, Hostname:"ci-4284-0-0-n-af44d751a9.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 01:33:16.011698 containerd[1483]: 2025-05-14 01:33:15.925 [INFO][3800] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 01:33:16.011698 containerd[1483]: 2025-05-14 01:33:15.925 [INFO][3800] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 01:33:16.011698 containerd[1483]: 2025-05-14 01:33:15.925 [INFO][3800] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-n-af44d751a9.novalocal' May 14 01:33:16.011698 containerd[1483]: 2025-05-14 01:33:15.928 [INFO][3800] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d" host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:16.011698 containerd[1483]: 2025-05-14 01:33:15.933 [INFO][3800] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:16.011698 containerd[1483]: 2025-05-14 01:33:15.938 [INFO][3800] ipam/ipam.go 489: Trying affinity for 192.168.54.0/26 host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:16.011698 containerd[1483]: 2025-05-14 01:33:15.941 [INFO][3800] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.0/26 host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:16.011698 containerd[1483]: 2025-05-14 01:33:15.944 [INFO][3800] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:16.012012 containerd[1483]: 2025-05-14 01:33:15.944 [INFO][3800] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d" host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:16.012012 containerd[1483]: 2025-05-14 01:33:15.946 [INFO][3800] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d May 14 01:33:16.012012 containerd[1483]: 2025-05-14 01:33:15.952 [INFO][3800] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d" host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:16.012012 containerd[1483]: 2025-05-14 01:33:15.963 [INFO][3800] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.1/26] block=192.168.54.0/26 handle="k8s-pod-network.b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d" host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:16.012012 containerd[1483]: 2025-05-14 01:33:15.963 [INFO][3800] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.1/26] handle="k8s-pod-network.b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d" host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:16.012012 containerd[1483]: 2025-05-14 01:33:15.963 [INFO][3800] ipam/ipam_plugin.go 374: 
Released host-wide IPAM lock. May 14 01:33:16.012012 containerd[1483]: 2025-05-14 01:33:15.964 [INFO][3800] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.1/26] IPv6=[] ContainerID="b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d" HandleID="k8s-pod-network.b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d" Workload="ci--4284--0--0--n--af44d751a9.novalocal-k8s-calico--kube--controllers--c8c97975--9bk6f-eth0" May 14 01:33:16.012354 containerd[1483]: 2025-05-14 01:33:15.969 [INFO][3789] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d" Namespace="calico-system" Pod="calico-kube-controllers-c8c97975-9bk6f" WorkloadEndpoint="ci--4284--0--0--n--af44d751a9.novalocal-k8s-calico--kube--controllers--c8c97975--9bk6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--af44d751a9.novalocal-k8s-calico--kube--controllers--c8c97975--9bk6f-eth0", GenerateName:"calico-kube-controllers-c8c97975-", Namespace:"calico-system", SelfLink:"", UID:"8cc13bda-6d3e-4d99-9e7c-99c5a21f3e9d", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 32, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c8c97975", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-af44d751a9.novalocal", ContainerID:"", Pod:"calico-kube-controllers-c8c97975-9bk6f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali796fcecd394", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:33:16.012503 containerd[1483]: 2025-05-14 01:33:15.969 [INFO][3789] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.1/32] ContainerID="b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d" Namespace="calico-system" Pod="calico-kube-controllers-c8c97975-9bk6f" WorkloadEndpoint="ci--4284--0--0--n--af44d751a9.novalocal-k8s-calico--kube--controllers--c8c97975--9bk6f-eth0" May 14 01:33:16.012503 containerd[1483]: 2025-05-14 01:33:15.969 [INFO][3789] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali796fcecd394 ContainerID="b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d" Namespace="calico-system" Pod="calico-kube-controllers-c8c97975-9bk6f" WorkloadEndpoint="ci--4284--0--0--n--af44d751a9.novalocal-k8s-calico--kube--controllers--c8c97975--9bk6f-eth0" May 14 01:33:16.012503 containerd[1483]: 2025-05-14 01:33:15.980 [INFO][3789] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d" Namespace="calico-system" Pod="calico-kube-controllers-c8c97975-9bk6f"
WorkloadEndpoint="ci--4284--0--0--n--af44d751a9.novalocal-k8s-calico--kube--controllers--c8c97975--9bk6f-eth0" May 14 01:33:16.012656 containerd[1483]: 2025-05-14 01:33:15.981 [INFO][3789] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d" Namespace="calico-system" Pod="calico-kube-controllers-c8c97975-9bk6f" WorkloadEndpoint="ci--4284--0--0--n--af44d751a9.novalocal-k8s-calico--kube--controllers--c8c97975--9bk6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--af44d751a9.novalocal-k8s-calico--kube--controllers--c8c97975--9bk6f-eth0", GenerateName:"calico-kube-controllers-c8c97975-", Namespace:"calico-system", SelfLink:"", UID:"8cc13bda-6d3e-4d99-9e7c-99c5a21f3e9d", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 32, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c8c97975", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-af44d751a9.novalocal", ContainerID:"b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d", Pod:"calico-kube-controllers-c8c97975-9bk6f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali796fcecd394", MAC:"ca:78:a0:99:68:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:33:16.012731 containerd[1483]: 2025-05-14 01:33:16.006 [INFO][3789] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d" Namespace="calico-system" Pod="calico-kube-controllers-c8c97975-9bk6f" WorkloadEndpoint="ci--4284--0--0--n--af44d751a9.novalocal-k8s-calico--kube--controllers--c8c97975--9bk6f-eth0" May 14 01:33:16.047248 containerd[1483]: time="2025-05-14T01:33:16.047171601Z" level=info msg="connecting to shim b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d" address="unix:///run/containerd/s/c9acfebfcfea06044a899a8c90ca9547952386b59abf00caceb37fb842e20942" namespace=k8s.io protocol=ttrpc version=3 May 14 01:33:16.102117 kubelet[2703]: I0514 01:33:16.100877 2703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2kmgw" podStartSLOduration=1.605589363 podStartE2EDuration="28.100836038s" podCreationTimestamp="2025-05-14 01:32:48 +0000 UTC" firstStartedPulling="2025-05-14 01:32:48.685519038 +0000 UTC m=+13.983704841" lastFinishedPulling="2025-05-14 01:33:15.180765713 +0000 UTC m=+40.478951516" observedRunningTime="2025-05-14 01:33:16.100708357 +0000 UTC m=+41.398894160" watchObservedRunningTime="2025-05-14 01:33:16.100836038 +0000 UTC m=+41.399021831" May 14 01:33:16.101148 systemd[1]: Started 
cri-containerd-b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d.scope - libcontainer container b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d. May 14 01:33:16.187127 containerd[1483]: time="2025-05-14T01:33:16.186978532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c8c97975-9bk6f,Uid:8cc13bda-6d3e-4d99-9e7c-99c5a21f3e9d,Namespace:calico-system,Attempt:0,} returns sandbox id \"b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d\"" May 14 01:33:16.192499 containerd[1483]: time="2025-05-14T01:33:16.192103208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 14 01:33:16.213425 containerd[1483]: time="2025-05-14T01:33:16.213383711Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" id:\"c341c2432e1f135d46503a3e0678cab13ef68642d6476ebde41849ba4ccce9a8\" pid:3869 exit_status:1 exited_at:{seconds:1747186396 nanos:211795830}" May 14 01:33:16.834112 containerd[1483]: time="2025-05-14T01:33:16.833414112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nj48f,Uid:0c9eb405-0467-45ae-98a3-2e9aaf13c0d6,Namespace:calico-system,Attempt:0,}" May 14 01:33:16.835125 containerd[1483]: time="2025-05-14T01:33:16.834675470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6clwr,Uid:699fc6c2-1efe-46a9-8707-c8d12e1267b3,Namespace:kube-system,Attempt:0,}" May 14 01:33:16.999258 systemd-networkd[1387]: cali796fcecd394: Gained IPv6LL May 14 01:33:17.258010 systemd-networkd[1387]: cali6b9d7fa5a2b: Link UP May 14 01:33:17.258498 systemd-networkd[1387]: cali6b9d7fa5a2b: Gained carrier May 14 01:33:17.277303 containerd[1483]: 2025-05-14 01:33:16.971 [INFO][3901] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 01:33:17.277303 containerd[1483]: 2025-05-14 01:33:17.005 [INFO][3901] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--n--af44d751a9.novalocal-k8s-csi--node--driver--nj48f-eth0 csi-node-driver- calico-system 0c9eb405-0467-45ae-98a3-2e9aaf13c0d6 620 0 2025-05-14 01:32:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4284-0-0-n-af44d751a9.novalocal csi-node-driver-nj48f eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6b9d7fa5a2b [] []}} ContainerID="0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928" Namespace="calico-system" Pod="csi-node-driver-nj48f" WorkloadEndpoint="ci--4284--0--0--n--af44d751a9.novalocal-k8s-csi--node--driver--nj48f-" May 14 01:33:17.277303 containerd[1483]: 2025-05-14 01:33:17.005 [INFO][3901] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928" Namespace="calico-system" Pod="csi-node-driver-nj48f" WorkloadEndpoint="ci--4284--0--0--n--af44d751a9.novalocal-k8s-csi--node--driver--nj48f-eth0" May 14 01:33:17.277303 containerd[1483]: 2025-05-14 01:33:17.145 [INFO][3973] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928" 
HandleID="k8s-pod-network.0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928" Workload="ci--4284--0--0--n--af44d751a9.novalocal-k8s-csi--node--driver--nj48f-eth0" May 14 01:33:17.281137 containerd[1483]: 2025-05-14 01:33:17.181 [INFO][3973] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928" HandleID="k8s-pod-network.0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928" Workload="ci--4284--0--0--n--af44d751a9.novalocal-k8s-csi--node--driver--nj48f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027aee0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284-0-0-n-af44d751a9.novalocal", "pod":"csi-node-driver-nj48f", "timestamp":"2025-05-14 01:33:17.145423824 +0000 UTC"}, Hostname:"ci-4284-0-0-n-af44d751a9.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 01:33:17.281137 containerd[1483]: 2025-05-14 01:33:17.181 [INFO][3973] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 01:33:17.281137 containerd[1483]: 2025-05-14 01:33:17.181 [INFO][3973] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 01:33:17.281137 containerd[1483]: 2025-05-14 01:33:17.181 [INFO][3973] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-n-af44d751a9.novalocal' May 14 01:33:17.281137 containerd[1483]: 2025-05-14 01:33:17.186 [INFO][3973] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928" host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:17.281137 containerd[1483]: 2025-05-14 01:33:17.203 [INFO][3973] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:17.281137 containerd[1483]: 2025-05-14 01:33:17.216 [INFO][3973] ipam/ipam.go 489: Trying affinity for 192.168.54.0/26 host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:17.281137 containerd[1483]: 2025-05-14 01:33:17.222 [INFO][3973] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.0/26 host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:17.281137 containerd[1483]: 2025-05-14 01:33:17.227 [INFO][3973] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:17.281440 containerd[1483]: 2025-05-14 01:33:17.227 [INFO][3973] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928" host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:17.281440 containerd[1483]: 2025-05-14 01:33:17.233 [INFO][3973] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928 May 14 01:33:17.281440 containerd[1483]: 2025-05-14 01:33:17.239 [INFO][3973] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928" host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:17.281440 containerd[1483]: 2025-05-14 01:33:17.250 [INFO][3973] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.2/26] block=192.168.54.0/26 
handle="k8s-pod-network.0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928" host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:17.281440 containerd[1483]: 2025-05-14 01:33:17.250 [INFO][3973] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.2/26] handle="k8s-pod-network.0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928" host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:17.281440 containerd[1483]: 2025-05-14 01:33:17.250 [INFO][3973] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 01:33:17.281440 containerd[1483]: 2025-05-14 01:33:17.250 [INFO][3973] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.2/26] IPv6=[] ContainerID="0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928" HandleID="k8s-pod-network.0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928" Workload="ci--4284--0--0--n--af44d751a9.novalocal-k8s-csi--node--driver--nj48f-eth0" May 14 01:33:17.281626 containerd[1483]: 2025-05-14 01:33:17.255 [INFO][3901] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928" Namespace="calico-system" Pod="csi-node-driver-nj48f" WorkloadEndpoint="ci--4284--0--0--n--af44d751a9.novalocal-k8s-csi--node--driver--nj48f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--af44d751a9.novalocal-k8s-csi--node--driver--nj48f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0c9eb405-0467-45ae-98a3-2e9aaf13c0d6", ResourceVersion:"620", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 32, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-af44d751a9.novalocal", ContainerID:"", Pod:"csi-node-driver-nj48f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6b9d7fa5a2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:33:17.281696 containerd[1483]: 2025-05-14 01:33:17.255 [INFO][3901] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.2/32] ContainerID="0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928" Namespace="calico-system" Pod="csi-node-driver-nj48f" WorkloadEndpoint="ci--4284--0--0--n--af44d751a9.novalocal-k8s-csi--node--driver--nj48f-eth0" May 14 01:33:17.281696 containerd[1483]: 2025-05-14 01:33:17.255 [INFO][3901] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b9d7fa5a2b ContainerID="0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928" Namespace="calico-system" Pod="csi-node-driver-nj48f" 
WorkloadEndpoint="ci--4284--0--0--n--af44d751a9.novalocal-k8s-csi--node--driver--nj48f-eth0" May 14 01:33:17.281696 containerd[1483]: 2025-05-14 01:33:17.258 [INFO][3901] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928" Namespace="calico-system" Pod="csi-node-driver-nj48f" WorkloadEndpoint="ci--4284--0--0--n--af44d751a9.novalocal-k8s-csi--node--driver--nj48f-eth0" May 14 01:33:17.281767 containerd[1483]: 2025-05-14 01:33:17.258 [INFO][3901] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928" Namespace="calico-system" Pod="csi-node-driver-nj48f" WorkloadEndpoint="ci--4284--0--0--n--af44d751a9.novalocal-k8s-csi--node--driver--nj48f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--af44d751a9.novalocal-k8s-csi--node--driver--nj48f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0c9eb405-0467-45ae-98a3-2e9aaf13c0d6", ResourceVersion:"620", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 32, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-af44d751a9.novalocal", ContainerID:"0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928", Pod:"csi-node-driver-nj48f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6b9d7fa5a2b", MAC:"ee:d2:05:c5:88:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:33:17.281839 containerd[1483]: 2025-05-14 01:33:17.275 [INFO][3901] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928" Namespace="calico-system" Pod="csi-node-driver-nj48f" WorkloadEndpoint="ci--4284--0--0--n--af44d751a9.novalocal-k8s-csi--node--driver--nj48f-eth0" May 14 01:33:17.354182 containerd[1483]: time="2025-05-14T01:33:17.353514750Z" level=info msg="connecting to shim 0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928" address="unix:///run/containerd/s/ff2a0866dac88c249f6f9adad3b01679aa79d82179648c46783b53abe4fc7797" namespace=k8s.io protocol=ttrpc version=3 May 14 01:33:17.398347 systemd-networkd[1387]: cali08f671f023b: Link UP May 14 01:33:17.399445 systemd-networkd[1387]: cali08f671f023b: Gained carrier May 14 01:33:17.431038 containerd[1483]: 2025-05-14 01:33:16.995 [INFO][3910] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 01:33:17.431038 containerd[1483]: 2025-05-14 01:33:17.019 [INFO][3910] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--n--af44d751a9.novalocal-k8s-coredns--6f6b679f8f--6clwr-eth0 coredns-6f6b679f8f- kube-system 699fc6c2-1efe-46a9-8707-c8d12e1267b3 721 0 2025-05-14 01:32:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284-0-0-n-af44d751a9.novalocal coredns-6f6b679f8f-6clwr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali08f671f023b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888" Namespace="kube-system" Pod="coredns-6f6b679f8f-6clwr" WorkloadEndpoint="ci--4284--0--0--n--af44d751a9.novalocal-k8s-coredns--6f6b679f8f--6clwr-" May 14 01:33:17.431038 containerd[1483]: 2025-05-14 01:33:17.019 [INFO][3910] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888" Namespace="kube-system" Pod="coredns-6f6b679f8f-6clwr" WorkloadEndpoint="ci--4284--0--0--n--af44d751a9.novalocal-k8s-coredns--6f6b679f8f--6clwr-eth0" May 14 01:33:17.431038 containerd[1483]: 2025-05-14 01:33:17.206 [INFO][3981] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888" HandleID="k8s-pod-network.3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888" Workload="ci--4284--0--0--n--af44d751a9.novalocal-k8s-coredns--6f6b679f8f--6clwr-eth0" May 14 01:33:17.431345 containerd[1483]: 2025-05-14 01:33:17.229 [INFO][3981] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888" HandleID="k8s-pod-network.3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888" Workload="ci--4284--0--0--n--af44d751a9.novalocal-k8s-coredns--6f6b679f8f--6clwr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002658e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284-0-0-n-af44d751a9.novalocal", "pod":"coredns-6f6b679f8f-6clwr", "timestamp":"2025-05-14 01:33:17.206169597 +0000 UTC"}, Hostname:"ci-4284-0-0-n-af44d751a9.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 01:33:17.431345 containerd[1483]: 2025-05-14 01:33:17.231 [INFO][3981] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 01:33:17.431345 containerd[1483]: 2025-05-14 01:33:17.251 [INFO][3981] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 01:33:17.431345 containerd[1483]: 2025-05-14 01:33:17.252 [INFO][3981] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-n-af44d751a9.novalocal' May 14 01:33:17.431345 containerd[1483]: 2025-05-14 01:33:17.292 [INFO][3981] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888" host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:17.431345 containerd[1483]: 2025-05-14 01:33:17.306 [INFO][3981] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:17.431345 containerd[1483]: 2025-05-14 01:33:17.315 [INFO][3981] ipam/ipam.go 489: Trying affinity for 192.168.54.0/26 host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:17.431345 containerd[1483]: 2025-05-14 01:33:17.319 [INFO][3981] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.0/26 host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:17.431345 containerd[1483]: 2025-05-14 01:33:17.323 [INFO][3981] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:17.431629 containerd[1483]: 2025-05-14 01:33:17.323 [INFO][3981] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888" host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:17.431629 containerd[1483]: 2025-05-14 01:33:17.327 [INFO][3981] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888 May 14 01:33:17.431629 containerd[1483]: 2025-05-14 01:33:17.338 [INFO][3981] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888" host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:17.431629 containerd[1483]: 2025-05-14 01:33:17.368 [INFO][3981] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.3/26] block=192.168.54.0/26 handle="k8s-pod-network.3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888" host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:17.431629 containerd[1483]: 2025-05-14 01:33:17.370 [INFO][3981] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.3/26] handle="k8s-pod-network.3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888" host="ci-4284-0-0-n-af44d751a9.novalocal" May 14 01:33:17.431629 containerd[1483]: 2025-05-14 01:33:17.370 [INFO][3981] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
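The ipam/ipam.go sequence above (repeated once per pod: lines 353/368, 107, 660/372, 489, 155, 232, 1180, 1685, 1203, 1216, 374) is Calico's per-host assignment loop: take the host-wide IPAM lock, confirm the host's affinity to the 192.168.54.0/26 block, claim the next free address, write the block back, and release the lock. Below is a minimal, self-contained Go sketch of that flow; the types and helper names are an illustrative reconstruction, not Calico's actual code.

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block is a stand-in for a Calico IPAM allocation block with host affinity.
type block struct {
	cidr      netip.Prefix        // e.g. 192.168.54.0/26
	host      string              // host holding the affinity
	allocated map[netip.Addr]bool // addresses already claimed
}

var hostWideIPAMLock sync.Mutex // serializes assignments on this host, as in the log

func autoAssign(b *block, host string) (netip.Addr, error) {
	hostWideIPAMLock.Lock()         // "Acquired host-wide IPAM lock."
	defer hostWideIPAMLock.Unlock() // "Released host-wide IPAM lock."

	if b.host != host { // "Trying affinity for 192.168.54.0/26"
		return netip.Addr{}, fmt.Errorf("no affinity for host %s", host)
	}
	// "Attempting to assign 1 addresses from block": walk the block and
	// claim the first free address.
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if !b.allocated[a] {
			b.allocated[a] = true // "Successfully claimed IPs"
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.54.0/26"),
		host: "ci-4284-0-0-n-af44d751a9.novalocal",
		// Network address treated as reserved, so the first claim yields .1,
		// matching the first assignment in the log.
		allocated: map[netip.Addr]bool{netip.MustParseAddr("192.168.54.0"): true},
	}
	ip, err := autoAssign(b, b.host)
	fmt.Println(ip, err) // 192.168.54.1 <nil>
}

Successive calls yield .2 and .3, which is exactly the order the log shows for the kube-controllers, csi-node-driver, and coredns pods.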
May 14 01:33:17.431629 containerd[1483]: 2025-05-14 01:33:17.370 [INFO][3981] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.3/26] IPv6=[] ContainerID="3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888" HandleID="k8s-pod-network.3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888" Workload="ci--4284--0--0--n--af44d751a9.novalocal-k8s-coredns--6f6b679f8f--6clwr-eth0" May 14 01:33:17.431870 containerd[1483]: 2025-05-14 01:33:17.376 [INFO][3910] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888" Namespace="kube-system" Pod="coredns-6f6b679f8f-6clwr" WorkloadEndpoint="ci--4284--0--0--n--af44d751a9.novalocal-k8s-coredns--6f6b679f8f--6clwr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--af44d751a9.novalocal-k8s-coredns--6f6b679f8f--6clwr-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"699fc6c2-1efe-46a9-8707-c8d12e1267b3", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 32, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-af44d751a9.novalocal", ContainerID:"", Pod:"coredns-6f6b679f8f-6clwr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali08f671f023b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:33:17.431870 containerd[1483]: 2025-05-14 01:33:17.377 [INFO][3910] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.3/32] ContainerID="3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888" Namespace="kube-system" Pod="coredns-6f6b679f8f-6clwr" WorkloadEndpoint="ci--4284--0--0--n--af44d751a9.novalocal-k8s-coredns--6f6b679f8f--6clwr-eth0" May 14 01:33:17.431870 containerd[1483]: 2025-05-14 01:33:17.378 [INFO][3910] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08f671f023b ContainerID="3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888" Namespace="kube-system" Pod="coredns-6f6b679f8f-6clwr" WorkloadEndpoint="ci--4284--0--0--n--af44d751a9.novalocal-k8s-coredns--6f6b679f8f--6clwr-eth0" May 14 01:33:17.431870 containerd[1483]: 2025-05-14 01:33:17.399 [INFO][3910] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888" 
Namespace="kube-system" Pod="coredns-6f6b679f8f-6clwr" WorkloadEndpoint="ci--4284--0--0--n--af44d751a9.novalocal-k8s-coredns--6f6b679f8f--6clwr-eth0" May 14 01:33:17.431870 containerd[1483]: 2025-05-14 01:33:17.404 [INFO][3910] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888" Namespace="kube-system" Pod="coredns-6f6b679f8f-6clwr" WorkloadEndpoint="ci--4284--0--0--n--af44d751a9.novalocal-k8s-coredns--6f6b679f8f--6clwr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--af44d751a9.novalocal-k8s-coredns--6f6b679f8f--6clwr-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"699fc6c2-1efe-46a9-8707-c8d12e1267b3", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 32, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-af44d751a9.novalocal", ContainerID:"3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888", Pod:"coredns-6f6b679f8f-6clwr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali08f671f023b", MAC:"f2:fb:78:60:44:e1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:33:17.431870 containerd[1483]: 2025-05-14 01:33:17.427 [INFO][3910] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888" Namespace="kube-system" Pod="coredns-6f6b679f8f-6clwr" WorkloadEndpoint="ci--4284--0--0--n--af44d751a9.novalocal-k8s-coredns--6f6b679f8f--6clwr-eth0" May 14 01:33:17.463737 systemd[1]: Started cri-containerd-0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928.scope - libcontainer container 0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928. 
May 14 01:33:17.557639 containerd[1483]: time="2025-05-14T01:33:17.557562483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nj48f,Uid:0c9eb405-0467-45ae-98a3-2e9aaf13c0d6,Namespace:calico-system,Attempt:0,} returns sandbox id \"0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928\"" May 14 01:33:17.724353 containerd[1483]: time="2025-05-14T01:33:17.724297274Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" id:\"8098ea80a5978ecf116135b4682d40fe9cd5e614c30ebb8daaaf6f6c1f7f7164\" pid:4010 exit_status:1 exited_at:{seconds:1747186397 nanos:722912806}" May 14 01:33:17.766117 kernel: bpftool[4134]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 14 01:33:18.042492 systemd-networkd[1387]: vxlan.calico: Link UP May 14 01:33:18.042504 systemd-networkd[1387]: vxlan.calico: Gained carrier May 14 01:33:18.473137 systemd-networkd[1387]: cali08f671f023b: Gained IPv6LL May 14 01:33:18.830959 containerd[1483]: time="2025-05-14T01:33:18.830449545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ft6xv,Uid:6543ad7a-eb37-4f39-80dc-d640e5e21906,Namespace:kube-system,Attempt:0,}" May 14 01:33:19.239326 systemd-networkd[1387]: cali6b9d7fa5a2b: Gained IPv6LL May 14 01:33:19.816216 systemd-networkd[1387]: vxlan.calico: Gained IPv6LL May 14 01:33:21.089547 containerd[1483]: time="2025-05-14T01:33:21.089353550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:33:21.091822 containerd[1483]: time="2025-05-14T01:33:21.091726690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 14 01:33:21.093096 containerd[1483]: time="2025-05-14T01:33:21.092950879Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:33:21.096078 containerd[1483]: time="2025-05-14T01:33:21.095807297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:33:21.096557 containerd[1483]: time="2025-05-14T01:33:21.096498668Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 4.904354606s" May 14 01:33:21.096632 containerd[1483]: time="2025-05-14T01:33:21.096557625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 14 01:33:21.098807 containerd[1483]: time="2025-05-14T01:33:21.098574301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 14 01:33:21.139137 containerd[1483]: time="2025-05-14T01:33:21.139093913Z" level=info msg="CreateContainer within sandbox \"b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 14 
01:33:21.155094 containerd[1483]: time="2025-05-14T01:33:21.152219879Z" level=info msg="Container 6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613: CDI devices from CRI Config.CDIDevices: []" May 14 01:33:21.170700 containerd[1483]: time="2025-05-14T01:33:21.170558480Z" level=info msg="CreateContainer within sandbox \"b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\"" May 14 01:33:21.171981 containerd[1483]: time="2025-05-14T01:33:21.171926132Z" level=info msg="StartContainer for \"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\"" May 14 01:33:21.175028 containerd[1483]: time="2025-05-14T01:33:21.174945570Z" level=info msg="connecting to shim 6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613" address="unix:///run/containerd/s/c9acfebfcfea06044a899a8c90ca9547952386b59abf00caceb37fb842e20942" protocol=ttrpc version=3 May 14 01:33:21.249325 systemd[1]: Started cri-containerd-6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613.scope - libcontainer container 6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613. May 14 01:33:21.324050 containerd[1483]: time="2025-05-14T01:33:21.323994012Z" level=info msg="StartContainer for \"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" returns successfully" May 14 01:33:22.143354 kubelet[2703]: I0514 01:33:22.142700 2703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c8c97975-9bk6f" podStartSLOduration=29.235687834 podStartE2EDuration="34.142559529s" podCreationTimestamp="2025-05-14 01:32:48 +0000 UTC" firstStartedPulling="2025-05-14 01:33:16.190679826 +0000 UTC m=+41.488865649" lastFinishedPulling="2025-05-14 01:33:21.097551551 +0000 UTC m=+46.395737344" observedRunningTime="2025-05-14 01:33:22.136758647 +0000 UTC m=+47.434944500" watchObservedRunningTime="2025-05-14 01:33:22.142559529 +0000 UTC m=+47.440745372" May 14 01:33:23.107712 kubelet[2703]: I0514 01:33:23.107618 2703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 01:33:23.470838 containerd[1483]: time="2025-05-14T01:33:23.470691759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:33:23.472837 containerd[1483]: time="2025-05-14T01:33:23.472039495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 14 01:33:23.473968 containerd[1483]: time="2025-05-14T01:33:23.473920187Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:33:23.478404 containerd[1483]: time="2025-05-14T01:33:23.478375910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:33:23.480640 containerd[1483]: time="2025-05-14T01:33:23.480604092Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 2.381969851s" May 14 01:33:23.480742 containerd[1483]: time="2025-05-14T01:33:23.480671987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 14 01:33:23.484821 containerd[1483]: time="2025-05-14T01:33:23.484771412Z" level=info msg="CreateContainer within sandbox \"0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 14 01:33:23.509102 containerd[1483]: time="2025-05-14T01:33:23.507310107Z" level=info msg="Container 5d59847a7fb58822bcb7c53e84d4a5e355b41b07f6adba08e413e089a16a6179: CDI devices from CRI Config.CDIDevices: []" May 14 01:33:23.527053 containerd[1483]: time="2025-05-14T01:33:23.526971272Z" level=info msg="CreateContainer within sandbox \"0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5d59847a7fb58822bcb7c53e84d4a5e355b41b07f6adba08e413e089a16a6179\"" May 14 01:33:23.528374 containerd[1483]: time="2025-05-14T01:33:23.528306416Z" level=info msg="StartContainer for \"5d59847a7fb58822bcb7c53e84d4a5e355b41b07f6adba08e413e089a16a6179\"" May 14 01:33:23.531295 containerd[1483]: time="2025-05-14T01:33:23.531252833Z" level=info msg="connecting to shim 5d59847a7fb58822bcb7c53e84d4a5e355b41b07f6adba08e413e089a16a6179" address="unix:///run/containerd/s/ff2a0866dac88c249f6f9adad3b01679aa79d82179648c46783b53abe4fc7797" protocol=ttrpc version=3 May 14 01:33:23.568237 systemd[1]: Started cri-containerd-5d59847a7fb58822bcb7c53e84d4a5e355b41b07f6adba08e413e089a16a6179.scope - libcontainer container 5d59847a7fb58822bcb7c53e84d4a5e355b41b07f6adba08e413e089a16a6179. 
May 14 01:33:23.635182 containerd[1483]: time="2025-05-14T01:33:23.635082170Z" level=info msg="StartContainer for \"5d59847a7fb58822bcb7c53e84d4a5e355b41b07f6adba08e413e089a16a6179\" returns successfully" May 14 01:33:23.637238 containerd[1483]: time="2025-05-14T01:33:23.637029194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 14 01:33:25.831821 containerd[1483]: time="2025-05-14T01:33:25.831688349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c9fcd687-nm95d,Uid:75f313c4-ceb1-461e-90dc-dfe96fb4abb2,Namespace:calico-apiserver,Attempt:0,}" May 14 01:33:26.786988 containerd[1483]: time="2025-05-14T01:33:26.786867037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:33:26.833354 containerd[1483]: time="2025-05-14T01:33:26.833155555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 14 01:33:26.843854 containerd[1483]: time="2025-05-14T01:33:26.842985739Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:33:26.863355 containerd[1483]: time="2025-05-14T01:33:26.862484338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:33:26.867395 containerd[1483]: time="2025-05-14T01:33:26.867288059Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 3.230162155s" May 14 01:33:26.868033 containerd[1483]: time="2025-05-14T01:33:26.867397071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 14 01:33:26.873448 containerd[1483]: time="2025-05-14T01:33:26.872920648Z" level=info msg="CreateContainer within sandbox \"0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 14 01:33:26.894553 containerd[1483]: time="2025-05-14T01:33:26.894047211Z" level=info msg="Container 2afdc6f00bd49a8bb89bee319890f05198d6ddadcc5ac35401a1a3b23c744b86: CDI devices from CRI Config.CDIDevices: []" May 14 01:33:26.934997 containerd[1483]: time="2025-05-14T01:33:26.934889844Z" level=info msg="CreateContainer within sandbox \"0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2afdc6f00bd49a8bb89bee319890f05198d6ddadcc5ac35401a1a3b23c744b86\"" May 14 01:33:26.939163 containerd[1483]: time="2025-05-14T01:33:26.939001811Z" level=info msg="StartContainer for \"2afdc6f00bd49a8bb89bee319890f05198d6ddadcc5ac35401a1a3b23c744b86\"" May 14 01:33:26.950403 containerd[1483]: time="2025-05-14T01:33:26.950303237Z" level=info msg="connecting to shim 
2afdc6f00bd49a8bb89bee319890f05198d6ddadcc5ac35401a1a3b23c744b86" address="unix:///run/containerd/s/ff2a0866dac88c249f6f9adad3b01679aa79d82179648c46783b53abe4fc7797" protocol=ttrpc version=3 May 14 01:33:27.013308 systemd[1]: Started cri-containerd-2afdc6f00bd49a8bb89bee319890f05198d6ddadcc5ac35401a1a3b23c744b86.scope - libcontainer container 2afdc6f00bd49a8bb89bee319890f05198d6ddadcc5ac35401a1a3b23c744b86. May 14 01:33:27.066780 containerd[1483]: time="2025-05-14T01:33:27.066462439Z" level=info msg="StartContainer for \"2afdc6f00bd49a8bb89bee319890f05198d6ddadcc5ac35401a1a3b23c744b86\" returns successfully" May 14 01:33:27.154399 kubelet[2703]: I0514 01:33:27.152858 2703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-nj48f" podStartSLOduration=29.843545327 podStartE2EDuration="39.152778627s" podCreationTimestamp="2025-05-14 01:32:48 +0000 UTC" firstStartedPulling="2025-05-14 01:33:17.560130734 +0000 UTC m=+42.858316527" lastFinishedPulling="2025-05-14 01:33:26.869363983 +0000 UTC m=+52.167549827" observedRunningTime="2025-05-14 01:33:27.151400914 +0000 UTC m=+52.449586716" watchObservedRunningTime="2025-05-14 01:33:27.152778627 +0000 UTC m=+52.450964470" May 14 01:33:27.969360 kubelet[2703]: I0514 01:33:27.969205 2703 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 14 01:33:27.970410 kubelet[2703]: I0514 01:33:27.970012 2703 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 14 01:33:28.836492 containerd[1483]: time="2025-05-14T01:33:28.831957625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c9fcd687-z2wlk,Uid:9f48ef50-86f3-4b80-bf67-25a2deb3788b,Namespace:calico-apiserver,Attempt:0,}" May 14 01:33:29.517929 kubelet[2703]: I0514 01:33:29.517645 2703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 01:33:29.663942 containerd[1483]: time="2025-05-14T01:33:29.663889948Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"fe77a75559c335b2a65fdb67031f6acd7bbe4ab3844184941f615a47fba6c30d\" pid:4360 exited_at:{seconds:1747186409 nanos:663418278}" May 14 01:33:29.669279 containerd[1483]: time="2025-05-14T01:33:29.669229399Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" id:\"750c4de4cc5fb630dee9abdb546c33127081b0467c21489901123fc4a16ab515\" pid:4348 exited_at:{seconds:1747186409 nanos:668454212}" May 14 01:33:29.764360 containerd[1483]: time="2025-05-14T01:33:29.764001160Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"a36a46860e53213471f9c5e2b4604bfe845c71eff48d47853140475727c12561\" pid:4384 exited_at:{seconds:1747186409 nanos:763791779}" May 14 01:33:45.832153 kubelet[2703]: E0514 01:33:45.831712 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:33:45.932566 kubelet[2703]: E0514 01:33:45.932403 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:33:46.132867 kubelet[2703]: E0514 01:33:46.132794 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 
01:33:46.533240 kubelet[2703]: E0514 01:33:46.532999 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:33:46.676187 kubelet[2703]: I0514 01:33:46.675604 2703 setters.go:600] "Node became not ready" node="ci-4284-0-0-n-af44d751a9.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-14T01:33:46Z","lastTransitionTime":"2025-05-14T01:33:46Z","reason":"KubeletNotReady","message":"container runtime is down"} May 14 01:33:47.334275 kubelet[2703]: E0514 01:33:47.334191 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:33:48.936335 kubelet[2703]: E0514 01:33:48.936258 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:33:52.137032 kubelet[2703]: E0514 01:33:52.136912 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:33:56.206364 containerd[1483]: time="2025-05-14T01:33:56.206016753Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"f351ddc9f4150817c65d009bc85a96a38ecd6e7f9583a44d56663351507bc6ff\" pid:4421 exited_at:{seconds:1747186436 nanos:204722498}" May 14 01:33:57.138244 kubelet[2703]: E0514 01:33:57.138042 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:33:59.501111 containerd[1483]: time="2025-05-14T01:33:59.500548672Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"8e1c968325574a027db922c31f74a23c2cf02cf1cf883993fcfa84e13300c9e6\" pid:4451 exited_at:{seconds:1747186439 nanos:500002246}" May 14 01:33:59.551376 containerd[1483]: time="2025-05-14T01:33:59.551315407Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" id:\"2b1a16c62147baafff052ec641f6fa2d1be7426336e1295a70b7d8d2bac6a3f4\" pid:4470 exited_at:{seconds:1747186439 nanos:550883892}" May 14 01:34:02.139465 kubelet[2703]: E0514 01:34:02.139345 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:34:07.140561 kubelet[2703]: E0514 01:34:07.139724 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:34:12.141284 kubelet[2703]: E0514 01:34:12.141160 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:34:17.143162 kubelet[2703]: E0514 01:34:17.141556 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:34:22.143374 kubelet[2703]: E0514 01:34:22.142830 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:34:27.144128 kubelet[2703]: E0514 01:34:27.143965 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:34:29.502993 containerd[1483]: time="2025-05-14T01:34:29.502769006Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"7ea4e02d9e7286f6289ac5ea96a4c87c7c3a0673e1ffe7feefbdb73e77878502\" pid:4501 exited_at:{seconds:1747186469 nanos:501672190}" May 14 01:34:29.583206 containerd[1483]: time="2025-05-14T01:34:29.583010534Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" 
id:\"ca77356d8a9fd8e374785086e0487de4138e132d8a91023f11f4582f7c5ca42f\" pid:4516 exited_at:{seconds:1747186469 nanos:582423295}" May 14 01:34:32.144922 kubelet[2703]: E0514 01:34:32.144623 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:34:37.145927 kubelet[2703]: E0514 01:34:37.145779 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:34:42.146949 kubelet[2703]: E0514 01:34:42.146800 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:34:47.147492 kubelet[2703]: E0514 01:34:47.147375 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:34:52.149015 kubelet[2703]: E0514 01:34:52.148473 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:34:56.155445 containerd[1483]: time="2025-05-14T01:34:56.155310367Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"1df11fd5ba821153aae61c580444ad69b8868d60c8cdc6ec3cbb2f1ac4c6bbf4\" pid:4569 exited_at:{seconds:1747186496 nanos:154663496}" May 14 01:34:57.150300 kubelet[2703]: E0514 01:34:57.149910 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:34:59.494284 containerd[1483]: time="2025-05-14T01:34:59.494231604Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"07059a6d17ddb56e1e1ea5dae9818ead72212a65075a492534f92d73a6478026\" pid:4590 exited_at:{seconds:1747186499 nanos:493824504}" May 14 01:34:59.568474 containerd[1483]: time="2025-05-14T01:34:59.568135739Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" id:\"c76277902963ba66c2b129f475a5b9b6fe46058888c52ec7e357fbfd74ac8f17\" pid:4609 exited_at:{seconds:1747186499 nanos:567605737}" May 14 01:35:02.151648 kubelet[2703]: E0514 01:35:02.151457 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:35:07.151971 kubelet[2703]: E0514 01:35:07.151779 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:35:12.153268 kubelet[2703]: E0514 01:35:12.152596 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:35:17.153540 kubelet[2703]: E0514 01:35:17.153323 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:35:19.937106 kubelet[2703]: E0514 01:35:19.936573 2703 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 14 01:35:19.937106 kubelet[2703]: E0514 01:35:19.936793 2703 kubelet.go:2886] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 14 01:35:22.154890 kubelet[2703]: E0514 01:35:22.154541 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:35:27.156397 kubelet[2703]: E0514 01:35:27.155770 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:35:29.516978 containerd[1483]: time="2025-05-14T01:35:29.515842509Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" 
id:\"53424871039f41173f1d4e1875ba6dca5571ab01ef6bc1c2dcd0538df303993d\" pid:4643 exited_at:{seconds:1747186529 nanos:514086958}" May 14 01:35:29.568093 containerd[1483]: time="2025-05-14T01:35:29.567532943Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" id:\"0aded0d5a49e0f79c47ca5baac39607f7c23c9b2de500758c9efa59f89427b1b\" pid:4656 exited_at:{seconds:1747186529 nanos:566775253}" May 14 01:35:32.156729 kubelet[2703]: E0514 01:35:32.156625 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:35:37.157293 kubelet[2703]: E0514 01:35:37.157026 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:35:42.158316 kubelet[2703]: E0514 01:35:42.158186 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:35:47.159446 kubelet[2703]: E0514 01:35:47.159334 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:35:52.160830 kubelet[2703]: E0514 01:35:52.160268 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:35:56.232864 containerd[1483]: time="2025-05-14T01:35:56.232303957Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"c04edb23947767e868399498f8ad6f6ebd77dff9dda7a2d096cf5c89c908a36b\" pid:4687 exited_at:{seconds:1747186556 nanos:230591195}" May 14 01:35:57.161641 kubelet[2703]: E0514 01:35:57.161513 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:35:59.488348 containerd[1483]: time="2025-05-14T01:35:59.487873427Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"4f8268e1d86da3c272f8409bb80cd07eb4a8262c5dcd506cf4820075e037dd3e\" pid:4718 exited_at:{seconds:1747186559 nanos:487642802}" May 14 01:35:59.563957 containerd[1483]: time="2025-05-14T01:35:59.563905234Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" id:\"3b2a74dd70b0e6375d4ad449dd95eb2ee44afa618ce55c49ee51d9ea3330518e\" pid:4741 exited_at:{seconds:1747186559 nanos:563501489}" May 14 01:36:02.162602 kubelet[2703]: E0514 01:36:02.162371 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:36:07.162740 kubelet[2703]: E0514 01:36:07.162632 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:36:12.163295 kubelet[2703]: E0514 01:36:12.163166 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:36:17.164723 kubelet[2703]: E0514 01:36:17.164463 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:36:22.165407 kubelet[2703]: E0514 01:36:22.165264 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:36:27.166665 kubelet[2703]: E0514 01:36:27.166546 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:36:29.592148 containerd[1483]: time="2025-05-14T01:36:29.591615865Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"73408fb8837da3402106b833f1fa07ddd85bd7e18eb5a5223feffabd1df9cf3f\" pid:4792 
exited_at:{seconds:1747186589 nanos:590768830}" May 14 01:36:29.658617 containerd[1483]: time="2025-05-14T01:36:29.657615736Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" id:\"0015177d29a64a6cc8aa2af467e83fd467f1176c88ff49955d8302ca04b92ca4\" pid:4790 exited_at:{seconds:1747186589 nanos:656309515}" May 14 01:36:32.167638 kubelet[2703]: E0514 01:36:32.167507 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:36:37.171211 kubelet[2703]: E0514 01:36:37.170509 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:36:42.171683 kubelet[2703]: E0514 01:36:42.171525 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:36:47.172050 kubelet[2703]: E0514 01:36:47.171986 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:36:52.172673 kubelet[2703]: E0514 01:36:52.172582 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:36:56.223553 containerd[1483]: time="2025-05-14T01:36:56.223380483Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"233fa34ef475c2c988bc8e0c17e6656408052f3e39b1abcac08d2a4a43e7d9c2\" pid:4836 exited_at:{seconds:1747186616 nanos:222460961}" May 14 01:36:57.173577 kubelet[2703]: E0514 01:36:57.173460 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:36:59.493143 containerd[1483]: time="2025-05-14T01:36:59.492924993Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"a6a154d9aa2d11e2f2f4c2f91f1785d6a9c9220a4b30208cb4b4c8fc1cc8493c\" pid:4858 exited_at:{seconds:1747186619 nanos:492230037}" May 14 01:36:59.565478 containerd[1483]: time="2025-05-14T01:36:59.565273359Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" id:\"552702ad5dfb09bc95c16901e16b18290173048720d2114a60776be877bc5b41\" pid:4878 exited_at:{seconds:1747186619 nanos:564759749}" May 14 01:37:02.173762 kubelet[2703]: E0514 01:37:02.173654 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:37:07.174582 kubelet[2703]: E0514 01:37:07.174456 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:37:08.167730 systemd[1]: Started sshd@9-172.24.4.47:22-172.24.4.1:41074.service - OpenSSH per-connection server daemon (172.24.4.1:41074). May 14 01:37:09.402167 sshd[4897]: Accepted publickey for core from 172.24.4.1 port 41074 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI May 14 01:37:09.407036 sshd-session[4897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 01:37:09.426243 systemd-logind[1457]: New session 12 of user core. May 14 01:37:09.435437 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 01:37:10.150898 sshd[4899]: Connection closed by 172.24.4.1 port 41074 May 14 01:37:10.150595 sshd-session[4897]: pam_unix(sshd:session): session closed for user core May 14 01:37:10.157913 systemd[1]: sshd@9-172.24.4.47:22-172.24.4.1:41074.service: Deactivated successfully. May 14 01:37:10.165971 systemd[1]: session-12.scope: Deactivated successfully. 
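The long run of "Skipping pod synchronization ... container runtime is down" entries, punctuated by the 01:35:19 "Status from runtime service failed" / "Container runtime sanity check failed" pair, is kubelet's runtime health check failing: its periodic CRI Status RPC stopped answering within the deadline, so the node was marked NotReady at 01:33:46 and pod syncs are skipped, while shims that were already running keep emitting TaskExit events. A minimal standalone probe in the same spirit is sketched below; the CRI calls are the published runtime/v1 API, but the socket path is the conventional containerd default and an assumption here.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	pb "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI socket (assumed default path) and ask for runtime status
	// with a short deadline, as kubelet's health check does.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := pb.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	resp, err := client.Status(ctx, &pb.StatusRequest{})
	if err != nil {
		// A hung runtime surfaces here as DeadlineExceeded, as in the log.
		log.Fatalf("runtime status failed: %v", err)
	}
	for _, cond := range resp.Status.Conditions {
		fmt.Printf("%s=%v (%s)\n", cond.Type, cond.Status, cond.Reason)
	}
}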
May 14 01:37:10.171115 systemd-logind[1457]: Session 12 logged out. Waiting for processes to exit.
May 14 01:37:10.174138 systemd-logind[1457]: Removed session 12.
May 14 01:37:12.175578 kubelet[2703]: E0514 01:37:12.175352 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:37:15.189503 systemd[1]: Started sshd@10-172.24.4.47:22-172.24.4.1:51460.service - OpenSSH per-connection server daemon (172.24.4.1:51460).
May 14 01:37:16.727277 sshd[4916]: Accepted publickey for core from 172.24.4.1 port 51460 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:37:16.734650 sshd-session[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:37:16.758335 systemd-logind[1457]: New session 13 of user core.
May 14 01:37:16.776234 systemd[1]: Started session-13.scope - Session 13 of User core.
May 14 01:37:16.835980 kubelet[2703]: E0514 01:37:16.834244 2703 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 01:37:16.835980 kubelet[2703]: E0514 01:37:16.834628 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-6f6b679f8f-6clwr"
May 14 01:37:16.835980 kubelet[2703]: E0514 01:37:16.834790 2703 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-6f6b679f8f-6clwr"
May 14 01:37:16.856375 kubelet[2703]: E0514 01:37:16.855981 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-6clwr_kube-system(699fc6c2-1efe-46a9-8707-c8d12e1267b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-6clwr_kube-system(699fc6c2-1efe-46a9-8707-c8d12e1267b3)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/coredns-6f6b679f8f-6clwr" podUID="699fc6c2-1efe-46a9-8707-c8d12e1267b3"
May 14 01:37:17.175752 kubelet[2703]: E0514 01:37:17.175631 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:37:17.316726 containerd[1483]: time="2025-05-14T01:37:17.316576100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6clwr,Uid:699fc6c2-1efe-46a9-8707-c8d12e1267b3,Namespace:kube-system,Attempt:0,}"
May 14 01:37:17.317411 containerd[1483]: time="2025-05-14T01:37:17.316882999Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6clwr,Uid:699fc6c2-1efe-46a9-8707-c8d12e1267b3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to reserve sandbox name \"coredns-6f6b679f8f-6clwr_kube-system_699fc6c2-1efe-46a9-8707-c8d12e1267b3_0\": name \"coredns-6f6b679f8f-6clwr_kube-system_699fc6c2-1efe-46a9-8707-c8d12e1267b3_0\" is reserved for \"3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888\""
May 14 01:37:17.317510 kubelet[2703]: E0514 01:37:17.317195 2703 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"coredns-6f6b679f8f-6clwr_kube-system_699fc6c2-1efe-46a9-8707-c8d12e1267b3_0\": name \"coredns-6f6b679f8f-6clwr_kube-system_699fc6c2-1efe-46a9-8707-c8d12e1267b3_0\" is reserved for \"3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888\""
May 14 01:37:17.317510 kubelet[2703]: E0514 01:37:17.317275 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"coredns-6f6b679f8f-6clwr_kube-system_699fc6c2-1efe-46a9-8707-c8d12e1267b3_0\": name \"coredns-6f6b679f8f-6clwr_kube-system_699fc6c2-1efe-46a9-8707-c8d12e1267b3_0\" is reserved for \"3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888\"" pod="kube-system/coredns-6f6b679f8f-6clwr"
May 14 01:37:17.317510 kubelet[2703]: E0514 01:37:17.317305 2703 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"coredns-6f6b679f8f-6clwr_kube-system_699fc6c2-1efe-46a9-8707-c8d12e1267b3_0\": name \"coredns-6f6b679f8f-6clwr_kube-system_699fc6c2-1efe-46a9-8707-c8d12e1267b3_0\" is reserved for \"3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888\"" pod="kube-system/coredns-6f6b679f8f-6clwr"
May 14 01:37:17.317510 kubelet[2703]: E0514 01:37:17.317367 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-6clwr_kube-system(699fc6c2-1efe-46a9-8707-c8d12e1267b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-6clwr_kube-system(699fc6c2-1efe-46a9-8707-c8d12e1267b3)\\\": rpc error: code = Unknown desc = failed to reserve sandbox name \\\"coredns-6f6b679f8f-6clwr_kube-system_699fc6c2-1efe-46a9-8707-c8d12e1267b3_0\\\": name \\\"coredns-6f6b679f8f-6clwr_kube-system_699fc6c2-1efe-46a9-8707-c8d12e1267b3_0\\\" is reserved for \\\"3788a387b4f9eb9e3bdbe9aca571d6debd56cfe1b4a8e4dc03ec6270a338f888\\\"\"" pod="kube-system/coredns-6f6b679f8f-6clwr" podUID="699fc6c2-1efe-46a9-8707-c8d12e1267b3"
May 14 01:37:17.585338 sshd[4918]: Connection closed by 172.24.4.1 port 51460
May 14 01:37:17.585679 sshd-session[4916]: pam_unix(sshd:session): session closed for user core
May 14 01:37:17.594893 systemd[1]: sshd@10-172.24.4.47:22-172.24.4.1:51460.service: Deactivated successfully.
May 14 01:37:17.602858 systemd[1]: session-13.scope: Deactivated successfully.
May 14 01:37:17.611839 systemd-logind[1457]: Session 13 logged out. Waiting for processes to exit.
May 14 01:37:17.616366 systemd-logind[1457]: Removed session 13.
May 14 01:37:18.831007 kubelet[2703]: E0514 01:37:18.830813 2703 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 01:37:18.833056 kubelet[2703]: E0514 01:37:18.832056 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-6f6b679f8f-ft6xv"
May 14 01:37:18.833056 kubelet[2703]: E0514 01:37:18.832185 2703 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-6f6b679f8f-ft6xv"
May 14 01:37:18.833056 kubelet[2703]: E0514 01:37:18.832316 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-ft6xv_kube-system(6543ad7a-eb37-4f39-80dc-d640e5e21906)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-ft6xv_kube-system(6543ad7a-eb37-4f39-80dc-d640e5e21906)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/coredns-6f6b679f8f-ft6xv" podUID="6543ad7a-eb37-4f39-80dc-d640e5e21906"
May 14 01:37:22.176304 kubelet[2703]: E0514 01:37:22.176201 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:37:22.615809 systemd[1]: Started sshd@11-172.24.4.47:22-172.24.4.1:51466.service - OpenSSH per-connection server daemon (172.24.4.1:51466).
May 14 01:37:23.830375 sshd[4932]: Accepted publickey for core from 172.24.4.1 port 51466 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:37:23.835344 sshd-session[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:37:23.848471 systemd-logind[1457]: New session 14 of user core.
May 14 01:37:23.861472 systemd[1]: Started session-14.scope - Session 14 of User core.
May 14 01:37:24.576282 sshd[4934]: Connection closed by 172.24.4.1 port 51466
May 14 01:37:24.577195 sshd-session[4932]: pam_unix(sshd:session): session closed for user core
May 14 01:37:24.582306 systemd[1]: sshd@11-172.24.4.47:22-172.24.4.1:51466.service: Deactivated successfully.
May 14 01:37:24.586468 systemd[1]: session-14.scope: Deactivated successfully.
May 14 01:37:24.589304 systemd-logind[1457]: Session 14 logged out. Waiting for processes to exit.
May 14 01:37:24.592438 systemd-logind[1457]: Removed session 14.
May 14 01:37:24.939183 kubelet[2703]: E0514 01:37:24.938709 2703 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 01:37:24.939183 kubelet[2703]: E0514 01:37:24.938893 2703 kubelet.go:2886] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 01:37:25.831021 kubelet[2703]: E0514 01:37:25.830876 2703 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 01:37:25.831841 kubelet[2703]: E0514 01:37:25.831170 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-apiserver/calico-apiserver-67c9fcd687-nm95d"
May 14 01:37:25.831841 kubelet[2703]: E0514 01:37:25.831230 2703 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-apiserver/calico-apiserver-67c9fcd687-nm95d"
May 14 01:37:25.831841 kubelet[2703]: E0514 01:37:25.831434 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67c9fcd687-nm95d_calico-apiserver(75f313c4-ceb1-461e-90dc-dfe96fb4abb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67c9fcd687-nm95d_calico-apiserver(75f313c4-ceb1-461e-90dc-dfe96fb4abb2)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="calico-apiserver/calico-apiserver-67c9fcd687-nm95d" podUID="75f313c4-ceb1-461e-90dc-dfe96fb4abb2"
May 14 01:37:27.176575 kubelet[2703]: E0514 01:37:27.176368 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:37:28.833716 kubelet[2703]: E0514 01:37:28.832352 2703 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 01:37:28.833716 kubelet[2703]: E0514 01:37:28.832592 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-apiserver/calico-apiserver-67c9fcd687-z2wlk"
May 14 01:37:28.833716 kubelet[2703]: E0514 01:37:28.832637 2703 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-apiserver/calico-apiserver-67c9fcd687-z2wlk"
May 14 01:37:28.833716 kubelet[2703]: E0514 01:37:28.832771 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67c9fcd687-z2wlk_calico-apiserver(9f48ef50-86f3-4b80-bf67-25a2deb3788b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67c9fcd687-z2wlk_calico-apiserver(9f48ef50-86f3-4b80-bf67-25a2deb3788b)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="calico-apiserver/calico-apiserver-67c9fcd687-z2wlk" podUID="9f48ef50-86f3-4b80-bf67-25a2deb3788b"
May 14 01:37:28.963246 containerd[1483]: time="2025-05-14T01:37:28.962869033Z" level=warning msg="container event discarded" container=5aac7fc27b4233bbc8bc9d050bedda6d5a64e2d3f55494d3092b05b3daeaae41 type=CONTAINER_CREATED_EVENT
May 14 01:37:28.974744 containerd[1483]: time="2025-05-14T01:37:28.974576862Z" level=warning msg="container event discarded" container=5aac7fc27b4233bbc8bc9d050bedda6d5a64e2d3f55494d3092b05b3daeaae41 type=CONTAINER_STARTED_EVENT
May 14 01:37:28.974744 containerd[1483]: time="2025-05-14T01:37:28.974675212Z" level=warning msg="container event discarded" container=dc865d2ce7f6471fb633eb17d453374682224801ac0f494911d1aa0ff4578c35 type=CONTAINER_CREATED_EVENT
May 14 01:37:28.975019 containerd[1483]: time="2025-05-14T01:37:28.974740998Z" level=warning msg="container event discarded" container=dc865d2ce7f6471fb633eb17d453374682224801ac0f494911d1aa0ff4578c35 type=CONTAINER_STARTED_EVENT
May 14 01:37:29.077370 containerd[1483]: time="2025-05-14T01:37:29.077250119Z" level=warning msg="container event discarded" container=747778ea398e7698bc452f37d241e8c048ca60045320ef3ba47cb64966fdbd66 type=CONTAINER_CREATED_EVENT
May 14 01:37:29.077370 containerd[1483]: time="2025-05-14T01:37:29.077342777Z" level=warning msg="container event discarded" container=67eb9e487799a7f1966b9cacbbe8cb01dc31d6656bdb91e75cb38a575a6e0a92 type=CONTAINER_CREATED_EVENT
May 14 01:37:29.178879 containerd[1483]: time="2025-05-14T01:37:29.178724001Z" level=warning msg="container event discarded" container=896ce54f4bcc319aca84ad68557d4854a008fb3687e17f50b93c962942b8184a type=CONTAINER_CREATED_EVENT
May 14 01:37:29.178879 containerd[1483]: time="2025-05-14T01:37:29.178822179Z" level=warning msg="container event discarded" container=896ce54f4bcc319aca84ad68557d4854a008fb3687e17f50b93c962942b8184a type=CONTAINER_STARTED_EVENT
May 14 01:37:29.218415 containerd[1483]: time="2025-05-14T01:37:29.218277341Z" level=warning msg="container event discarded" container=747778ea398e7698bc452f37d241e8c048ca60045320ef3ba47cb64966fdbd66 type=CONTAINER_STARTED_EVENT
May 14 01:37:29.218415 containerd[1483]: time="2025-05-14T01:37:29.218385679Z" level=warning msg="container event discarded" container=67eb9e487799a7f1966b9cacbbe8cb01dc31d6656bdb91e75cb38a575a6e0a92 type=CONTAINER_STARTED_EVENT
May 14 01:37:29.233257 containerd[1483]: time="2025-05-14T01:37:29.233106677Z" level=warning msg="container event discarded" container=d5389e43cb38e5935d657263bc05101b3723aeca87ec916032b4817031acfb92 type=CONTAINER_CREATED_EVENT
May 14 01:37:29.365789 containerd[1483]: time="2025-05-14T01:37:29.364176891Z" level=warning msg="container event discarded" container=d5389e43cb38e5935d657263bc05101b3723aeca87ec916032b4817031acfb92 type=CONTAINER_STARTED_EVENT
May 14 01:37:29.446452 containerd[1483]: time="2025-05-14T01:37:29.446269514Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"7779f00094d03e09ce0ab0b47b6862f40159eccb79f3f6a8792cb78eef9fab57\" pid:4958 exited_at:{seconds:1747186649 nanos:443351953}"
May 14 01:37:29.545291 containerd[1483]: time="2025-05-14T01:37:29.545199192Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" id:\"67d4e99492b2a28e3b2aadd46e44b64c470016c8c8d8a742d75776271ebb734a\" pid:4980 exited_at:{seconds:1747186649 nanos:544613877}"
May 14 01:37:29.592775 systemd[1]: Started sshd@12-172.24.4.47:22-172.24.4.1:56170.service - OpenSSH per-connection server daemon (172.24.4.1:56170).
May 14 01:37:31.047574 sshd[4993]: Accepted publickey for core from 172.24.4.1 port 56170 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:37:31.050364 sshd-session[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:37:31.063520 systemd-logind[1457]: New session 15 of user core.
May 14 01:37:31.072500 systemd[1]: Started session-15.scope - Session 15 of User core.
May 14 01:37:31.883251 sshd[4995]: Connection closed by 172.24.4.1 port 56170
May 14 01:37:31.886048 sshd-session[4993]: pam_unix(sshd:session): session closed for user core
May 14 01:37:31.907792 systemd-logind[1457]: Session 15 logged out. Waiting for processes to exit.
May 14 01:37:31.909454 systemd[1]: sshd@12-172.24.4.47:22-172.24.4.1:56170.service: Deactivated successfully.
May 14 01:37:31.918184 systemd[1]: session-15.scope: Deactivated successfully.
May 14 01:37:31.921802 systemd-logind[1457]: Removed session 15.
May 14 01:37:32.178206 kubelet[2703]: E0514 01:37:32.177237 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:37:36.910499 systemd[1]: Started sshd@13-172.24.4.47:22-172.24.4.1:55862.service - OpenSSH per-connection server daemon (172.24.4.1:55862).
May 14 01:37:37.178951 kubelet[2703]: E0514 01:37:37.178320 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:37:38.189732 sshd[5011]: Accepted publickey for core from 172.24.4.1 port 55862 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:37:38.194389 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:37:38.216109 systemd-logind[1457]: New session 16 of user core.
May 14 01:37:38.223039 systemd[1]: Started session-16.scope - Session 16 of User core.
May 14 01:37:38.998383 sshd[5013]: Connection closed by 172.24.4.1 port 55862
May 14 01:37:38.999060 sshd-session[5011]: pam_unix(sshd:session): session closed for user core
May 14 01:37:39.009978 systemd[1]: sshd@13-172.24.4.47:22-172.24.4.1:55862.service: Deactivated successfully.
May 14 01:37:39.017673 systemd[1]: session-16.scope: Deactivated successfully.
May 14 01:37:39.021273 systemd-logind[1457]: Session 16 logged out. Waiting for processes to exit.
May 14 01:37:39.024379 systemd-logind[1457]: Removed session 16.
May 14 01:37:41.476664 containerd[1483]: time="2025-05-14T01:37:41.476220460Z" level=warning msg="container event discarded" container=2a1b0cd64e89498c5cb316d802f3e6de8673153ec01bb416c02e2d304afd3d00 type=CONTAINER_CREATED_EVENT
May 14 01:37:41.476664 containerd[1483]: time="2025-05-14T01:37:41.476592577Z" level=warning msg="container event discarded" container=2a1b0cd64e89498c5cb316d802f3e6de8673153ec01bb416c02e2d304afd3d00 type=CONTAINER_STARTED_EVENT
May 14 01:37:41.505716 containerd[1483]: time="2025-05-14T01:37:41.505531349Z" level=warning msg="container event discarded" container=1906bf41ede8a2e7c0450d1196d2458a17632539dcf769b7aaa189d1e7f75a08 type=CONTAINER_CREATED_EVENT
May 14 01:37:41.505716 containerd[1483]: time="2025-05-14T01:37:41.505656561Z" level=warning msg="container event discarded" container=1906bf41ede8a2e7c0450d1196d2458a17632539dcf769b7aaa189d1e7f75a08 type=CONTAINER_STARTED_EVENT
May 14 01:37:41.517254 containerd[1483]: time="2025-05-14T01:37:41.517035732Z" level=warning msg="container event discarded" container=92ece7435bed4899f0ded86906c6773cabac66e6881cb54e60b651beb943646d type=CONTAINER_CREATED_EVENT
May 14 01:37:41.584786 containerd[1483]: time="2025-05-14T01:37:41.584598352Z" level=warning msg="container event discarded" container=92ece7435bed4899f0ded86906c6773cabac66e6881cb54e60b651beb943646d type=CONTAINER_STARTED_EVENT
May 14 01:37:42.179294 kubelet[2703]: E0514 01:37:42.179151 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:37:44.051476 systemd[1]: Started sshd@14-172.24.4.47:22-172.24.4.1:37340.service - OpenSSH per-connection server daemon (172.24.4.1:37340).
May 14 01:37:44.580676 containerd[1483]: time="2025-05-14T01:37:44.580031300Z" level=warning msg="container event discarded" container=e8357cc6c0bca3915520ca574c548745fa547535d1fdc3b59fec1787b626e5f0 type=CONTAINER_CREATED_EVENT
May 14 01:37:44.645564 containerd[1483]: time="2025-05-14T01:37:44.645419082Z" level=warning msg="container event discarded" container=e8357cc6c0bca3915520ca574c548745fa547535d1fdc3b59fec1787b626e5f0 type=CONTAINER_STARTED_EVENT
May 14 01:37:45.392475 sshd[5034]: Accepted publickey for core from 172.24.4.1 port 37340 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:37:45.396464 sshd-session[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:37:45.410507 systemd-logind[1457]: New session 17 of user core.
May 14 01:37:45.417336 systemd[1]: Started session-17.scope - Session 17 of User core.
May 14 01:37:46.159591 sshd[5036]: Connection closed by 172.24.4.1 port 37340
May 14 01:37:46.160862 sshd-session[5034]: pam_unix(sshd:session): session closed for user core
May 14 01:37:46.166797 systemd[1]: sshd@14-172.24.4.47:22-172.24.4.1:37340.service: Deactivated successfully.
May 14 01:37:46.172396 systemd[1]: session-17.scope: Deactivated successfully.
May 14 01:37:46.175513 systemd-logind[1457]: Session 17 logged out. Waiting for processes to exit.
May 14 01:37:46.178573 systemd-logind[1457]: Removed session 17.
May 14 01:37:47.180519 kubelet[2703]: E0514 01:37:47.180423 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:37:48.598668 containerd[1483]: time="2025-05-14T01:37:48.598234787Z" level=warning msg="container event discarded" container=f9eade5d54a0b99fe737400499d690534f2e138f7f01552b5fb66f9868aa2144 type=CONTAINER_CREATED_EVENT
May 14 01:37:48.598668 containerd[1483]: time="2025-05-14T01:37:48.598639679Z" level=warning msg="container event discarded" container=f9eade5d54a0b99fe737400499d690534f2e138f7f01552b5fb66f9868aa2144 type=CONTAINER_STARTED_EVENT
May 14 01:37:48.694753 containerd[1483]: time="2025-05-14T01:37:48.694588374Z" level=warning msg="container event discarded" container=b8bc851be4e92a52acbfda6a4ad891593dfd8e3cd1088a50409b6b17c1dcacfa type=CONTAINER_CREATED_EVENT
May 14 01:37:48.694753 containerd[1483]: time="2025-05-14T01:37:48.694738794Z" level=warning msg="container event discarded" container=b8bc851be4e92a52acbfda6a4ad891593dfd8e3cd1088a50409b6b17c1dcacfa type=CONTAINER_STARTED_EVENT
May 14 01:37:51.177349 systemd[1]: Started sshd@15-172.24.4.47:22-172.24.4.1:37344.service - OpenSSH per-connection server daemon (172.24.4.1:37344).
May 14 01:37:51.775201 containerd[1483]: time="2025-05-14T01:37:51.775055163Z" level=warning msg="container event discarded" container=7e42388305944b3f7fe620fb7fe1a9ec24cd3d1abd1f39ddebc36f88ae9d0c52 type=CONTAINER_CREATED_EVENT
May 14 01:37:51.857624 containerd[1483]: time="2025-05-14T01:37:51.857496236Z" level=warning msg="container event discarded" container=7e42388305944b3f7fe620fb7fe1a9ec24cd3d1abd1f39ddebc36f88ae9d0c52 type=CONTAINER_STARTED_EVENT
May 14 01:37:52.181562 kubelet[2703]: E0514 01:37:52.181457 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:37:52.468891 sshd[5050]: Accepted publickey for core from 172.24.4.1 port 37344 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:37:52.472292 sshd-session[5050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:37:52.486294 systemd-logind[1457]: New session 18 of user core.
May 14 01:37:52.496474 systemd[1]: Started session-18.scope - Session 18 of User core.
May 14 01:37:53.355946 sshd[5052]: Connection closed by 172.24.4.1 port 37344
May 14 01:37:53.357321 sshd-session[5050]: pam_unix(sshd:session): session closed for user core
May 14 01:37:53.368605 systemd[1]: sshd@15-172.24.4.47:22-172.24.4.1:37344.service: Deactivated successfully.
May 14 01:37:53.375571 systemd[1]: session-18.scope: Deactivated successfully.
May 14 01:37:53.378310 systemd-logind[1457]: Session 18 logged out. Waiting for processes to exit.
May 14 01:37:53.380017 systemd-logind[1457]: Removed session 18.
May 14 01:37:53.993264 containerd[1483]: time="2025-05-14T01:37:53.993051398Z" level=warning msg="container event discarded" container=f53c8afee45e33d85e88cc393a587cea34ad980ac476b36381a260d73be0c040 type=CONTAINER_CREATED_EVENT
May 14 01:37:54.090714 containerd[1483]: time="2025-05-14T01:37:54.090587305Z" level=warning msg="container event discarded" container=f53c8afee45e33d85e88cc393a587cea34ad980ac476b36381a260d73be0c040 type=CONTAINER_STARTED_EVENT
May 14 01:37:54.880117 containerd[1483]: time="2025-05-14T01:37:54.879938022Z" level=warning msg="container event discarded" container=f53c8afee45e33d85e88cc393a587cea34ad980ac476b36381a260d73be0c040 type=CONTAINER_STOPPED_EVENT
May 14 01:37:56.230631 containerd[1483]: time="2025-05-14T01:37:56.230211073Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"beb9914505fe91f64ec87764470b638726a8034c4012ed964855520aa9f9a7b9\" pid:5076 exited_at:{seconds:1747186676 nanos:229427600}"
May 14 01:37:57.182521 kubelet[2703]: E0514 01:37:57.182465 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:37:58.394756 systemd[1]: Started sshd@16-172.24.4.47:22-172.24.4.1:32958.service - OpenSSH per-connection server daemon (172.24.4.1:32958).
May 14 01:37:59.504106 containerd[1483]: time="2025-05-14T01:37:59.503324508Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"f197c674dd1d6010f21bc7674a9a647820f5b6966da36f3b39e18847e0a3b0fa\" pid:5108 exited_at:{seconds:1747186679 nanos:499714693}"
May 14 01:37:59.593962 containerd[1483]: time="2025-05-14T01:37:59.593570996Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" id:\"24b6abe248f120504bedfff1c90687632fdf434438c7752c2875814eb06c0757\" pid:5126 exited_at:{seconds:1747186679 nanos:593031233}"
May 14 01:37:59.670142 sshd[5086]: Accepted publickey for core from 172.24.4.1 port 32958 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:37:59.672879 sshd-session[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:37:59.693595 systemd-logind[1457]: New session 19 of user core.
May 14 01:37:59.706052 systemd[1]: Started session-19.scope - Session 19 of User core.
May 14 01:38:00.498199 sshd[5141]: Connection closed by 172.24.4.1 port 32958
May 14 01:38:00.500242 sshd-session[5086]: pam_unix(sshd:session): session closed for user core
May 14 01:38:00.512639 systemd[1]: sshd@16-172.24.4.47:22-172.24.4.1:32958.service: Deactivated successfully.
May 14 01:38:00.520871 systemd[1]: session-19.scope: Deactivated successfully.
May 14 01:38:00.523938 systemd-logind[1457]: Session 19 logged out. Waiting for processes to exit.
May 14 01:38:00.528752 systemd-logind[1457]: Removed session 19.
May 14 01:38:01.428921 containerd[1483]: time="2025-05-14T01:38:01.428244193Z" level=warning msg="container event discarded" container=508f966fb1de863515b9a44e83d4910bad873ef75d77f3766dd257e9c352bfc1 type=CONTAINER_CREATED_EVENT
May 14 01:38:01.516779 containerd[1483]: time="2025-05-14T01:38:01.516629617Z" level=warning msg="container event discarded" container=508f966fb1de863515b9a44e83d4910bad873ef75d77f3766dd257e9c352bfc1 type=CONTAINER_STARTED_EVENT
May 14 01:38:02.183505 kubelet[2703]: E0514 01:38:02.183338 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:38:03.729036 containerd[1483]: time="2025-05-14T01:38:03.728855378Z" level=warning msg="container event discarded" container=508f966fb1de863515b9a44e83d4910bad873ef75d77f3766dd257e9c352bfc1 type=CONTAINER_STOPPED_EVENT
May 14 01:38:05.526812 systemd[1]: Started sshd@17-172.24.4.47:22-172.24.4.1:56504.service - OpenSSH per-connection server daemon (172.24.4.1:56504).
May 14 01:38:06.876805 sshd[5159]: Accepted publickey for core from 172.24.4.1 port 56504 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:38:06.880706 sshd-session[5159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:38:06.897642 systemd-logind[1457]: New session 20 of user core.
May 14 01:38:06.905403 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 01:38:07.184829 kubelet[2703]: E0514 01:38:07.184564 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:38:07.738903 sshd[5161]: Connection closed by 172.24.4.1 port 56504
May 14 01:38:07.740490 sshd-session[5159]: pam_unix(sshd:session): session closed for user core
May 14 01:38:07.750454 systemd[1]: sshd@17-172.24.4.47:22-172.24.4.1:56504.service: Deactivated successfully.
May 14 01:38:07.755599 systemd[1]: session-20.scope: Deactivated successfully.
May 14 01:38:07.758606 systemd-logind[1457]: Session 20 logged out. Waiting for processes to exit.
May 14 01:38:07.760153 systemd-logind[1457]: Removed session 20.
May 14 01:38:12.187005 kubelet[2703]: E0514 01:38:12.186890 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:38:12.765903 systemd[1]: Started sshd@18-172.24.4.47:22-172.24.4.1:56516.service - OpenSSH per-connection server daemon (172.24.4.1:56516).
May 14 01:38:13.753689 sshd[5175]: Accepted publickey for core from 172.24.4.1 port 56516 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:38:13.755517 sshd-session[5175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:38:13.767671 systemd-logind[1457]: New session 21 of user core.
May 14 01:38:13.771347 systemd[1]: Started session-21.scope - Session 21 of User core.
May 14 01:38:14.532341 sshd[5177]: Connection closed by 172.24.4.1 port 56516
May 14 01:38:14.531762 sshd-session[5175]: pam_unix(sshd:session): session closed for user core
May 14 01:38:14.546366 systemd[1]: sshd@18-172.24.4.47:22-172.24.4.1:56516.service: Deactivated successfully.
May 14 01:38:14.554309 systemd[1]: session-21.scope: Deactivated successfully.
May 14 01:38:14.562816 systemd-logind[1457]: Session 21 logged out. Waiting for processes to exit.
May 14 01:38:14.566629 systemd-logind[1457]: Removed session 21.
May 14 01:38:15.284863 containerd[1483]: time="2025-05-14T01:38:15.284587892Z" level=warning msg="container event discarded" container=78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46 type=CONTAINER_CREATED_EVENT
May 14 01:38:15.403339 containerd[1483]: time="2025-05-14T01:38:15.403177680Z" level=warning msg="container event discarded" container=78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46 type=CONTAINER_STARTED_EVENT
May 14 01:38:16.197556 containerd[1483]: time="2025-05-14T01:38:16.197413074Z" level=warning msg="container event discarded" container=b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d type=CONTAINER_CREATED_EVENT
May 14 01:38:16.197556 containerd[1483]: time="2025-05-14T01:38:16.197518358Z" level=warning msg="container event discarded" container=b50923d15edc3cfff168ebc4b572ee19117fe17d9882b25a82a72b4f9734614d type=CONTAINER_STARTED_EVENT
May 14 01:38:17.187937 kubelet[2703]: E0514 01:38:17.187821 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:38:17.567880 containerd[1483]: time="2025-05-14T01:38:17.567711612Z" level=warning msg="container event discarded" container=0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928 type=CONTAINER_CREATED_EVENT
May 14 01:38:17.567880 containerd[1483]: time="2025-05-14T01:38:17.567785435Z" level=warning msg="container event discarded" container=0c2e236e925c2c39314ccd5cea992e682b8117e3a431b650c04e7638ea1fe928 type=CONTAINER_STARTED_EVENT
May 14 01:38:19.556713 systemd[1]: Started sshd@19-172.24.4.47:22-172.24.4.1:45714.service - OpenSSH per-connection server daemon (172.24.4.1:45714).
May 14 01:38:20.710040 sshd[5191]: Accepted publickey for core from 172.24.4.1 port 45714 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:38:20.713858 sshd-session[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:38:20.727861 systemd-logind[1457]: New session 22 of user core.
May 14 01:38:20.737497 systemd[1]: Started session-22.scope - Session 22 of User core.
May 14 01:38:21.179895 containerd[1483]: time="2025-05-14T01:38:21.179750158Z" level=warning msg="container event discarded" container=6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613 type=CONTAINER_CREATED_EVENT
May 14 01:38:21.332316 containerd[1483]: time="2025-05-14T01:38:21.332149229Z" level=warning msg="container event discarded" container=6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613 type=CONTAINER_STARTED_EVENT
May 14 01:38:21.460126 sshd[5193]: Connection closed by 172.24.4.1 port 45714
May 14 01:38:21.459094 sshd-session[5191]: pam_unix(sshd:session): session closed for user core
May 14 01:38:21.466891 systemd-logind[1457]: Session 22 logged out. Waiting for processes to exit.
May 14 01:38:21.468300 systemd[1]: sshd@19-172.24.4.47:22-172.24.4.1:45714.service: Deactivated successfully.
May 14 01:38:21.472503 systemd[1]: session-22.scope: Deactivated successfully.
May 14 01:38:21.474601 systemd-logind[1457]: Removed session 22.
May 14 01:38:22.189490 kubelet[2703]: E0514 01:38:22.189051 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:38:23.536556 containerd[1483]: time="2025-05-14T01:38:23.536148901Z" level=warning msg="container event discarded" container=5d59847a7fb58822bcb7c53e84d4a5e355b41b07f6adba08e413e089a16a6179 type=CONTAINER_CREATED_EVENT
May 14 01:38:23.644543 containerd[1483]: time="2025-05-14T01:38:23.644356424Z" level=warning msg="container event discarded" container=5d59847a7fb58822bcb7c53e84d4a5e355b41b07f6adba08e413e089a16a6179 type=CONTAINER_STARTED_EVENT
May 14 01:38:26.508167 systemd[1]: Started sshd@20-172.24.4.47:22-172.24.4.1:47102.service - OpenSSH per-connection server daemon (172.24.4.1:47102).
May 14 01:38:26.943207 containerd[1483]: time="2025-05-14T01:38:26.942733709Z" level=warning msg="container event discarded" container=2afdc6f00bd49a8bb89bee319890f05198d6ddadcc5ac35401a1a3b23c744b86 type=CONTAINER_CREATED_EVENT
May 14 01:38:27.075378 containerd[1483]: time="2025-05-14T01:38:27.075188282Z" level=warning msg="container event discarded" container=2afdc6f00bd49a8bb89bee319890f05198d6ddadcc5ac35401a1a3b23c744b86 type=CONTAINER_STARTED_EVENT
May 14 01:38:27.189643 kubelet[2703]: E0514 01:38:27.189507 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:38:27.886575 sshd[5206]: Accepted publickey for core from 172.24.4.1 port 47102 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:38:27.890426 sshd-session[5206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:38:27.909224 systemd-logind[1457]: New session 23 of user core.
May 14 01:38:27.919461 systemd[1]: Started session-23.scope - Session 23 of User core.
May 14 01:38:28.763380 sshd[5208]: Connection closed by 172.24.4.1 port 47102
May 14 01:38:28.765798 sshd-session[5206]: pam_unix(sshd:session): session closed for user core
May 14 01:38:28.775196 systemd[1]: sshd@20-172.24.4.47:22-172.24.4.1:47102.service: Deactivated successfully.
May 14 01:38:28.781980 systemd[1]: session-23.scope: Deactivated successfully.
May 14 01:38:28.785257 systemd-logind[1457]: Session 23 logged out. Waiting for processes to exit.
May 14 01:38:28.788500 systemd-logind[1457]: Removed session 23.
May 14 01:38:29.495083 containerd[1483]: time="2025-05-14T01:38:29.494978318Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"2df56d35d6609ee30c99188bc363cdc62c33309b3141ba9c1a16fbe0763a3781\" pid:5246 exited_at:{seconds:1747186709 nanos:494151395}"
May 14 01:38:29.539130 containerd[1483]: time="2025-05-14T01:38:29.538874317Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" id:\"6b97ce6fbc9aade027e278f5572847649d3dff50e9ebd9ef836389302bb02319\" pid:5241 exited_at:{seconds:1747186709 nanos:538103714}"
May 14 01:38:32.190320 kubelet[2703]: E0514 01:38:32.190159 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:38:33.789682 systemd[1]: Started sshd@21-172.24.4.47:22-172.24.4.1:35612.service - OpenSSH per-connection server daemon (172.24.4.1:35612).
May 14 01:38:34.818586 sshd[5266]: Accepted publickey for core from 172.24.4.1 port 35612 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:38:34.823282 sshd-session[5266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:38:34.842704 systemd-logind[1457]: New session 24 of user core.
May 14 01:38:34.852503 systemd[1]: Started session-24.scope - Session 24 of User core.
May 14 01:38:35.559124 sshd[5270]: Connection closed by 172.24.4.1 port 35612
May 14 01:38:35.557730 sshd-session[5266]: pam_unix(sshd:session): session closed for user core
May 14 01:38:35.562201 systemd[1]: sshd@21-172.24.4.47:22-172.24.4.1:35612.service: Deactivated successfully.
May 14 01:38:35.568301 systemd[1]: session-24.scope: Deactivated successfully.
May 14 01:38:35.571216 systemd-logind[1457]: Session 24 logged out. Waiting for processes to exit.
May 14 01:38:35.573619 systemd-logind[1457]: Removed session 24.
May 14 01:38:37.191115 kubelet[2703]: E0514 01:38:37.190945 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:38:40.579538 systemd[1]: Started sshd@22-172.24.4.47:22-172.24.4.1:35622.service - OpenSSH per-connection server daemon (172.24.4.1:35622).
May 14 01:38:41.886433 sshd[5291]: Accepted publickey for core from 172.24.4.1 port 35622 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:38:41.890321 sshd-session[5291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:38:41.905568 systemd-logind[1457]: New session 25 of user core.
May 14 01:38:41.917400 systemd[1]: Started session-25.scope - Session 25 of User core.
May 14 01:38:42.194342 kubelet[2703]: E0514 01:38:42.192017 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:38:42.656003 sshd[5295]: Connection closed by 172.24.4.1 port 35622
May 14 01:38:42.657816 sshd-session[5291]: pam_unix(sshd:session): session closed for user core
May 14 01:38:42.666460 systemd[1]: sshd@22-172.24.4.47:22-172.24.4.1:35622.service: Deactivated successfully.
May 14 01:38:42.672103 systemd[1]: session-25.scope: Deactivated successfully.
May 14 01:38:42.675611 systemd-logind[1457]: Session 25 logged out. Waiting for processes to exit.
May 14 01:38:42.678693 systemd-logind[1457]: Removed session 25.
May 14 01:38:47.193209 kubelet[2703]: E0514 01:38:47.193122 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:38:47.681831 systemd[1]: Started sshd@23-172.24.4.47:22-172.24.4.1:45384.service - OpenSSH per-connection server daemon (172.24.4.1:45384).
May 14 01:38:48.801482 sshd[5309]: Accepted publickey for core from 172.24.4.1 port 45384 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:38:48.805979 sshd-session[5309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:38:48.821283 systemd-logind[1457]: New session 26 of user core.
May 14 01:38:48.828404 systemd[1]: Started session-26.scope - Session 26 of User core.
May 14 01:38:49.542579 sshd[5311]: Connection closed by 172.24.4.1 port 45384
May 14 01:38:49.544548 sshd-session[5309]: pam_unix(sshd:session): session closed for user core
May 14 01:38:49.552050 systemd[1]: sshd@23-172.24.4.47:22-172.24.4.1:45384.service: Deactivated successfully.
May 14 01:38:49.560516 systemd[1]: session-26.scope: Deactivated successfully.
May 14 01:38:49.566229 systemd-logind[1457]: Session 26 logged out. Waiting for processes to exit.
May 14 01:38:49.573933 systemd-logind[1457]: Removed session 26.
May 14 01:38:52.195123 kubelet[2703]: E0514 01:38:52.193685 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:38:54.570680 systemd[1]: Started sshd@24-172.24.4.47:22-172.24.4.1:45032.service - OpenSSH per-connection server daemon (172.24.4.1:45032).
May 14 01:38:55.781330 sshd[5324]: Accepted publickey for core from 172.24.4.1 port 45032 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:38:55.782670 sshd-session[5324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:38:55.795144 systemd-logind[1457]: New session 27 of user core.
May 14 01:38:55.799613 systemd[1]: Started session-27.scope - Session 27 of User core.
May 14 01:38:56.205639 containerd[1483]: time="2025-05-14T01:38:56.205258997Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"799784ffedd15b1e39782b6828fad663a040dfbbb5deb36c00b81a3c07cc1e01\" pid:5340 exited_at:{seconds:1747186736 nanos:204742453}"
May 14 01:38:56.569052 sshd[5326]: Connection closed by 172.24.4.1 port 45032
May 14 01:38:56.570602 sshd-session[5324]: pam_unix(sshd:session): session closed for user core
May 14 01:38:56.578796 systemd[1]: sshd@24-172.24.4.47:22-172.24.4.1:45032.service: Deactivated successfully.
May 14 01:38:56.587190 systemd[1]: session-27.scope: Deactivated successfully.
May 14 01:38:56.592123 systemd-logind[1457]: Session 27 logged out. Waiting for processes to exit.
May 14 01:38:56.595026 systemd-logind[1457]: Removed session 27.
May 14 01:38:57.194672 kubelet[2703]: E0514 01:38:57.194546 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:38:59.466504 containerd[1483]: time="2025-05-14T01:38:59.466449751Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"2fe796eb71a94fe74fac6f6be86e8c51b23734a1d161628818358f645605c198\" pid:5374 exited_at:{seconds:1747186739 nanos:465952966}"
May 14 01:38:59.529925 containerd[1483]: time="2025-05-14T01:38:59.529684563Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" id:\"46b94b878a56186594bc2dfd0f92538046c197497698b7108f67d30741c3d12a\" pid:5392 exited_at:{seconds:1747186739 nanos:529251361}"
May 14 01:39:01.595361 systemd[1]: Started sshd@25-172.24.4.47:22-172.24.4.1:45036.service - OpenSSH per-connection server daemon (172.24.4.1:45036).
May 14 01:39:02.195431 kubelet[2703]: E0514 01:39:02.195345 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:39:02.972155 sshd[5408]: Accepted publickey for core from 172.24.4.1 port 45036 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:39:02.975732 sshd-session[5408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:39:02.989521 systemd-logind[1457]: New session 28 of user core.
May 14 01:39:02.997530 systemd[1]: Started session-28.scope - Session 28 of User core.
May 14 01:39:03.673503 sshd[5410]: Connection closed by 172.24.4.1 port 45036
May 14 01:39:03.674015 sshd-session[5408]: pam_unix(sshd:session): session closed for user core
May 14 01:39:03.683381 systemd[1]: sshd@25-172.24.4.47:22-172.24.4.1:45036.service: Deactivated successfully.
May 14 01:39:03.688052 systemd[1]: session-28.scope: Deactivated successfully.
May 14 01:39:03.690662 systemd-logind[1457]: Session 28 logged out. Waiting for processes to exit.
May 14 01:39:03.693703 systemd-logind[1457]: Removed session 28.
May 14 01:39:07.196549 kubelet[2703]: E0514 01:39:07.196181 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:39:08.703404 systemd[1]: Started sshd@26-172.24.4.47:22-172.24.4.1:54088.service - OpenSSH per-connection server daemon (172.24.4.1:54088).
May 14 01:39:09.890524 sshd[5423]: Accepted publickey for core from 172.24.4.1 port 54088 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:39:09.894662 sshd-session[5423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:39:09.912226 systemd-logind[1457]: New session 29 of user core.
May 14 01:39:09.924478 systemd[1]: Started session-29.scope - Session 29 of User core.
May 14 01:39:10.508125 sshd[5425]: Connection closed by 172.24.4.1 port 54088
May 14 01:39:10.507129 sshd-session[5423]: pam_unix(sshd:session): session closed for user core
May 14 01:39:10.514606 systemd[1]: sshd@26-172.24.4.47:22-172.24.4.1:54088.service: Deactivated successfully.
May 14 01:39:10.521739 systemd[1]: session-29.scope: Deactivated successfully.
May 14 01:39:10.526862 systemd-logind[1457]: Session 29 logged out. Waiting for processes to exit.
May 14 01:39:10.530119 systemd-logind[1457]: Removed session 29.
May 14 01:39:12.197786 kubelet[2703]: E0514 01:39:12.197626 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:39:15.536174 systemd[1]: Started sshd@27-172.24.4.47:22-172.24.4.1:34446.service - OpenSSH per-connection server daemon (172.24.4.1:34446).
May 14 01:39:16.864490 sshd[5441]: Accepted publickey for core from 172.24.4.1 port 34446 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:39:16.868226 sshd-session[5441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:39:16.885189 systemd-logind[1457]: New session 30 of user core.
May 14 01:39:16.891442 systemd[1]: Started session-30.scope - Session 30 of User core.
May 14 01:39:17.198858 kubelet[2703]: E0514 01:39:17.198242 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:39:17.482195 sshd[5443]: Connection closed by 172.24.4.1 port 34446
May 14 01:39:17.483718 sshd-session[5441]: pam_unix(sshd:session): session closed for user core
May 14 01:39:17.492939 systemd[1]: sshd@27-172.24.4.47:22-172.24.4.1:34446.service: Deactivated successfully.
May 14 01:39:17.500020 systemd[1]: session-30.scope: Deactivated successfully.
May 14 01:39:17.502308 systemd-logind[1457]: Session 30 logged out. Waiting for processes to exit.
May 14 01:39:17.505014 systemd-logind[1457]: Removed session 30.
May 14 01:39:22.199211 kubelet[2703]: E0514 01:39:22.198569 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:39:22.511618 systemd[1]: Started sshd@28-172.24.4.47:22-172.24.4.1:34458.service - OpenSSH per-connection server daemon (172.24.4.1:34458).
May 14 01:39:23.641596 sshd[5456]: Accepted publickey for core from 172.24.4.1 port 34458 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:39:23.647429 sshd-session[5456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:39:23.659301 systemd-logind[1457]: New session 31 of user core.
May 14 01:39:23.665278 systemd[1]: Started session-31.scope - Session 31 of User core.
May 14 01:39:24.568705 sshd[5458]: Connection closed by 172.24.4.1 port 34458
May 14 01:39:24.570577 sshd-session[5456]: pam_unix(sshd:session): session closed for user core
May 14 01:39:24.579282 systemd[1]: sshd@28-172.24.4.47:22-172.24.4.1:34458.service: Deactivated successfully.
May 14 01:39:24.586273 systemd[1]: session-31.scope: Deactivated successfully.
May 14 01:39:24.591813 systemd-logind[1457]: Session 31 logged out. Waiting for processes to exit.
May 14 01:39:24.594593 systemd-logind[1457]: Removed session 31.
May 14 01:39:27.199459 kubelet[2703]: E0514 01:39:27.199322 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:39:29.490032 update_engine[1458]: I20250514 01:39:29.489269 1458 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 14 01:39:29.490032 update_engine[1458]: I20250514 01:39:29.489847 1458 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 14 01:39:29.493304 update_engine[1458]: I20250514 01:39:29.490866 1458 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 14 01:39:29.494256 update_engine[1458]: I20250514 01:39:29.493029 1458 omaha_request_params.cc:62] Current group set to alpha
May 14 01:39:29.494669 update_engine[1458]: I20250514 01:39:29.494614 1458 update_attempter.cc:499] Already updated boot flags. Skipping.
May 14 01:39:29.498088 update_engine[1458]: I20250514 01:39:29.495160 1458 update_attempter.cc:643] Scheduling an action processor start.
May 14 01:39:29.498088 update_engine[1458]: I20250514 01:39:29.495215 1458 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 14 01:39:29.498088 update_engine[1458]: I20250514 01:39:29.495393 1458 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 14 01:39:29.498088 update_engine[1458]: I20250514 01:39:29.495504 1458 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 14 01:39:29.498088 update_engine[1458]: I20250514 01:39:29.495522 1458 omaha_request_action.cc:272] Request:
May 14 01:39:29.498088 update_engine[1458]:
May 14 01:39:29.498088 update_engine[1458]:
May 14 01:39:29.498088 update_engine[1458]:
May 14 01:39:29.498088 update_engine[1458]:
May 14 01:39:29.498088 update_engine[1458]:
May 14 01:39:29.498088 update_engine[1458]:
May 14 01:39:29.498088 update_engine[1458]:
May 14 01:39:29.498088 update_engine[1458]:
May 14 01:39:29.498088 update_engine[1458]: I20250514 01:39:29.495534 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 01:39:29.498973 locksmithd[1487]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 14 01:39:29.501505 containerd[1483]: time="2025-05-14T01:39:29.501135033Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"dde1568be24b5d7287c5b631aa97eb69af881268756568a23c5aa7c8b8e1b60b\" pid:5482 exited_at:{seconds:1747186769 nanos:494124063}"
May 14 01:39:29.504342 update_engine[1458]: I20250514 01:39:29.504220 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 01:39:29.506122 update_engine[1458]: I20250514 01:39:29.504851 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 01:39:29.511960 update_engine[1458]: E20250514 01:39:29.511907 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 01:39:29.512098 update_engine[1458]: I20250514 01:39:29.512042 1458 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 14 01:39:29.559835 containerd[1483]: time="2025-05-14T01:39:29.559787560Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" id:\"1602831d4125cea287f8f7f78f794d2b60f058e312f9e4d24d6cd38f9a8e6aa5\" pid:5500 exited_at:{seconds:1747186769 nanos:559424204}"
May 14 01:39:29.582434 systemd[1]: Started sshd@29-172.24.4.47:22-172.24.4.1:60720.service - OpenSSH per-connection server daemon (172.24.4.1:60720).
May 14 01:39:29.941031 kubelet[2703]: E0514 01:39:29.940764 2703 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 01:39:29.941031 kubelet[2703]: E0514 01:39:29.940946 2703 kubelet.go:2886] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 01:39:30.954455 sshd[5516]: Accepted publickey for core from 172.24.4.1 port 60720 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:39:30.956898 sshd-session[5516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:39:30.968498 systemd-logind[1457]: New session 32 of user core.
May 14 01:39:30.973227 systemd[1]: Started session-32.scope - Session 32 of User core.
May 14 01:39:31.972102 sshd[5518]: Connection closed by 172.24.4.1 port 60720
May 14 01:39:31.973466 sshd-session[5516]: pam_unix(sshd:session): session closed for user core
May 14 01:39:31.985241 systemd[1]: sshd@29-172.24.4.47:22-172.24.4.1:60720.service: Deactivated successfully.
May 14 01:39:31.991015 systemd[1]: session-32.scope: Deactivated successfully.
May 14 01:39:31.993272 systemd-logind[1457]: Session 32 logged out. Waiting for processes to exit.
May 14 01:39:31.996136 systemd-logind[1457]: Removed session 32.
May 14 01:39:32.201098 kubelet[2703]: E0514 01:39:32.200488 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:39:36.998800 systemd[1]: Started sshd@30-172.24.4.47:22-172.24.4.1:42924.service - OpenSSH per-connection server daemon (172.24.4.1:42924).
May 14 01:39:37.201556 kubelet[2703]: E0514 01:39:37.201454 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:39:38.336036 sshd[5541]: Accepted publickey for core from 172.24.4.1 port 42924 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:39:38.341232 sshd-session[5541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:39:38.367501 systemd-logind[1457]: New session 33 of user core.
May 14 01:39:38.377324 systemd[1]: Started session-33.scope - Session 33 of User core.
May 14 01:39:39.134193 sshd[5543]: Connection closed by 172.24.4.1 port 42924
May 14 01:39:39.135767 sshd-session[5541]: pam_unix(sshd:session): session closed for user core
May 14 01:39:39.145804 systemd[1]: sshd@30-172.24.4.47:22-172.24.4.1:42924.service: Deactivated successfully.
May 14 01:39:39.152119 systemd[1]: session-33.scope: Deactivated successfully.
May 14 01:39:39.154199 systemd-logind[1457]: Session 33 logged out. Waiting for processes to exit.
May 14 01:39:39.156949 systemd-logind[1457]: Removed session 33.
May 14 01:39:39.484459 update_engine[1458]: I20250514 01:39:39.483553 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 01:39:39.484459 update_engine[1458]: I20250514 01:39:39.484316 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 01:39:39.485546 update_engine[1458]: I20250514 01:39:39.485118 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 01:39:39.490177 update_engine[1458]: E20250514 01:39:39.490051 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 01:39:39.490368 update_engine[1458]: I20250514 01:39:39.490309 1458 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 14 01:39:42.203025 kubelet[2703]: E0514 01:39:42.202484 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:39:44.163645 systemd[1]: Started sshd@31-172.24.4.47:22-172.24.4.1:43864.service - OpenSSH per-connection server daemon (172.24.4.1:43864).
May 14 01:39:45.724150 sshd[5562]: Accepted publickey for core from 172.24.4.1 port 43864 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:39:45.728682 sshd-session[5562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:39:45.742992 systemd-logind[1457]: New session 34 of user core.
May 14 01:39:45.751502 systemd[1]: Started session-34.scope - Session 34 of User core.
May 14 01:39:46.467229 sshd[5564]: Connection closed by 172.24.4.1 port 43864
May 14 01:39:46.468796 sshd-session[5562]: pam_unix(sshd:session): session closed for user core
May 14 01:39:46.476197 systemd[1]: sshd@31-172.24.4.47:22-172.24.4.1:43864.service: Deactivated successfully.
May 14 01:39:46.484556 systemd[1]: session-34.scope: Deactivated successfully.
May 14 01:39:46.489270 systemd-logind[1457]: Session 34 logged out. Waiting for processes to exit.
May 14 01:39:46.492129 systemd-logind[1457]: Removed session 34.
May 14 01:39:47.202953 kubelet[2703]: E0514 01:39:47.202810 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:39:49.484213 update_engine[1458]: I20250514 01:39:49.483888 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 01:39:49.485145 update_engine[1458]: I20250514 01:39:49.484562 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 01:39:49.485520 update_engine[1458]: I20250514 01:39:49.485283 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 01:39:49.490383 update_engine[1458]: E20250514 01:39:49.490281 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 01:39:49.490597 update_engine[1458]: I20250514 01:39:49.490522 1458 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 14 01:39:51.488709 systemd[1]: Started sshd@32-172.24.4.47:22-172.24.4.1:43878.service - OpenSSH per-connection server daemon (172.24.4.1:43878).
May 14 01:39:52.203854 kubelet[2703]: E0514 01:39:52.203801 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:39:52.958632 sshd[5576]: Accepted publickey for core from 172.24.4.1 port 43878 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:39:52.962779 sshd-session[5576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:39:52.975659 systemd-logind[1457]: New session 35 of user core.
May 14 01:39:52.982424 systemd[1]: Started session-35.scope - Session 35 of User core.
May 14 01:39:53.846375 sshd[5578]: Connection closed by 172.24.4.1 port 43878
May 14 01:39:53.847188 sshd-session[5576]: pam_unix(sshd:session): session closed for user core
May 14 01:39:53.857305 systemd-logind[1457]: Session 35 logged out. Waiting for processes to exit.
May 14 01:39:53.858251 systemd[1]: sshd@32-172.24.4.47:22-172.24.4.1:43878.service: Deactivated successfully.
May 14 01:39:53.862471 systemd[1]: session-35.scope: Deactivated successfully.
May 14 01:39:53.865519 systemd-logind[1457]: Removed session 35.
May 14 01:39:56.193325 containerd[1483]: time="2025-05-14T01:39:56.193262141Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"90f6adde374a551d4a98b668c5739e361cc4eb4add2ae26017b485dd2f5e73c7\" pid:5602 exited_at:{seconds:1747186796 nanos:192449718}"
May 14 01:39:57.205150 kubelet[2703]: E0514 01:39:57.205013 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:39:58.870650 systemd[1]: Started sshd@33-172.24.4.47:22-172.24.4.1:49918.service - OpenSSH per-connection server daemon (172.24.4.1:49918).
May 14 01:39:59.486446 containerd[1483]: time="2025-05-14T01:39:59.486364527Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"7d09965ac3d91575ff22647c2ff0d100977df56aceb0854dfdddca672f915f66\" pid:5627 exited_at:{seconds:1747186799 nanos:485988225}"
May 14 01:39:59.489151 update_engine[1458]: I20250514 01:39:59.488132 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 01:39:59.489151 update_engine[1458]: I20250514 01:39:59.488413 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 01:39:59.489151 update_engine[1458]: I20250514 01:39:59.488732 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 01:39:59.495165 update_engine[1458]: E20250514 01:39:59.494134 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 01:39:59.495165 update_engine[1458]: I20250514 01:39:59.494211 1458 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 14 01:39:59.495165 update_engine[1458]: I20250514 01:39:59.494221 1458 omaha_request_action.cc:617] Omaha request response:
May 14 01:39:59.495165 update_engine[1458]: E20250514 01:39:59.494324 1458 omaha_request_action.cc:636] Omaha request network transfer failed.
May 14 01:39:59.495165 update_engine[1458]: I20250514 01:39:59.494375 1458 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 14 01:39:59.495165 update_engine[1458]: I20250514 01:39:59.494385 1458 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 14 01:39:59.495165 update_engine[1458]: I20250514 01:39:59.494390 1458 update_attempter.cc:306] Processing Done.
May 14 01:39:59.495165 update_engine[1458]: E20250514 01:39:59.494441 1458 update_attempter.cc:619] Update failed.
May 14 01:39:59.495165 update_engine[1458]: I20250514 01:39:59.494464 1458 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 14 01:39:59.495165 update_engine[1458]: I20250514 01:39:59.494472 1458 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 14 01:39:59.495165 update_engine[1458]: I20250514 01:39:59.494481 1458 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 14 01:39:59.495165 update_engine[1458]: I20250514 01:39:59.494578 1458 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 14 01:39:59.495165 update_engine[1458]: I20250514 01:39:59.494606 1458 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 14 01:39:59.495165 update_engine[1458]: I20250514 01:39:59.494616 1458 omaha_request_action.cc:272] Request:
May 14 01:39:59.495165 update_engine[1458]:
May 14 01:39:59.495165 update_engine[1458]:
May 14 01:39:59.495697 update_engine[1458]:
May 14 01:39:59.495697 update_engine[1458]:
May 14 01:39:59.495697 update_engine[1458]:
May 14 01:39:59.495697 update_engine[1458]:
May 14 01:39:59.495697 update_engine[1458]: I20250514 01:39:59.494623 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 01:39:59.495697 update_engine[1458]: I20250514 01:39:59.494807 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 01:39:59.496119 update_engine[1458]: I20250514 01:39:59.496045 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 01:39:59.496293 locksmithd[1487]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 14 01:39:59.502183 update_engine[1458]: E20250514 01:39:59.501282 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 01:39:59.502183 update_engine[1458]: I20250514 01:39:59.501358 1458 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 14 01:39:59.502183 update_engine[1458]: I20250514 01:39:59.501370 1458 omaha_request_action.cc:617] Omaha request response:
May 14 01:39:59.502183 update_engine[1458]: I20250514 01:39:59.501377 1458 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 14 01:39:59.502183 update_engine[1458]: I20250514 01:39:59.501384 1458 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 14 01:39:59.502183 update_engine[1458]: I20250514 01:39:59.501389 1458 update_attempter.cc:306] Processing Done.
May 14 01:39:59.502183 update_engine[1458]: I20250514 01:39:59.501395 1458 update_attempter.cc:310] Error event sent.
May 14 01:39:59.502183 update_engine[1458]: I20250514 01:39:59.501415 1458 update_check_scheduler.cc:74] Next update check in 43m32s May 14 01:39:59.503343 locksmithd[1487]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 14 01:39:59.530663 containerd[1483]: time="2025-05-14T01:39:59.530596231Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" id:\"a0829da846cb5f746de3c178d63285f8dc0c4f0e644cb688480e1085adbfcb2e\" pid:5646 exited_at:{seconds:1747186799 nanos:530197187}" May 14 01:39:59.930396 sshd[5612]: Accepted publickey for core from 172.24.4.1 port 49918 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI May 14 01:39:59.933686 sshd-session[5612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 01:39:59.947794 systemd-logind[1457]: New session 36 of user core. May 14 01:39:59.953644 systemd[1]: Started session-36.scope - Session 36 of User core. May 14 01:40:00.800374 sshd[5660]: Connection closed by 172.24.4.1 port 49918 May 14 01:40:00.801937 sshd-session[5612]: pam_unix(sshd:session): session closed for user core May 14 01:40:00.811607 systemd[1]: sshd@33-172.24.4.47:22-172.24.4.1:49918.service: Deactivated successfully. May 14 01:40:00.817486 systemd[1]: session-36.scope: Deactivated successfully. May 14 01:40:00.819984 systemd-logind[1457]: Session 36 logged out. Waiting for processes to exit. May 14 01:40:00.824494 systemd-logind[1457]: Removed session 36. May 14 01:40:02.205688 kubelet[2703]: E0514 01:40:02.205614 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:40:05.828019 systemd[1]: Started sshd@34-172.24.4.47:22-172.24.4.1:42268.service - OpenSSH per-connection server daemon (172.24.4.1:42268). May 14 01:40:06.977701 sshd[5673]: Accepted publickey for core from 172.24.4.1 port 42268 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI May 14 01:40:06.981472 sshd-session[5673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 01:40:06.994212 systemd-logind[1457]: New session 37 of user core. May 14 01:40:07.011448 systemd[1]: Started session-37.scope - Session 37 of User core. May 14 01:40:07.206523 kubelet[2703]: E0514 01:40:07.206412 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:40:07.758784 sshd[5675]: Connection closed by 172.24.4.1 port 42268 May 14 01:40:07.760563 sshd-session[5673]: pam_unix(sshd:session): session closed for user core May 14 01:40:07.768238 systemd-logind[1457]: Session 37 logged out. Waiting for processes to exit. May 14 01:40:07.768689 systemd[1]: sshd@34-172.24.4.47:22-172.24.4.1:42268.service: Deactivated successfully. May 14 01:40:07.775496 systemd[1]: session-37.scope: Deactivated successfully. May 14 01:40:07.781161 systemd-logind[1457]: Removed session 37. May 14 01:40:12.207030 kubelet[2703]: E0514 01:40:12.206931 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" May 14 01:40:12.782051 systemd[1]: Started sshd@35-172.24.4.47:22-172.24.4.1:42272.service - OpenSSH per-connection server daemon (172.24.4.1:42272). 
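Annotation: the update_engine failure sequence above looks like updates being administratively disabled rather than a network fault. If SERVER=disabled is set in /etc/flatcar/update.conf (an assumption, not visible in this log), update_engine posts its Omaha request to the literal host "disabled", DNS resolution fails by design ("Could not resolve host: disabled"), error code 2000 is converted to kActionCodeOmahaErrorInHTTPResponse (37), and the scheduler simply retries later ("Next update check in 43m32s"). A minimal sketch for summarizing these records from a saved journal dump; the journal.log path is hypothetical:

import re
from collections import Counter

# Hypothetical file holding the journal text above.
LOG_PATH = "journal.log"

# Matches e.g. "update_engine[1458]: E20250514 01:39:59.494134 1458 libcurl_http_fetcher.cc:266] ..."
RECORD = re.compile(r"update_engine\[\d+\]: ([EI])\d{8} \S+ \d+ ([\w.]+:\d+)\]")

severity, sources = Counter(), Counter()
with open(LOG_PATH) as fh:
    for line in fh:
        m = RECORD.search(line)
        if m:
            severity[m.group(1)] += 1  # E = error, I = info
            sources[m.group(2)] += 1   # file:line that emitted the record

print("update_engine records by severity:", dict(severity))
print("noisiest source locations:", sources.most_common(3))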
May 14 01:40:13.866386 sshd[5689]: Accepted publickey for core from 172.24.4.1 port 42272 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:40:13.869563 sshd-session[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:40:13.883647 systemd-logind[1457]: New session 38 of user core.
May 14 01:40:13.891390 systemd[1]: Started session-38.scope - Session 38 of User core.
May 14 01:40:14.794629 sshd[5691]: Connection closed by 172.24.4.1 port 42272
May 14 01:40:14.795978 sshd-session[5689]: pam_unix(sshd:session): session closed for user core
May 14 01:40:14.803810 systemd-logind[1457]: Session 38 logged out. Waiting for processes to exit.
May 14 01:40:14.805259 systemd[1]: sshd@35-172.24.4.47:22-172.24.4.1:42272.service: Deactivated successfully.
May 14 01:40:14.810989 systemd[1]: session-38.scope: Deactivated successfully.
May 14 01:40:14.816954 systemd-logind[1457]: Removed session 38.
May 14 01:40:17.207515 kubelet[2703]: E0514 01:40:17.207352 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:40:19.817177 systemd[1]: Started sshd@36-172.24.4.47:22-172.24.4.1:39000.service - OpenSSH per-connection server daemon (172.24.4.1:39000).
May 14 01:40:20.837278 sshd[5704]: Accepted publickey for core from 172.24.4.1 port 39000 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:40:20.840369 sshd-session[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:40:20.852782 systemd-logind[1457]: New session 39 of user core.
May 14 01:40:20.861446 systemd[1]: Started session-39.scope - Session 39 of User core.
May 14 01:40:21.637369 sshd[5706]: Connection closed by 172.24.4.1 port 39000
May 14 01:40:21.638022 sshd-session[5704]: pam_unix(sshd:session): session closed for user core
May 14 01:40:21.647695 systemd[1]: sshd@36-172.24.4.47:22-172.24.4.1:39000.service: Deactivated successfully.
May 14 01:40:21.652879 systemd[1]: session-39.scope: Deactivated successfully.
May 14 01:40:21.654701 systemd-logind[1457]: Session 39 logged out. Waiting for processes to exit.
May 14 01:40:21.657220 systemd-logind[1457]: Removed session 39.
May 14 01:40:22.208271 kubelet[2703]: E0514 01:40:22.208167 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:40:26.650957 systemd[1]: Started sshd@37-172.24.4.47:22-172.24.4.1:44718.service - OpenSSH per-connection server daemon (172.24.4.1:44718).
May 14 01:40:27.209289 kubelet[2703]: E0514 01:40:27.209117 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:40:27.890161 sshd[5718]: Accepted publickey for core from 172.24.4.1 port 44718 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:40:27.895034 sshd-session[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:40:27.909802 systemd-logind[1457]: New session 40 of user core.
May 14 01:40:27.919424 systemd[1]: Started session-40.scope - Session 40 of User core.
May 14 01:40:28.663147 sshd[5720]: Connection closed by 172.24.4.1 port 44718
May 14 01:40:28.664879 sshd-session[5718]: pam_unix(sshd:session): session closed for user core
May 14 01:40:28.678047 systemd[1]: sshd@37-172.24.4.47:22-172.24.4.1:44718.service: Deactivated successfully.
May 14 01:40:28.691771 systemd[1]: session-40.scope: Deactivated successfully.
May 14 01:40:28.696191 systemd-logind[1457]: Session 40 logged out. Waiting for processes to exit.
May 14 01:40:28.701658 systemd-logind[1457]: Removed session 40.
May 14 01:40:29.487229 containerd[1483]: time="2025-05-14T01:40:29.487174481Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"9cf8787f3bd98e575dc06ab5228e62bc60ad765157d75f929a8fdfeaa001c624\" pid:5744 exited_at:{seconds:1747186829 nanos:486128497}"
May 14 01:40:29.528883 containerd[1483]: time="2025-05-14T01:40:29.528820492Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" id:\"d6a715a5b5ed528cc49a1c7580dc727710244bc23ec18275cfdf8ce7f1731640\" pid:5765 exited_at:{seconds:1747186829 nanos:528162341}"
May 14 01:40:32.210052 kubelet[2703]: E0514 01:40:32.209623 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:40:33.682818 systemd[1]: Started sshd@38-172.24.4.47:22-172.24.4.1:43698.service - OpenSSH per-connection server daemon (172.24.4.1:43698).
May 14 01:40:34.887744 sshd[5781]: Accepted publickey for core from 172.24.4.1 port 43698 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:40:34.891419 sshd-session[5781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:40:34.906786 systemd-logind[1457]: New session 41 of user core.
May 14 01:40:34.910150 systemd[1]: Started session-41.scope - Session 41 of User core.
May 14 01:40:35.786126 sshd[5785]: Connection closed by 172.24.4.1 port 43698
May 14 01:40:35.785424 sshd-session[5781]: pam_unix(sshd:session): session closed for user core
May 14 01:40:35.792467 systemd[1]: sshd@38-172.24.4.47:22-172.24.4.1:43698.service: Deactivated successfully.
May 14 01:40:35.798424 systemd[1]: session-41.scope: Deactivated successfully.
May 14 01:40:35.801985 systemd-logind[1457]: Session 41 logged out. Waiting for processes to exit.
May 14 01:40:35.804980 systemd-logind[1457]: Removed session 41.
May 14 01:40:37.210470 kubelet[2703]: E0514 01:40:37.210347 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:40:40.823322 systemd[1]: Started sshd@39-172.24.4.47:22-172.24.4.1:43702.service - OpenSSH per-connection server daemon (172.24.4.1:43702).
May 14 01:40:41.962994 sshd[5797]: Accepted publickey for core from 172.24.4.1 port 43702 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:40:41.966573 sshd-session[5797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:40:41.980901 systemd-logind[1457]: New session 42 of user core.
May 14 01:40:41.991514 systemd[1]: Started session-42.scope - Session 42 of User core.
May 14 01:40:42.210983 kubelet[2703]: E0514 01:40:42.210857 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:40:42.804498 sshd[5801]: Connection closed by 172.24.4.1 port 43702
May 14 01:40:42.804651 sshd-session[5797]: pam_unix(sshd:session): session closed for user core
May 14 01:40:42.812414 systemd-logind[1457]: Session 42 logged out. Waiting for processes to exit.
May 14 01:40:42.813594 systemd[1]: sshd@39-172.24.4.47:22-172.24.4.1:43702.service: Deactivated successfully.
May 14 01:40:42.819479 systemd[1]: session-42.scope: Deactivated successfully.
May 14 01:40:42.825043 systemd-logind[1457]: Removed session 42.
May 14 01:40:47.211377 kubelet[2703]: E0514 01:40:47.211292 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:40:47.828835 systemd[1]: Started sshd@40-172.24.4.47:22-172.24.4.1:57162.service - OpenSSH per-connection server daemon (172.24.4.1:57162).
May 14 01:40:48.989280 sshd[5814]: Accepted publickey for core from 172.24.4.1 port 57162 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:40:48.993791 sshd-session[5814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:40:49.009643 systemd-logind[1457]: New session 43 of user core.
May 14 01:40:49.019509 systemd[1]: Started session-43.scope - Session 43 of User core.
May 14 01:40:49.870342 sshd[5817]: Connection closed by 172.24.4.1 port 57162
May 14 01:40:49.872303 sshd-session[5814]: pam_unix(sshd:session): session closed for user core
May 14 01:40:49.879990 systemd[1]: sshd@40-172.24.4.47:22-172.24.4.1:57162.service: Deactivated successfully.
May 14 01:40:49.885135 systemd[1]: session-43.scope: Deactivated successfully.
May 14 01:40:49.889705 systemd-logind[1457]: Session 43 logged out. Waiting for processes to exit.
May 14 01:40:49.892360 systemd-logind[1457]: Removed session 43.
May 14 01:40:52.211823 kubelet[2703]: E0514 01:40:52.211683 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:40:54.901660 systemd[1]: Started sshd@41-172.24.4.47:22-172.24.4.1:51740.service - OpenSSH per-connection server daemon (172.24.4.1:51740).
May 14 01:40:56.109543 sshd[5831]: Accepted publickey for core from 172.24.4.1 port 51740 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:40:56.113644 sshd-session[5831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:40:56.134102 systemd-logind[1457]: New session 44 of user core.
May 14 01:40:56.141496 systemd[1]: Started session-44.scope - Session 44 of User core.
May 14 01:40:56.222159 containerd[1483]: time="2025-05-14T01:40:56.221999064Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"9626d433cb89b12b7d4b6df48f9ddb657a8141208ebbcea02df6dc918848bb85\" pid:5849 exited_at:{seconds:1747186856 nanos:216748186}"
May 14 01:40:56.840484 sshd[5842]: Connection closed by 172.24.4.1 port 51740
May 14 01:40:56.841491 sshd-session[5831]: pam_unix(sshd:session): session closed for user core
May 14 01:40:56.846366 systemd[1]: sshd@41-172.24.4.47:22-172.24.4.1:51740.service: Deactivated successfully.
May 14 01:40:56.850186 systemd[1]: session-44.scope: Deactivated successfully.
May 14 01:40:56.853367 systemd-logind[1457]: Session 44 logged out. Waiting for processes to exit.
May 14 01:40:56.856231 systemd-logind[1457]: Removed session 44.
May 14 01:40:57.212207 kubelet[2703]: E0514 01:40:57.212111 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:40:59.469815 containerd[1483]: time="2025-05-14T01:40:59.469644759Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"275503cf7dae54f7bf6ce25a5889ea7915c0577fc2743927dbba44c3830ebf17\" pid:5879 exited_at:{seconds:1747186859 nanos:469349270}"
May 14 01:40:59.511409 containerd[1483]: time="2025-05-14T01:40:59.511360737Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" id:\"a4d0cab3a7d55768197d867cc68a91b32219a98c1475f305b2cdea509b83fe4d\" pid:5898 exited_at:{seconds:1747186859 nanos:510902065}"
May 14 01:41:01.866960 systemd[1]: Started sshd@42-172.24.4.47:22-172.24.4.1:51744.service - OpenSSH per-connection server daemon (172.24.4.1:51744).
May 14 01:41:02.213624 kubelet[2703]: E0514 01:41:02.213392 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:41:03.055266 sshd[5913]: Accepted publickey for core from 172.24.4.1 port 51744 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:41:03.058536 sshd-session[5913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:41:03.071815 systemd-logind[1457]: New session 45 of user core.
May 14 01:41:03.083437 systemd[1]: Started session-45.scope - Session 45 of User core.
May 14 01:41:03.883959 sshd[5915]: Connection closed by 172.24.4.1 port 51744
May 14 01:41:03.885513 sshd-session[5913]: pam_unix(sshd:session): session closed for user core
May 14 01:41:03.891989 systemd[1]: sshd@42-172.24.4.47:22-172.24.4.1:51744.service: Deactivated successfully.
May 14 01:41:03.896779 systemd[1]: session-45.scope: Deactivated successfully.
May 14 01:41:03.901494 systemd-logind[1457]: Session 45 logged out. Waiting for processes to exit.
May 14 01:41:03.904235 systemd-logind[1457]: Removed session 45.
May 14 01:41:07.214701 kubelet[2703]: E0514 01:41:07.214610 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:41:08.916129 systemd[1]: Started sshd@43-172.24.4.47:22-172.24.4.1:32932.service - OpenSSH per-connection server daemon (172.24.4.1:32932).
May 14 01:41:10.097035 sshd[5928]: Accepted publickey for core from 172.24.4.1 port 32932 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:41:10.100680 sshd-session[5928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:41:10.115883 systemd-logind[1457]: New session 46 of user core.
May 14 01:41:10.130526 systemd[1]: Started session-46.scope - Session 46 of User core.
May 14 01:41:11.130272 sshd[5930]: Connection closed by 172.24.4.1 port 32932
May 14 01:41:11.130006 sshd-session[5928]: pam_unix(sshd:session): session closed for user core
May 14 01:41:11.141693 systemd[1]: sshd@43-172.24.4.47:22-172.24.4.1:32932.service: Deactivated successfully.
May 14 01:41:11.147498 systemd[1]: session-46.scope: Deactivated successfully.
May 14 01:41:11.150818 systemd-logind[1457]: Session 46 logged out. Waiting for processes to exit.
May 14 01:41:11.153647 systemd-logind[1457]: Removed session 46.
May 14 01:41:12.217950 kubelet[2703]: E0514 01:41:12.217829 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:41:16.150746 systemd[1]: Started sshd@44-172.24.4.47:22-172.24.4.1:55906.service - OpenSSH per-connection server daemon (172.24.4.1:55906).
May 14 01:41:17.218605 kubelet[2703]: E0514 01:41:17.218523 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:41:17.609177 sshd[5958]: Accepted publickey for core from 172.24.4.1 port 55906 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:41:17.612479 sshd-session[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:41:17.625641 systemd-logind[1457]: New session 47 of user core.
May 14 01:41:17.631401 systemd[1]: Started session-47.scope - Session 47 of User core.
May 14 01:41:18.386117 sshd[5960]: Connection closed by 172.24.4.1 port 55906
May 14 01:41:18.386551 sshd-session[5958]: pam_unix(sshd:session): session closed for user core
May 14 01:41:18.403478 systemd[1]: sshd@44-172.24.4.47:22-172.24.4.1:55906.service: Deactivated successfully.
May 14 01:41:18.408525 systemd[1]: session-47.scope: Deactivated successfully.
May 14 01:41:18.410926 systemd-logind[1457]: Session 47 logged out. Waiting for processes to exit.
May 14 01:41:18.417861 systemd[1]: Started sshd@45-172.24.4.47:22-172.24.4.1:55916.service - OpenSSH per-connection server daemon (172.24.4.1:55916).
May 14 01:41:18.420465 systemd-logind[1457]: Removed session 47.
May 14 01:41:19.767337 sshd[5972]: Accepted publickey for core from 172.24.4.1 port 55916 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:41:19.770522 sshd-session[5972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:41:19.785726 systemd-logind[1457]: New session 48 of user core.
May 14 01:41:19.796438 systemd[1]: Started session-48.scope - Session 48 of User core.
May 14 01:41:20.784532 sshd[5975]: Connection closed by 172.24.4.1 port 55916
May 14 01:41:20.783957 sshd-session[5972]: pam_unix(sshd:session): session closed for user core
May 14 01:41:20.805468 systemd[1]: sshd@45-172.24.4.47:22-172.24.4.1:55916.service: Deactivated successfully.
May 14 01:41:20.810607 systemd[1]: session-48.scope: Deactivated successfully.
May 14 01:41:20.813479 systemd-logind[1457]: Session 48 logged out. Waiting for processes to exit.
May 14 01:41:20.820688 systemd[1]: Started sshd@46-172.24.4.47:22-172.24.4.1:55930.service - OpenSSH per-connection server daemon (172.24.4.1:55930).
May 14 01:41:20.824558 systemd-logind[1457]: Removed session 48.
May 14 01:41:22.219676 kubelet[2703]: E0514 01:41:22.219540 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:41:22.227049 sshd[5985]: Accepted publickey for core from 172.24.4.1 port 55930 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:41:22.230420 sshd-session[5985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:41:22.245196 systemd-logind[1457]: New session 49 of user core.
May 14 01:41:22.253507 systemd[1]: Started session-49.scope - Session 49 of User core.
May 14 01:41:22.851295 sshd[5988]: Connection closed by 172.24.4.1 port 55930
May 14 01:41:22.851125 sshd-session[5985]: pam_unix(sshd:session): session closed for user core
May 14 01:41:22.857946 systemd-logind[1457]: Session 49 logged out. Waiting for processes to exit.
May 14 01:41:22.858869 systemd[1]: sshd@46-172.24.4.47:22-172.24.4.1:55930.service: Deactivated successfully.
May 14 01:41:22.863530 systemd[1]: session-49.scope: Deactivated successfully.
May 14 01:41:22.866702 systemd-logind[1457]: Removed session 49.
May 14 01:41:27.219895 kubelet[2703]: E0514 01:41:27.219796 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:41:27.883790 systemd[1]: Started sshd@47-172.24.4.47:22-172.24.4.1:55720.service - OpenSSH per-connection server daemon (172.24.4.1:55720).
May 14 01:41:29.323540 sshd[6000]: Accepted publickey for core from 172.24.4.1 port 55720 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:41:29.326491 sshd-session[6000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:41:29.341200 systemd-logind[1457]: New session 50 of user core.
May 14 01:41:29.349477 systemd[1]: Started session-50.scope - Session 50 of User core.
May 14 01:41:29.463971 containerd[1483]: time="2025-05-14T01:41:29.463923574Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"4f78b4cf1490e79617c244d7bb000683dd5355afd96fc4e65d89262d97fa546e\" pid:6016 exited_at:{seconds:1747186889 nanos:463596171}"
May 14 01:41:29.511410 containerd[1483]: time="2025-05-14T01:41:29.511340745Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" id:\"f74bef6a750fd389094b91436fe748325762c87fb83f88ff296cf1a624b69968\" pid:6034 exited_at:{seconds:1747186889 nanos:510446099}"
May 14 01:41:30.102805 sshd[6002]: Connection closed by 172.24.4.1 port 55720
May 14 01:41:30.104029 sshd-session[6000]: pam_unix(sshd:session): session closed for user core
May 14 01:41:30.121054 systemd[1]: sshd@47-172.24.4.47:22-172.24.4.1:55720.service: Deactivated successfully.
May 14 01:41:30.125682 systemd[1]: session-50.scope: Deactivated successfully.
May 14 01:41:30.128830 systemd-logind[1457]: Session 50 logged out. Waiting for processes to exit.
May 14 01:41:30.134842 systemd[1]: Started sshd@48-172.24.4.47:22-172.24.4.1:55734.service - OpenSSH per-connection server daemon (172.24.4.1:55734).
May 14 01:41:30.138596 systemd-logind[1457]: Removed session 50.
May 14 01:41:31.280896 sshd[6059]: Accepted publickey for core from 172.24.4.1 port 55734 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:41:31.283836 sshd-session[6059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:41:31.298208 systemd-logind[1457]: New session 51 of user core.
May 14 01:41:31.306427 systemd[1]: Started session-51.scope - Session 51 of User core.
May 14 01:41:32.221023 kubelet[2703]: E0514 01:41:32.220513 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:41:32.387686 sshd[6062]: Connection closed by 172.24.4.1 port 55734
May 14 01:41:32.385657 sshd-session[6059]: pam_unix(sshd:session): session closed for user core
May 14 01:41:32.411048 systemd[1]: sshd@48-172.24.4.47:22-172.24.4.1:55734.service: Deactivated successfully.
May 14 01:41:32.419224 systemd[1]: session-51.scope: Deactivated successfully.
May 14 01:41:32.426507 systemd-logind[1457]: Session 51 logged out. Waiting for processes to exit.
May 14 01:41:32.431914 systemd[1]: Started sshd@49-172.24.4.47:22-172.24.4.1:55738.service - OpenSSH per-connection server daemon (172.24.4.1:55738).
May 14 01:41:32.436666 systemd-logind[1457]: Removed session 51.
May 14 01:41:33.725402 sshd[6072]: Accepted publickey for core from 172.24.4.1 port 55738 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:41:33.728692 sshd-session[6072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:41:33.743179 systemd-logind[1457]: New session 52 of user core.
May 14 01:41:33.747403 systemd[1]: Started session-52.scope - Session 52 of User core.
May 14 01:41:34.941915 kubelet[2703]: E0514 01:41:34.941726 2703 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 01:41:34.941915 kubelet[2703]: E0514 01:41:34.941836 2703 kubelet.go:2886] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 01:41:37.030676 sshd[6075]: Connection closed by 172.24.4.1 port 55738
May 14 01:41:37.033266 sshd-session[6072]: pam_unix(sshd:session): session closed for user core
May 14 01:41:37.047669 systemd[1]: sshd@49-172.24.4.47:22-172.24.4.1:55738.service: Deactivated successfully.
May 14 01:41:37.051685 systemd[1]: session-52.scope: Deactivated successfully.
May 14 01:41:37.052244 systemd[1]: session-52.scope: Consumed 1.062s CPU time, 60.1M memory peak.
May 14 01:41:37.053453 systemd-logind[1457]: Session 52 logged out. Waiting for processes to exit.
May 14 01:41:37.059782 systemd[1]: Started sshd@50-172.24.4.47:22-172.24.4.1:51026.service - OpenSSH per-connection server daemon (172.24.4.1:51026).
May 14 01:41:37.062247 systemd-logind[1457]: Removed session 52.
May 14 01:41:37.232033 kubelet[2703]: E0514 01:41:37.231949 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:41:38.281962 sshd[6093]: Accepted publickey for core from 172.24.4.1 port 51026 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:41:38.286336 sshd-session[6093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:41:38.299949 systemd-logind[1457]: New session 53 of user core.
May 14 01:41:38.310438 systemd[1]: Started session-53.scope - Session 53 of User core.
May 14 01:41:39.275123 sshd[6096]: Connection closed by 172.24.4.1 port 51026
May 14 01:41:39.276672 sshd-session[6093]: pam_unix(sshd:session): session closed for user core
May 14 01:41:39.292334 systemd[1]: sshd@50-172.24.4.47:22-172.24.4.1:51026.service: Deactivated successfully.
May 14 01:41:39.297174 systemd[1]: session-53.scope: Deactivated successfully.
May 14 01:41:39.302502 systemd-logind[1457]: Session 53 logged out. Waiting for processes to exit.
May 14 01:41:39.306863 systemd[1]: Started sshd@51-172.24.4.47:22-172.24.4.1:51030.service - OpenSSH per-connection server daemon (172.24.4.1:51030).
May 14 01:41:39.312786 systemd-logind[1457]: Removed session 53.
May 14 01:41:40.596849 sshd[6113]: Accepted publickey for core from 172.24.4.1 port 51030 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:41:40.601272 sshd-session[6113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:41:40.624779 systemd-logind[1457]: New session 54 of user core.
May 14 01:41:40.630543 systemd[1]: Started session-54.scope - Session 54 of User core.
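Annotation: the two kubelet records at 01:41:34 above point at the likely cause of the recurring "Skipping pod synchronization" errors in this log: the CRI Status RPC to the container runtime times out (DeadlineExceeded), the runtime sanity check fails, and the sync loop is skipped on each retry, which in this log recurs roughly every five seconds. A minimal sketch for confirming that cadence from a saved journal dump; the journal.log path is hypothetical:

import re
from datetime import datetime

# Hypothetical file holding the journal text above.
LOG_PATH = "journal.log"
STAMP = re.compile(r"^May 14 (\d{2}:\d{2}:\d{2}\.\d+) ")

stamps = []
with open(LOG_PATH) as fh:
    for line in fh:
        if '"Skipping pod synchronization"' in line:
            m = STAMP.match(line)
            if m:
                stamps.append(datetime.strptime(m.group(1), "%H:%M:%S.%f"))

# Inter-arrival gaps between consecutive occurrences.
gaps = [(b - a).total_seconds() for a, b in zip(stamps, stamps[1:])]
print(len(stamps), "occurrences")
if gaps:
    print("median gap: %.1fs" % sorted(gaps)[len(gaps) // 2])  # roughly 5s in this log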
May 14 01:41:41.378676 sshd[6116]: Connection closed by 172.24.4.1 port 51030
May 14 01:41:41.378404 sshd-session[6113]: pam_unix(sshd:session): session closed for user core
May 14 01:41:41.386943 systemd-logind[1457]: Session 54 logged out. Waiting for processes to exit.
May 14 01:41:41.391549 systemd[1]: sshd@51-172.24.4.47:22-172.24.4.1:51030.service: Deactivated successfully.
May 14 01:41:41.400746 systemd[1]: session-54.scope: Deactivated successfully.
May 14 01:41:41.404628 systemd-logind[1457]: Removed session 54.
May 14 01:41:42.233352 kubelet[2703]: E0514 01:41:42.233237 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:41:46.398851 systemd[1]: Started sshd@52-172.24.4.47:22-172.24.4.1:50752.service - OpenSSH per-connection server daemon (172.24.4.1:50752).
May 14 01:41:47.233749 kubelet[2703]: E0514 01:41:47.233597 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:41:47.652636 sshd[6135]: Accepted publickey for core from 172.24.4.1 port 50752 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:41:47.655569 sshd-session[6135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:41:47.668557 systemd-logind[1457]: New session 55 of user core.
May 14 01:41:47.677416 systemd[1]: Started session-55.scope - Session 55 of User core.
May 14 01:41:48.620221 sshd[6137]: Connection closed by 172.24.4.1 port 50752
May 14 01:41:48.621175 sshd-session[6135]: pam_unix(sshd:session): session closed for user core
May 14 01:41:48.628842 systemd[1]: sshd@52-172.24.4.47:22-172.24.4.1:50752.service: Deactivated successfully.
May 14 01:41:48.635142 systemd[1]: session-55.scope: Deactivated successfully.
May 14 01:41:48.639156 systemd-logind[1457]: Session 55 logged out. Waiting for processes to exit.
May 14 01:41:48.641630 systemd-logind[1457]: Removed session 55.
May 14 01:41:52.233928 kubelet[2703]: E0514 01:41:52.233839 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:41:53.649643 systemd[1]: Started sshd@53-172.24.4.47:22-172.24.4.1:52686.service - OpenSSH per-connection server daemon (172.24.4.1:52686).
May 14 01:41:55.005096 sshd[6148]: Accepted publickey for core from 172.24.4.1 port 52686 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:41:55.006408 sshd-session[6148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:41:55.014551 systemd-logind[1457]: New session 56 of user core.
May 14 01:41:55.018221 systemd[1]: Started session-56.scope - Session 56 of User core.
May 14 01:41:55.840218 sshd[6150]: Connection closed by 172.24.4.1 port 52686
May 14 01:41:55.841983 sshd-session[6148]: pam_unix(sshd:session): session closed for user core
May 14 01:41:55.849896 systemd[1]: sshd@53-172.24.4.47:22-172.24.4.1:52686.service: Deactivated successfully.
May 14 01:41:55.862052 systemd[1]: session-56.scope: Deactivated successfully.
May 14 01:41:55.867649 systemd-logind[1457]: Session 56 logged out. Waiting for processes to exit.
May 14 01:41:55.870740 systemd-logind[1457]: Removed session 56.
May 14 01:41:56.174156 containerd[1483]: time="2025-05-14T01:41:56.174034694Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"16c94a9cd740e6bdeca4c1253039d740496012bd8455db6d601e5c161c04d9d3\" pid:6177 exited_at:{seconds:1747186916 nanos:172579462}"
May 14 01:41:57.235219 kubelet[2703]: E0514 01:41:57.235168 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:41:59.443346 containerd[1483]: time="2025-05-14T01:41:59.443182711Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"5a22202a615cc81d33e414481b4806513d4ddaa6722676d421badf677c1a3ca0\" pid:6198 exited_at:{seconds:1747186919 nanos:438678219}"
May 14 01:41:59.507886 containerd[1483]: time="2025-05-14T01:41:59.507780610Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" id:\"2b7339bdc0e2f81e4825a214a2294baa2e8871f417e0419e366435728040a04e\" pid:6220 exited_at:{seconds:1747186919 nanos:507471808}"
May 14 01:42:00.866424 systemd[1]: Started sshd@54-172.24.4.47:22-172.24.4.1:52698.service - OpenSSH per-connection server daemon (172.24.4.1:52698).
May 14 01:42:02.126543 sshd[6233]: Accepted publickey for core from 172.24.4.1 port 52698 ssh2: RSA SHA256:qWVUa4j2Mk837kUhXqOo5bNJ4Bss/ICvMhdgTGUV2dI
May 14 01:42:02.130035 sshd-session[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:42:02.143406 systemd-logind[1457]: New session 57 of user core.
May 14 01:42:02.153385 systemd[1]: Started session-57.scope - Session 57 of User core.
May 14 01:42:02.236440 kubelet[2703]: E0514 01:42:02.236366 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:42:02.857955 sshd[6235]: Connection closed by 172.24.4.1 port 52698
May 14 01:42:02.859603 sshd-session[6233]: pam_unix(sshd:session): session closed for user core
May 14 01:42:02.868532 systemd-logind[1457]: Session 57 logged out. Waiting for processes to exit.
May 14 01:42:02.869272 systemd[1]: sshd@54-172.24.4.47:22-172.24.4.1:52698.service: Deactivated successfully.
May 14 01:42:02.876197 systemd[1]: session-57.scope: Deactivated successfully.
May 14 01:42:02.883207 systemd-logind[1457]: Removed session 57.
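Annotation: the paired containerd TaskExit events (container_id 6a8144de... and 78d86be2..., a fresh task id and pid each time, on a roughly thirty-second cycle) are consistent with periodic exec probes inside two long-lived containers rather than container restarts; most carry no exit_status field, while a few later entries below report exit_status:1. A minimal sketch for grouping the events per container and measuring that cadence; the journal.log path is hypothetical:

import re
from collections import defaultdict
from datetime import datetime, timezone

# Hypothetical file holding the journal text above.
LOG_PATH = "journal.log"

# The containerd message embeds escaped quotes, e.g. container_id:\"6a8144de...\"
EVENT = re.compile(r'container_id:\\"([0-9a-f]+)\\".*?exited_at:\{seconds:(\d+)')

events = defaultdict(list)
with open(LOG_PATH) as fh:
    for line in fh:
        if "TaskExit event" in line:
            m = EVENT.search(line)
            if m:
                ts = datetime.fromtimestamp(int(m.group(2)), tz=timezone.utc)
                events[m.group(1)].append(ts)

for cid, times in sorted(events.items()):
    # Mixed gap lengths per container would suggest more than one periodic exec.
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    print(cid[:12], len(times), "events, gaps:", gaps)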
May 14 01:42:07.237936 kubelet[2703]: E0514 01:42:07.237829 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:42:12.238662 kubelet[2703]: E0514 01:42:12.238553 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:42:17.239411 kubelet[2703]: E0514 01:42:17.239318 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:42:22.240874 kubelet[2703]: E0514 01:42:22.240765 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:42:27.241889 kubelet[2703]: E0514 01:42:27.241782 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:42:29.513833 containerd[1483]: time="2025-05-14T01:42:29.513727552Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"1a3e350818f869baecf9d46f1c78298c0ad6b6e0735e1895bacd432c6146eb25\" pid:6260 exited_at:{seconds:1747186949 nanos:513348553}"
May 14 01:42:29.549883 containerd[1483]: time="2025-05-14T01:42:29.549830429Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" id:\"705ab33893723c8433294d9a31620069a7275320136d9a1261925160793510a6\" pid:6277 exited_at:{seconds:1747186949 nanos:549267753}"
May 14 01:42:32.242945 kubelet[2703]: E0514 01:42:32.242867 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:42:37.244002 kubelet[2703]: E0514 01:42:37.243937 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:42:42.244736 kubelet[2703]: E0514 01:42:42.244654 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:42:47.245833 kubelet[2703]: E0514 01:42:47.245765 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:42:52.245988 kubelet[2703]: E0514 01:42:52.245926 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:42:56.218135 containerd[1483]: time="2025-05-14T01:42:56.217975751Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"ae46b9523c8dd7192e38544bff7dc9c3aff071a7c2337703cb73382d11f79c8a\" pid:6339 exited_at:{seconds:1747186976 nanos:217567345}"
May 14 01:42:57.247395 kubelet[2703]: E0514 01:42:57.247291 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:42:59.491621 containerd[1483]: time="2025-05-14T01:42:59.491482914Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"90632ccb888c7928c27bb3e78809bb5b112ddfb767bd158490a5a78832670157\" pid:6362 exited_at:{seconds:1747186979 nanos:491209645}"
May 14 01:42:59.554208 containerd[1483]: time="2025-05-14T01:42:59.554040246Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" id:\"a82a8384ed1c4b238b47a95bfd3c9f037fbb97520082ee2055115b6a901eb1d2\" pid:6381 exited_at:{seconds:1747186979 nanos:553463808}"
May 14 01:43:02.248138 kubelet[2703]: E0514 01:43:02.247948 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:43:07.248541 kubelet[2703]: E0514 01:43:07.248455 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:43:12.249416 kubelet[2703]: E0514 01:43:12.249332 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:43:17.250253 kubelet[2703]: E0514 01:43:17.250157 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:43:22.250567 kubelet[2703]: E0514 01:43:22.250462 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:43:27.251034 kubelet[2703]: E0514 01:43:27.250665 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:43:29.505334 containerd[1483]: time="2025-05-14T01:43:29.505035017Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"ae7c62e878690539999b3559ef1c9c94258050e848f75208ea57e3f2a2bb116f\" pid:6409 exited_at:{seconds:1747187009 nanos:504596310}"
May 14 01:43:29.565256 containerd[1483]: time="2025-05-14T01:43:29.564733502Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" id:\"bd2eb03fc65df0ec249efa611c9cd40945b30432d533b9651df1eecb66ecd02c\" pid:6430 exited_at:{seconds:1747187009 nanos:564156250}"
May 14 01:43:32.251825 kubelet[2703]: E0514 01:43:32.251453 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:43:37.251869 kubelet[2703]: E0514 01:43:37.251773 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:43:39.943285 kubelet[2703]: E0514 01:43:39.943048 2703 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 01:43:39.943285 kubelet[2703]: E0514 01:43:39.943211 2703 kubelet.go:2886] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 01:43:42.252109 kubelet[2703]: E0514 01:43:42.251995 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:43:47.252669 kubelet[2703]: E0514 01:43:47.252458 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:43:52.253723 kubelet[2703]: E0514 01:43:52.253636 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:43:56.211403 containerd[1483]: time="2025-05-14T01:43:56.211335465Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"3c2c82c7146e66f62e5d0c3fcb0bb3536fa43784f6997e872d309bc8b7be21db\" pid:6460 exited_at:{seconds:1747187036 nanos:210782978}"
May 14 01:43:57.254506 kubelet[2703]: E0514 01:43:57.254348 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:43:59.454135 containerd[1483]: time="2025-05-14T01:43:59.454086502Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"ce8a0797b1bc32b6974897d52c25e96f3d1075e97c84bbcaeb6eaab3f6660b93\" pid:6491 exited_at:{seconds:1747187039 nanos:453187654}"
May 14 01:43:59.511641 containerd[1483]: time="2025-05-14T01:43:59.511570290Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" id:\"0ce190d3073ece3ee3854a097f6fdb4900f257de6499e28a276b92c9dd91952c\" pid:6512 exited_at:{seconds:1747187039 nanos:511120029}"
May 14 01:44:02.255003 kubelet[2703]: E0514 01:44:02.254922 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:44:07.255576 kubelet[2703]: E0514 01:44:07.255483 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:44:12.255804 kubelet[2703]: E0514 01:44:12.255703 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:44:17.257002 kubelet[2703]: E0514 01:44:17.256854 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:44:22.257703 kubelet[2703]: E0514 01:44:22.257589 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:44:27.258188 kubelet[2703]: E0514 01:44:27.258030 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:44:29.520638 containerd[1483]: time="2025-05-14T01:44:29.520485595Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"68c42e6b0e36bce28ac1538803b015b044ac5cf0adbb7a722e07b6c66d148283\" pid:6553 exit_status:1 exited_at:{seconds:1747187069 nanos:519715940}"
May 14 01:44:29.575199 containerd[1483]: time="2025-05-14T01:44:29.575146194Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" id:\"6ae0ef8dacf09a3ec3fbb29e36702bf37637ed5a6e94464ea086f6c0974b5571\" pid:6570 exited_at:{seconds:1747187069 nanos:574506879}"
May 14 01:44:32.258848 kubelet[2703]: E0514 01:44:32.258759 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:44:37.259122 kubelet[2703]: E0514 01:44:37.258973 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:44:42.259628 kubelet[2703]: E0514 01:44:42.259521 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:44:47.260603 kubelet[2703]: E0514 01:44:47.260494 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:44:52.261307 kubelet[2703]: E0514 01:44:52.261208 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:44:56.184366 containerd[1483]: time="2025-05-14T01:44:56.184260432Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"1d688e16f34159bff10019bc7114e93cfada8a3d467cea8a0b910da1f54102a9\" pid:6601 exit_status:1 exited_at:{seconds:1747187096 nanos:181658859}"
May 14 01:44:57.261601 kubelet[2703]: E0514 01:44:57.261462 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:44:59.490739 containerd[1483]: time="2025-05-14T01:44:59.490653307Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a8144defce98c4992c5647a5c40209cea423ac7f0ac7691da9927f6620d6613\" id:\"e8c341c850705f57e1cb31d645513b9144204279f52a90c212208ec49df10ac8\" pid:6624 exit_status:1 exited_at:{seconds:1747187099 nanos:490308815}"
May 14 01:44:59.553663 containerd[1483]: time="2025-05-14T01:44:59.553552709Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d86be2665603c87c4b2884d1e2a0a754df37c50ba7352d793d22e9048bee46\" id:\"1f7cc6d5832054b5bee3f9d314cf56607813a4362dcfcda7991ac4dfb53618b7\" pid:6640 exited_at:{seconds:1747187099 nanos:553144074}"
May 14 01:45:02.261927 kubelet[2703]: E0514 01:45:02.261736 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
May 14 01:45:07.262985 kubelet[2703]: E0514 01:45:07.262905 2703 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"