Jul 11 07:55:22.987396 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jul 11 03:36:05 -00 2025
Jul 11 07:55:22.987424 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=dfe1af008de84ad21c9c6e2b52b45ca0aecff9e5872ea6ea8c4ddf6ebe77d5c1
Jul 11 07:55:22.987435 kernel: BIOS-provided physical RAM map:
Jul 11 07:55:22.987444 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 11 07:55:22.987452 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 11 07:55:22.987459 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 11 07:55:22.987468 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jul 11 07:55:22.987476 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jul 11 07:55:22.987483 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 11 07:55:22.987491 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 11 07:55:22.987499 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jul 11 07:55:22.987507 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 11 07:55:22.987517 kernel: NX (Execute Disable) protection: active
Jul 11 07:55:22.987525 kernel: APIC: Static calls initialized
Jul 11 07:55:22.987534 kernel: SMBIOS 3.0.0 present.
Jul 11 07:55:22.987542 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jul 11 07:55:22.987550 kernel: DMI: Memory slots populated: 1/1
Jul 11 07:55:22.987559 kernel: Hypervisor detected: KVM
Jul 11 07:55:22.987567 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 11 07:55:22.987575 kernel: kvm-clock: using sched offset of 5441192404 cycles
Jul 11 07:55:22.987584 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 11 07:55:22.987593 kernel: tsc: Detected 1996.249 MHz processor
Jul 11 07:55:22.987602 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 11 07:55:22.987611 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 11 07:55:22.987619 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jul 11 07:55:22.987628 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 11 07:55:22.987639 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 11 07:55:22.987647 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jul 11 07:55:22.987656 kernel: ACPI: Early table checksum verification disabled
Jul 11 07:55:22.987664 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jul 11 07:55:22.987672 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 07:55:22.987681 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 07:55:22.987690 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 07:55:22.987698 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jul 11 07:55:22.987707 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 07:55:22.987717 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 07:55:22.987725 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jul 11 07:55:22.987733 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jul 11 07:55:22.987742 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jul 11 07:55:22.987750 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jul 11 07:55:22.987761 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jul 11 07:55:22.987770 kernel: No NUMA configuration found
Jul 11 07:55:22.987780 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jul 11 07:55:22.987789 kernel: NODE_DATA(0) allocated [mem 0x13fff5dc0-0x13fffcfff]
Jul 11 07:55:22.987798 kernel: Zone ranges:
Jul 11 07:55:22.987825 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 11 07:55:22.987834 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 11 07:55:22.987843 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jul 11 07:55:22.987851 kernel: Device empty
Jul 11 07:55:22.987860 kernel: Movable zone start for each node
Jul 11 07:55:22.987871 kernel: Early memory node ranges
Jul 11 07:55:22.987879 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 11 07:55:22.987888 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jul 11 07:55:22.987896 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jul 11 07:55:22.987905 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jul 11 07:55:22.987914 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 11 07:55:22.987923 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 11 07:55:22.987931 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jul 11 07:55:22.987940 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 11 07:55:22.987950 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 11 07:55:22.987959 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 11 07:55:22.987968 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 11 07:55:22.987976 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 11 07:55:22.987985 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 11 07:55:22.987994 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 11 07:55:22.988002 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 11 07:55:22.988011 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 11 07:55:22.988020 kernel: CPU topo: Max. logical packages: 2
Jul 11 07:55:22.988030 kernel: CPU topo: Max. logical dies: 2
Jul 11 07:55:22.988039 kernel: CPU topo: Max. dies per package: 1
Jul 11 07:55:22.988047 kernel: CPU topo: Max. threads per core: 1
Jul 11 07:55:22.988056 kernel: CPU topo: Num. cores per package: 1
Jul 11 07:55:22.988064 kernel: CPU topo: Num. threads per package: 1
Jul 11 07:55:22.988073 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 11 07:55:22.988082 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 11 07:55:22.988091 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jul 11 07:55:22.988099 kernel: Booting paravirtualized kernel on KVM
Jul 11 07:55:22.988110 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 11 07:55:22.988119 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 11 07:55:22.988128 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 11 07:55:22.988137 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 11 07:55:22.988145 kernel: pcpu-alloc: [0] 0 1
Jul 11 07:55:22.988153 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 11 07:55:22.988163 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=dfe1af008de84ad21c9c6e2b52b45ca0aecff9e5872ea6ea8c4ddf6ebe77d5c1
Jul 11 07:55:22.988173 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 11 07:55:22.988183 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 11 07:55:22.988192 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 11 07:55:22.988200 kernel: Fallback order for Node 0: 0
Jul 11 07:55:22.988209 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Jul 11 07:55:22.988217 kernel: Policy zone: Normal
Jul 11 07:55:22.988226 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 11 07:55:22.988235 kernel: software IO TLB: area num 2.
Jul 11 07:55:22.988243 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 11 07:55:22.988252 kernel: ftrace: allocating 40097 entries in 157 pages
Jul 11 07:55:22.988262 kernel: ftrace: allocated 157 pages with 5 groups
Jul 11 07:55:22.988270 kernel: Dynamic Preempt: voluntary
Jul 11 07:55:22.988279 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 11 07:55:22.988288 kernel: rcu: RCU event tracing is enabled.
Jul 11 07:55:22.988297 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 11 07:55:22.988306 kernel: Trampoline variant of Tasks RCU enabled.
Jul 11 07:55:22.988315 kernel: Rude variant of Tasks RCU enabled.
Jul 11 07:55:22.988323 kernel: Tracing variant of Tasks RCU enabled.
Jul 11 07:55:22.988332 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 11 07:55:22.988343 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 11 07:55:22.988352 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 11 07:55:22.988361 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 11 07:55:22.988369 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 11 07:55:22.988378 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 11 07:55:22.988387 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 11 07:55:22.988395 kernel: Console: colour VGA+ 80x25
Jul 11 07:55:22.988404 kernel: printk: legacy console [tty0] enabled
Jul 11 07:55:22.988413 kernel: printk: legacy console [ttyS0] enabled
Jul 11 07:55:22.988423 kernel: ACPI: Core revision 20240827
Jul 11 07:55:22.988432 kernel: APIC: Switch to symmetric I/O mode setup
Jul 11 07:55:22.988440 kernel: x2apic enabled
Jul 11 07:55:22.988449 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 11 07:55:22.988457 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 11 07:55:22.988466 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 11 07:55:22.988481 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jul 11 07:55:22.988492 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 11 07:55:22.988501 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 11 07:55:22.988510 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 11 07:55:22.988519 kernel: Spectre V2 : Mitigation: Retpolines
Jul 11 07:55:22.988529 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 11 07:55:22.988540 kernel: Speculative Store Bypass: Vulnerable
Jul 11 07:55:22.988549 kernel: x86/fpu: x87 FPU will use FXSAVE
Jul 11 07:55:22.988558 kernel: Freeing SMP alternatives memory: 32K
Jul 11 07:55:22.988567 kernel: pid_max: default: 32768 minimum: 301
Jul 11 07:55:22.988576 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 11 07:55:22.988586 kernel: landlock: Up and running.
Jul 11 07:55:22.988595 kernel: SELinux: Initializing.
Jul 11 07:55:22.988604 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 07:55:22.988613 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 07:55:22.988623 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jul 11 07:55:22.988632 kernel: Performance Events: AMD PMU driver.
Jul 11 07:55:22.988641 kernel: ... version: 0
Jul 11 07:55:22.988650 kernel: ... bit width: 48
Jul 11 07:55:22.988658 kernel: ... generic registers: 4
Jul 11 07:55:22.988669 kernel: ... value mask: 0000ffffffffffff
Jul 11 07:55:22.988678 kernel: ... max period: 00007fffffffffff
Jul 11 07:55:22.988687 kernel: ... fixed-purpose events: 0
Jul 11 07:55:22.988696 kernel: ... event mask: 000000000000000f
Jul 11 07:55:22.988871 kernel: signal: max sigframe size: 1440
Jul 11 07:55:22.988885 kernel: rcu: Hierarchical SRCU implementation.
Jul 11 07:55:22.988894 kernel: rcu: Max phase no-delay instances is 400.
Jul 11 07:55:22.988903 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 11 07:55:22.988912 kernel: smp: Bringing up secondary CPUs ...
Jul 11 07:55:22.988925 kernel: smpboot: x86: Booting SMP configuration:
Jul 11 07:55:22.988934 kernel: .... node #0, CPUs: #1
Jul 11 07:55:22.988943 kernel: smp: Brought up 1 node, 2 CPUs
Jul 11 07:55:22.988952 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jul 11 07:55:22.988962 kernel: Memory: 3961272K/4193772K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54620K init, 2348K bss, 227296K reserved, 0K cma-reserved)
Jul 11 07:55:22.988971 kernel: devtmpfs: initialized
Jul 11 07:55:22.988980 kernel: x86/mm: Memory block size: 128MB
Jul 11 07:55:22.988989 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 11 07:55:22.988998 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 11 07:55:22.989009 kernel: pinctrl core: initialized pinctrl subsystem
Jul 11 07:55:22.989018 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 11 07:55:22.989027 kernel: audit: initializing netlink subsys (disabled)
Jul 11 07:55:22.989036 kernel: audit: type=2000 audit(1752220519.231:1): state=initialized audit_enabled=0 res=1
Jul 11 07:55:22.989045 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 11 07:55:22.989054 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 11 07:55:22.989063 kernel: cpuidle: using governor menu
Jul 11 07:55:22.989072 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 11 07:55:22.989081 kernel: dca service started, version 1.12.1
Jul 11 07:55:22.989092 kernel: PCI: Using configuration type 1 for base access
Jul 11 07:55:22.989102 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 11 07:55:22.989111 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 11 07:55:22.989120 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 11 07:55:22.989129 kernel: ACPI: Added _OSI(Module Device)
Jul 11 07:55:22.989138 kernel: ACPI: Added _OSI(Processor Device)
Jul 11 07:55:22.989147 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 11 07:55:22.989156 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 11 07:55:22.989165 kernel: ACPI: Interpreter enabled
Jul 11 07:55:22.989176 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 11 07:55:22.989185 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 11 07:55:22.989194 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 11 07:55:22.989203 kernel: PCI: Using E820 reservations for host bridge windows
Jul 11 07:55:22.989212 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 11 07:55:22.989221 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 11 07:55:22.989367 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 11 07:55:22.989480 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 11 07:55:22.989571 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 11 07:55:22.989585 kernel: acpiphp: Slot [3] registered
Jul 11 07:55:22.989594 kernel: acpiphp: Slot [4] registered
Jul 11 07:55:22.989603 kernel: acpiphp: Slot [5] registered
Jul 11 07:55:22.989612 kernel: acpiphp: Slot [6] registered
Jul 11 07:55:22.989621 kernel: acpiphp: Slot [7] registered
Jul 11 07:55:22.989630 kernel: acpiphp: Slot [8] registered
Jul 11 07:55:22.989639 kernel: acpiphp: Slot [9] registered
Jul 11 07:55:22.989651 kernel: acpiphp: Slot [10] registered
Jul 11 07:55:22.989660 kernel: acpiphp: Slot [11] registered
Jul 11 07:55:22.989669 kernel: acpiphp: Slot [12] registered
Jul 11 07:55:22.989678 kernel: acpiphp: Slot [13] registered
Jul 11 07:55:22.989687 kernel: acpiphp: Slot [14] registered
Jul 11 07:55:22.989696 kernel: acpiphp: Slot [15] registered
Jul 11 07:55:22.989705 kernel: acpiphp: Slot [16] registered
Jul 11 07:55:22.989713 kernel: acpiphp: Slot [17] registered
Jul 11 07:55:22.989722 kernel: acpiphp: Slot [18] registered
Jul 11 07:55:22.989731 kernel: acpiphp: Slot [19] registered
Jul 11 07:55:22.989742 kernel: acpiphp: Slot [20] registered
Jul 11 07:55:22.989750 kernel: acpiphp: Slot [21] registered
Jul 11 07:55:22.989760 kernel: acpiphp: Slot [22] registered
Jul 11 07:55:22.989768 kernel: acpiphp: Slot [23] registered
Jul 11 07:55:22.989777 kernel: acpiphp: Slot [24] registered
Jul 11 07:55:22.989786 kernel: acpiphp: Slot [25] registered
Jul 11 07:55:22.989795 kernel: acpiphp: Slot [26] registered
Jul 11 07:55:22.991915 kernel: acpiphp: Slot [27] registered
Jul 11 07:55:22.991929 kernel: acpiphp: Slot [28] registered
Jul 11 07:55:22.991942 kernel: acpiphp: Slot [29] registered
Jul 11 07:55:22.991951 kernel: acpiphp: Slot [30] registered
Jul 11 07:55:22.991961 kernel: acpiphp: Slot [31] registered
Jul 11 07:55:22.991970 kernel: PCI host bridge to bus 0000:00
Jul 11 07:55:22.992090 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 11 07:55:22.992172 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 11 07:55:22.992251 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 11 07:55:22.992327 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 11 07:55:22.992407 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jul 11 07:55:22.992482 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 11 07:55:22.992585 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jul 11 07:55:22.992690 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jul 11 07:55:22.992792 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jul 11 07:55:22.992914 kernel: pci 0000:00:01.1: BAR 4 [io 0xc120-0xc12f]
Jul 11 07:55:22.993006 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Jul 11 07:55:22.993092 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Jul 11 07:55:22.993178 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Jul 11 07:55:22.993263 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Jul 11 07:55:22.993358 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jul 11 07:55:22.993466 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 11 07:55:22.993554 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 11 07:55:22.993655 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jul 11 07:55:22.993771 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jul 11 07:55:22.993879 kernel: pci 0000:00:02.0: BAR 2 [mem 0xc000000000-0xc000003fff 64bit pref]
Jul 11 07:55:22.993968 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jul 11 07:55:22.994055 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jul 11 07:55:22.994145 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 11 07:55:22.994248 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 11 07:55:22.994337 kernel: pci 0000:00:03.0: BAR 0 [io 0xc080-0xc0bf]
Jul 11 07:55:22.994424 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jul 11 07:55:22.994510 kernel: pci 0000:00:03.0: BAR 4 [mem 0xc000004000-0xc000007fff 64bit pref]
Jul 11 07:55:22.994596 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jul 11 07:55:22.994695 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 11 07:55:22.994785 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
Jul 11 07:55:22.994899 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jul 11 07:55:22.994987 kernel: pci 0000:00:04.0: BAR 4 [mem 0xc000008000-0xc00000bfff 64bit pref]
Jul 11 07:55:22.995083 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jul 11 07:55:22.995173 kernel: pci 0000:00:05.0: BAR 0 [io 0xc0c0-0xc0ff]
Jul 11 07:55:22.995261 kernel: pci 0000:00:05.0: BAR 4 [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jul 11 07:55:22.995356 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 11 07:55:22.995450 kernel: pci 0000:00:06.0: BAR 0 [io 0xc100-0xc11f]
Jul 11 07:55:22.995561 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfeb93000-0xfeb93fff]
Jul 11 07:55:22.995649 kernel: pci 0000:00:06.0: BAR 4 [mem 0xc000010000-0xc000013fff 64bit pref]
Jul 11 07:55:22.995663 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 11 07:55:22.995673 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 11 07:55:22.995682 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 11 07:55:22.995691 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 11 07:55:22.995700 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 11 07:55:22.995710 kernel: iommu: Default domain type: Translated
Jul 11 07:55:22.995719 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 11 07:55:22.995732 kernel: PCI: Using ACPI for IRQ routing
Jul 11 07:55:22.995742 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 11 07:55:22.995751 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 11 07:55:22.995760 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jul 11 07:55:22.995882 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 11 07:55:22.995972 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 11 07:55:22.996058 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 11 07:55:22.996071 kernel: vgaarb: loaded
Jul 11 07:55:22.996085 kernel: clocksource: Switched to clocksource kvm-clock
Jul 11 07:55:22.996094 kernel: VFS: Disk quotas dquot_6.6.0
Jul 11 07:55:22.996103 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 11 07:55:22.996112 kernel: pnp: PnP ACPI init
Jul 11 07:55:22.996210 kernel: pnp 00:03: [dma 2]
Jul 11 07:55:22.996226 kernel: pnp: PnP ACPI: found 5 devices
Jul 11 07:55:22.996235 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 11 07:55:22.996245 kernel: NET: Registered PF_INET protocol family
Jul 11 07:55:22.996254 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 11 07:55:22.996267 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 11 07:55:22.996276 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 11 07:55:22.996286 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 11 07:55:22.996295 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 11 07:55:22.996304 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 11 07:55:22.996314 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 07:55:22.996323 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 07:55:22.996333 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 11 07:55:22.996342 kernel: NET: Registered PF_XDP protocol family
Jul 11 07:55:22.996423 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 11 07:55:22.996502 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 11 07:55:22.996578 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 11 07:55:22.996653 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jul 11 07:55:22.996728 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jul 11 07:55:22.996838 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 11 07:55:22.996929 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 11 07:55:22.996947 kernel: PCI: CLS 0 bytes, default 64
Jul 11 07:55:22.996956 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 11 07:55:22.996966 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jul 11 07:55:22.996975 kernel: Initialise system trusted keyrings
Jul 11 07:55:22.996984 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 11 07:55:22.996993 kernel: Key type asymmetric registered
Jul 11 07:55:22.997002 kernel: Asymmetric key parser 'x509' registered
Jul 11 07:55:22.997011 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 11 07:55:22.997020 kernel: io scheduler mq-deadline registered
Jul 11 07:55:22.997031 kernel: io scheduler kyber registered
Jul 11 07:55:22.997040 kernel: io scheduler bfq registered
Jul 11 07:55:22.997050 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 11 07:55:22.997060 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jul 11 07:55:22.997069 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 11 07:55:22.997078 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 11 07:55:22.997087 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 11 07:55:22.997097 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 11 07:55:22.997106 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 11 07:55:22.997117 kernel: random: crng init done
Jul 11 07:55:22.997126 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 11 07:55:22.997135 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 11 07:55:22.997145 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 11 07:55:22.997154 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 11 07:55:22.997242 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 11 07:55:22.997324 kernel: rtc_cmos 00:04: registered as rtc0
Jul 11 07:55:22.997418 kernel: rtc_cmos 00:04: setting system clock to 2025-07-11T07:55:22 UTC (1752220522)
Jul 11 07:55:22.997503 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 11 07:55:22.997516 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 11 07:55:22.997526 kernel: NET: Registered PF_INET6 protocol family
Jul 11 07:55:22.997536 kernel: Segment Routing with IPv6
Jul 11 07:55:22.997545 kernel: In-situ OAM (IOAM) with IPv6
Jul 11 07:55:22.997554 kernel: NET: Registered PF_PACKET protocol family
Jul 11 07:55:22.997563 kernel: Key type dns_resolver registered
Jul 11 07:55:22.997572 kernel: IPI shorthand broadcast: enabled
Jul 11 07:55:22.997581 kernel: sched_clock: Marking stable (3791006975, 186631039)->(4011200595, -33562581)
Jul 11 07:55:22.997593 kernel: registered taskstats version 1
Jul 11 07:55:22.997602 kernel: Loading compiled-in X.509 certificates
Jul 11 07:55:22.997611 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 9703a4b3d6547675037b9597aa24472a5380cc2e'
Jul 11 07:55:22.997620 kernel: Demotion targets for Node 0: null
Jul 11 07:55:22.997630 kernel: Key type .fscrypt registered
Jul 11 07:55:22.997639 kernel: Key type fscrypt-provisioning registered
Jul 11 07:55:22.997648 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 11 07:55:22.997657 kernel: ima: Allocated hash algorithm: sha1
Jul 11 07:55:22.997666 kernel: ima: No architecture policies found
Jul 11 07:55:22.997677 kernel: clk: Disabling unused clocks
Jul 11 07:55:22.997687 kernel: Warning: unable to open an initial console.
Jul 11 07:55:22.997696 kernel: Freeing unused kernel image (initmem) memory: 54620K
Jul 11 07:55:22.997705 kernel: Write protecting the kernel read-only data: 24576k
Jul 11 07:55:22.997714 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 11 07:55:22.997724 kernel: Run /init as init process
Jul 11 07:55:22.997733 kernel: with arguments:
Jul 11 07:55:22.997742 kernel: /init
Jul 11 07:55:22.997751 kernel: with environment:
Jul 11 07:55:22.997761 kernel: HOME=/
Jul 11 07:55:22.997770 kernel: TERM=linux
Jul 11 07:55:22.997779 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 11 07:55:22.997790 systemd[1]: Successfully made /usr/ read-only.
Jul 11 07:55:22.998903 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 11 07:55:22.998921 systemd[1]: Detected virtualization kvm.
Jul 11 07:55:22.998932 systemd[1]: Detected architecture x86-64.
Jul 11 07:55:22.998953 systemd[1]: Running in initrd.
Jul 11 07:55:22.998965 systemd[1]: No hostname configured, using default hostname.
Jul 11 07:55:22.998975 systemd[1]: Hostname set to .
Jul 11 07:55:22.998985 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 07:55:22.998995 systemd[1]: Queued start job for default target initrd.target.
Jul 11 07:55:22.999006 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 07:55:22.999018 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 07:55:22.999029 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 11 07:55:22.999039 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 07:55:22.999050 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 11 07:55:22.999061 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 11 07:55:22.999072 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 11 07:55:22.999084 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 11 07:55:22.999094 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 07:55:22.999104 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 07:55:22.999114 systemd[1]: Reached target paths.target - Path Units.
Jul 11 07:55:22.999124 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 07:55:22.999134 systemd[1]: Reached target swap.target - Swaps.
Jul 11 07:55:22.999146 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 07:55:22.999156 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 07:55:22.999166 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 07:55:22.999179 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 11 07:55:22.999190 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 11 07:55:22.999200 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 07:55:22.999211 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 07:55:22.999221 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 07:55:22.999231 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 07:55:22.999241 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 11 07:55:22.999251 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 11 07:55:22.999262 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 11 07:55:22.999274 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 11 07:55:22.999284 systemd[1]: Starting systemd-fsck-usr.service... Jul 11 07:55:22.999296 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 11 07:55:22.999306 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 11 07:55:22.999342 systemd-journald[214]: Collecting audit messages is disabled. Jul 11 07:55:22.999370 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 07:55:22.999382 systemd-journald[214]: Journal started Jul 11 07:55:22.999408 systemd-journald[214]: Runtime Journal (/run/log/journal/9e3ab58896604e5e83f2df38d102fd8b) is 8M, max 78.5M, 70.5M free. Jul 11 07:55:23.006854 systemd[1]: Started systemd-journald.service - Journal Service. Jul 11 07:55:23.012585 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 11 07:55:23.016957 systemd-modules-load[216]: Inserted module 'overlay' Jul 11 07:55:23.018963 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 07:55:23.021097 systemd[1]: Finished systemd-fsck-usr.service. Jul 11 07:55:23.030057 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 11 07:55:23.034206 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 11 07:55:23.054043 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Jul 11 07:55:23.058681 systemd-modules-load[216]: Inserted module 'br_netfilter' Jul 11 07:55:23.102226 kernel: Bridge firewalling registered Jul 11 07:55:23.060169 systemd-tmpfiles[224]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 11 07:55:23.062919 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 11 07:55:23.103317 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 11 07:55:23.104796 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 07:55:23.106639 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 07:55:23.111918 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 11 07:55:23.120167 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 07:55:23.125515 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 11 07:55:23.132365 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 11 07:55:23.136946 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 11 07:55:23.147711 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 07:55:23.163305 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 07:55:23.166946 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
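The bridge message above is informational: on current kernels the `bridge` module no longer pulls in `br_netfilter`, so iptables/ip6tables do not see bridged traffic unless that module is loaded explicitly (which systemd-modules-load does here on the next line, per Flatcar's stock configuration). On a machine that needs this persistently, one conventional setup is a pair of drop-in files; the file names below follow the standard modules-load.d(5) and sysctl.d(5) conventions and are illustrative, not taken from this log:

```shell
# /etc/modules-load.d/br_netfilter.conf -- load the module at every boot
br_netfilter

# /etc/sysctl.d/99-bridge-nf.conf -- then opt bridged traffic into iptables
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```

Note that the sysctl keys only exist once `br_netfilter` is loaded, which is why the module file has to come first.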
Jul 11 07:55:23.190018 dracut-cmdline[253]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=dfe1af008de84ad21c9c6e2b52b45ca0aecff9e5872ea6ea8c4ddf6ebe77d5c1
Jul 11 07:55:23.213441 systemd-resolved[240]: Positive Trust Anchors:
Jul 11 07:55:23.213457 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 07:55:23.213498 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 07:55:23.220070 systemd-resolved[240]: Defaulting to hostname 'linux'.
Jul 11 07:55:23.221048 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 07:55:23.221948 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 07:55:23.283870 kernel: SCSI subsystem initialized
Jul 11 07:55:23.293854 kernel: Loading iSCSI transport class v2.0-870.
Jul 11 07:55:23.306882 kernel: iscsi: registered transport (tcp)
Jul 11 07:55:23.328906 kernel: iscsi: registered transport (qla4xxx)
Jul 11 07:55:23.328968 kernel: QLogic iSCSI HBA Driver
Jul 11 07:55:23.360550 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 11 07:55:23.372117 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 07:55:23.373436 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 11 07:55:23.449156 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 11 07:55:23.454508 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 11 07:55:23.513918 kernel: raid6: sse2x4 gen() 12755 MB/s
Jul 11 07:55:23.531878 kernel: raid6: sse2x2 gen() 14068 MB/s
Jul 11 07:55:23.550275 kernel: raid6: sse2x1 gen() 8699 MB/s
Jul 11 07:55:23.550356 kernel: raid6: using algorithm sse2x2 gen() 14068 MB/s
Jul 11 07:55:23.569285 kernel: raid6: .... xor() 9363 MB/s, rmw enabled
Jul 11 07:55:23.569353 kernel: raid6: using ssse3x2 recovery algorithm
Jul 11 07:55:23.590905 kernel: xor: measuring software checksum speed
Jul 11 07:55:23.593339 kernel: prefetch64-sse : 16992 MB/sec
Jul 11 07:55:23.593404 kernel: generic_sse : 16854 MB/sec
Jul 11 07:55:23.593435 kernel: xor: using function: prefetch64-sse (16992 MB/sec)
Jul 11 07:55:23.795873 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 11 07:55:23.805347 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 07:55:23.809897 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 07:55:23.866081 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jul 11 07:55:23.879224 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 07:55:23.886527 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 11 07:55:23.929960 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Jul 11 07:55:23.966534 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 07:55:23.971394 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 07:55:24.054157 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 07:55:24.060993 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 11 07:55:24.134831 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jul 11 07:55:24.144146 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jul 11 07:55:24.171598 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 11 07:55:24.171654 kernel: GPT:17805311 != 20971519
Jul 11 07:55:24.171666 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 11 07:55:24.171684 kernel: GPT:17805311 != 20971519
Jul 11 07:55:24.171695 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 11 07:55:24.171707 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 07:55:24.174489 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 07:55:24.175423 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 07:55:24.177660 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 07:55:24.182323 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 07:55:24.183169 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 11 07:55:24.192827 kernel: libata version 3.00 loaded.
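The GPT warnings above are the usual signature of an image written for a smaller disk and then grown (here the image expects a disk ending at LBA 17805311, while the Nova volume provides 20971520 sectors). GPT keeps a backup header in the disk's last LBA, so after a resize the stored backup location no longer matches the true end of the disk. The arithmetic, as a sketch using the sector counts from the log lines above:

```python
# GPT stores a backup header at the disk's last LBA. If the disk grows,
# the recorded backup location no longer matches and the kernel warns.
SECTOR = 512

disk_sectors = 20971520              # "[vda] 20971520 512-byte logical blocks"
stored_alt_lba = 17805311            # where the backup header currently sits
expected_alt_lba = disk_sectors - 1  # where it should sit: the final LBA

# This mismatch is exactly the logged "GPT:17805311 != 20971519".
assert stored_alt_lba != expected_alt_lba

grown_by = (expected_alt_lba - stored_alt_lba) * SECTOR
print(f"disk grew by roughly {grown_by / 2**30:.2f} GiB since the GPT was written")
```

Tools such as `sgdisk -e` (or GNU Parted, as the kernel message suggests) relocate the backup structures to the true end of the disk; on Flatcar the disk-uuid/Ignition machinery later in this log rewrites the headers itself, which is why the warning is harmless on first boot.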
Jul 11 07:55:24.195980 kernel: ata_piix 0000:00:01.1: version 2.13 Jul 11 07:55:24.198915 kernel: scsi host0: ata_piix Jul 11 07:55:24.201610 kernel: scsi host1: ata_piix Jul 11 07:55:24.201751 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 lpm-pol 0 Jul 11 07:55:24.205717 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 lpm-pol 0 Jul 11 07:55:24.216821 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 11 07:55:24.270391 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 11 07:55:24.283408 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 07:55:24.295306 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 11 07:55:24.304745 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 11 07:55:24.305359 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 11 07:55:24.316479 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 11 07:55:24.318919 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 11 07:55:24.352912 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 07:55:24.353335 disk-uuid[557]: Primary Header is updated. Jul 11 07:55:24.353335 disk-uuid[557]: Secondary Entries is updated. Jul 11 07:55:24.353335 disk-uuid[557]: Secondary Header is updated. Jul 11 07:55:24.503934 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 11 07:55:24.510379 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 11 07:55:24.510984 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jul 11 07:55:24.512220 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 11 07:55:24.514339 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 11 07:55:24.533956 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 11 07:55:25.390943 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 07:55:25.391995 disk-uuid[558]: The operation has completed successfully. Jul 11 07:55:25.468837 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 11 07:55:25.469704 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 11 07:55:25.511307 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 11 07:55:25.547642 sh[582]: Success Jul 11 07:55:25.598910 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 11 07:55:25.599000 kernel: device-mapper: uevent: version 1.0.3 Jul 11 07:55:25.602125 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 11 07:55:25.626888 kernel: device-mapper: verity: sha256 using shash "sha256-ssse3" Jul 11 07:55:25.696602 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 11 07:55:25.701975 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 11 07:55:25.704876 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 11 07:55:25.753246 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 11 07:55:25.753350 kernel: BTRFS: device fsid 5947ac9d-360e-47c3-9a17-c6b228910c06 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (594) Jul 11 07:55:25.767487 kernel: BTRFS info (device dm-0): first mount of filesystem 5947ac9d-360e-47c3-9a17-c6b228910c06 Jul 11 07:55:25.767553 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 11 07:55:25.774300 kernel: BTRFS info (device dm-0): using free-space-tree Jul 11 07:55:25.794379 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 11 07:55:25.796512 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 11 07:55:25.798545 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 11 07:55:25.801073 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 11 07:55:25.807081 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 11 07:55:25.854870 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (629) Jul 11 07:55:25.864311 kernel: BTRFS info (device vda6): first mount of filesystem da2de3c6-95dc-4a43-9a95-74c8b7ce9719 Jul 11 07:55:25.864337 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 07:55:25.864356 kernel: BTRFS info (device vda6): using free-space-tree Jul 11 07:55:25.875827 kernel: BTRFS info (device vda6): last unmount of filesystem da2de3c6-95dc-4a43-9a95-74c8b7ce9719 Jul 11 07:55:25.876583 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 11 07:55:25.878932 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 11 07:55:25.940761 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jul 11 07:55:25.944151 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 11 07:55:25.997628 systemd-networkd[764]: lo: Link UP Jul 11 07:55:25.997637 systemd-networkd[764]: lo: Gained carrier Jul 11 07:55:25.998744 systemd-networkd[764]: Enumeration completed Jul 11 07:55:25.999459 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 11 07:55:25.999893 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 07:55:25.999898 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 11 07:55:26.001098 systemd-networkd[764]: eth0: Link UP Jul 11 07:55:26.001102 systemd-networkd[764]: eth0: Gained carrier Jul 11 07:55:26.001111 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 07:55:26.001674 systemd[1]: Reached target network.target - Network. Jul 11 07:55:26.018882 systemd-networkd[764]: eth0: DHCPv4 address 172.24.4.10/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jul 11 07:55:26.089079 ignition[682]: Ignition 2.21.0 Jul 11 07:55:26.089876 ignition[682]: Stage: fetch-offline Jul 11 07:55:26.089926 ignition[682]: no configs at "/usr/lib/ignition/base.d" Jul 11 07:55:26.089936 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 11 07:55:26.092195 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 11 07:55:26.090018 ignition[682]: parsed url from cmdline: "" Jul 11 07:55:26.094191 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 11 07:55:26.090022 ignition[682]: no config URL provided Jul 11 07:55:26.090028 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" Jul 11 07:55:26.090035 ignition[682]: no config at "/usr/lib/ignition/user.ign" Jul 11 07:55:26.090039 ignition[682]: failed to fetch config: resource requires networking Jul 11 07:55:26.090219 ignition[682]: Ignition finished successfully Jul 11 07:55:26.139313 ignition[775]: Ignition 2.21.0 Jul 11 07:55:26.139341 ignition[775]: Stage: fetch Jul 11 07:55:26.142054 ignition[775]: no configs at "/usr/lib/ignition/base.d" Jul 11 07:55:26.142103 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 11 07:55:26.142355 ignition[775]: parsed url from cmdline: "" Jul 11 07:55:26.142372 ignition[775]: no config URL provided Jul 11 07:55:26.142383 ignition[775]: reading system config file "/usr/lib/ignition/user.ign" Jul 11 07:55:26.142397 ignition[775]: no config at "/usr/lib/ignition/user.ign" Jul 11 07:55:26.142551 ignition[775]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jul 11 07:55:26.145284 ignition[775]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jul 11 07:55:26.145317 ignition[775]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Jul 11 07:55:26.557525 ignition[775]: GET result: OK Jul 11 07:55:26.557714 ignition[775]: parsing config with SHA512: 8fdb8625f2cc2817731bb6962f8e83b0f00b15d0c43b0523eb1580563813d66c1ef01f073236aaff5e301543ce869df235e3bf0b72a23327ebc71a2ab0f435c4 Jul 11 07:55:26.569202 unknown[775]: fetched base config from "system" Jul 11 07:55:26.569236 unknown[775]: fetched base config from "system" Jul 11 07:55:26.569250 unknown[775]: fetched user config from "openstack" Jul 11 07:55:26.570034 ignition[775]: fetch: fetch complete Jul 11 07:55:26.570047 ignition[775]: fetch: fetch passed Jul 11 07:55:26.570146 ignition[775]: Ignition finished successfully Jul 11 07:55:26.574999 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 11 07:55:26.579396 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 11 07:55:26.636734 ignition[782]: Ignition 2.21.0 Jul 11 07:55:26.636772 ignition[782]: Stage: kargs Jul 11 07:55:26.637208 ignition[782]: no configs at "/usr/lib/ignition/base.d" Jul 11 07:55:26.637233 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 11 07:55:26.643754 ignition[782]: kargs: kargs passed Jul 11 07:55:26.643946 ignition[782]: Ignition finished successfully Jul 11 07:55:26.646989 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 11 07:55:26.654507 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 11 07:55:26.723574 ignition[788]: Ignition 2.21.0 Jul 11 07:55:26.723594 ignition[788]: Stage: disks Jul 11 07:55:26.723950 ignition[788]: no configs at "/usr/lib/ignition/base.d" Jul 11 07:55:26.727380 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 11 07:55:26.723965 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 11 07:55:26.729680 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Jul 11 07:55:26.725344 ignition[788]: disks: disks passed Jul 11 07:55:26.731887 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 11 07:55:26.725433 ignition[788]: Ignition finished successfully Jul 11 07:55:26.733875 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 11 07:55:26.736045 systemd[1]: Reached target sysinit.target - System Initialization. Jul 11 07:55:26.738200 systemd[1]: Reached target basic.target - Basic System. Jul 11 07:55:26.743066 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 11 07:55:26.808780 systemd-fsck[797]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jul 11 07:55:26.824163 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 11 07:55:26.831104 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 11 07:55:27.060882 kernel: EXT4-fs (vda9): mounted filesystem 68e263c6-913a-4fa8-894f-6e89b186e148 r/w with ordered data mode. Quota mode: none. Jul 11 07:55:27.064800 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 11 07:55:27.068483 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 11 07:55:27.073291 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 11 07:55:27.078057 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 11 07:55:27.081713 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 11 07:55:27.089185 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jul 11 07:55:27.100193 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 11 07:55:27.100347 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Jul 11 07:55:27.144386 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 11 07:55:27.149916 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 11 07:55:27.162849 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (805) Jul 11 07:55:27.165831 kernel: BTRFS info (device vda6): first mount of filesystem da2de3c6-95dc-4a43-9a95-74c8b7ce9719 Jul 11 07:55:27.171266 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 07:55:27.171309 kernel: BTRFS info (device vda6): using free-space-tree Jul 11 07:55:27.195667 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 11 07:55:27.250879 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 11 07:55:27.258271 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Jul 11 07:55:27.266022 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory Jul 11 07:55:27.275430 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory Jul 11 07:55:27.282990 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory Jul 11 07:55:27.422736 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 11 07:55:27.427589 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 11 07:55:27.430002 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 11 07:55:27.460372 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 11 07:55:27.467704 kernel: BTRFS info (device vda6): last unmount of filesystem da2de3c6-95dc-4a43-9a95-74c8b7ce9719 Jul 11 07:55:27.502168 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 11 07:55:27.508856 ignition[923]: INFO : Ignition 2.21.0 Jul 11 07:55:27.510843 ignition[923]: INFO : Stage: mount Jul 11 07:55:27.510843 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 07:55:27.510843 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 11 07:55:27.512712 ignition[923]: INFO : mount: mount passed Jul 11 07:55:27.512712 ignition[923]: INFO : Ignition finished successfully Jul 11 07:55:27.513129 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 11 07:55:28.010916 systemd-networkd[764]: eth0: Gained IPv6LL Jul 11 07:55:28.296885 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 11 07:55:30.320877 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 11 07:55:34.337889 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 11 07:55:34.351414 coreos-metadata[807]: Jul 11 07:55:34.351 WARN failed to locate config-drive, using the metadata service API instead Jul 11 07:55:34.397508 coreos-metadata[807]: Jul 11 07:55:34.397 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 11 07:55:34.414018 coreos-metadata[807]: Jul 11 07:55:34.413 INFO Fetch successful Jul 11 07:55:34.414018 coreos-metadata[807]: Jul 11 07:55:34.413 INFO wrote hostname ci-4392-0-0-n-91c7dbf1fc.novalocal to /sysroot/etc/hostname Jul 11 07:55:34.417865 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jul 11 07:55:34.418141 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jul 11 07:55:34.427396 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 11 07:55:34.462168 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 11 07:55:34.500881 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (941) Jul 11 07:55:34.508305 kernel: BTRFS info (device vda6): first mount of filesystem da2de3c6-95dc-4a43-9a95-74c8b7ce9719 Jul 11 07:55:34.508375 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 07:55:34.512674 kernel: BTRFS info (device vda6): using free-space-tree Jul 11 07:55:34.527053 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 11 07:55:34.590859 ignition[959]: INFO : Ignition 2.21.0 Jul 11 07:55:34.590859 ignition[959]: INFO : Stage: files Jul 11 07:55:34.594280 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 07:55:34.594280 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 11 07:55:34.594280 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Jul 11 07:55:34.594280 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 11 07:55:34.594280 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 11 07:55:34.603530 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 11 07:55:34.603530 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 11 07:55:34.603530 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 11 07:55:34.603530 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 11 07:55:34.603530 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 11 07:55:34.596574 unknown[959]: wrote ssh authorized keys file for user: core Jul 11 07:55:35.398794 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): 
GET result: OK Jul 11 07:55:40.073654 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 11 07:55:40.083276 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 11 07:55:40.083276 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 11 07:55:40.083276 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 11 07:55:40.083276 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 11 07:55:40.083276 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 07:55:40.083276 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 07:55:40.083276 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 07:55:40.083276 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 07:55:40.102621 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 07:55:40.102621 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 07:55:40.102621 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 11 07:55:40.102621 ignition[959]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 11 07:55:40.102621 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 11 07:55:40.102621 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 11 07:55:40.903993 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 11 07:55:42.748894 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 11 07:55:42.748894 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 11 07:55:42.754259 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 07:55:42.765311 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 07:55:42.765311 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 11 07:55:42.765311 ignition[959]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 11 07:55:42.765311 ignition[959]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 11 07:55:42.777159 ignition[959]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 11 07:55:42.777159 ignition[959]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 11 07:55:42.777159 ignition[959]: INFO : files: files passed 
Jul 11 07:55:42.777159 ignition[959]: INFO : Ignition finished successfully Jul 11 07:55:42.775399 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 11 07:55:42.787061 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 11 07:55:42.791300 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 11 07:55:42.824459 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 11 07:55:42.824662 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 11 07:55:42.839787 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 11 07:55:42.839787 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 11 07:55:42.844358 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 11 07:55:42.847486 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 11 07:55:42.852491 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 11 07:55:42.858342 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 11 07:55:42.912758 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 11 07:55:42.912884 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 11 07:55:42.915304 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 11 07:55:42.915911 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 11 07:55:42.917926 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 11 07:55:42.919927 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Jul 11 07:55:42.947730 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 07:55:42.952032 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 11 07:55:42.989456 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 11 07:55:42.991227 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 07:55:42.994525 systemd[1]: Stopped target timers.target - Timer Units.
Jul 11 07:55:42.997614 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 11 07:55:42.998179 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 07:55:43.000938 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 11 07:55:43.002779 systemd[1]: Stopped target basic.target - Basic System.
Jul 11 07:55:43.005965 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 11 07:55:43.008562 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 07:55:43.011431 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 11 07:55:43.014351 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 11 07:55:43.017652 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 11 07:55:43.020507 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 07:55:43.023914 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 11 07:55:43.026728 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 11 07:55:43.030002 systemd[1]: Stopped target swap.target - Swaps.
Jul 11 07:55:43.032596 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 11 07:55:43.033161 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 07:55:43.036147 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 11 07:55:43.038207 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 07:55:43.040624 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 11 07:55:43.041043 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 07:55:43.043746 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 11 07:55:43.044287 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 11 07:55:43.048043 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 11 07:55:43.048376 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 07:55:43.050352 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 11 07:55:43.050649 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 11 07:55:43.056314 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 11 07:55:43.058971 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 11 07:55:43.061139 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 07:55:43.070205 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 11 07:55:43.072622 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 11 07:55:43.072877 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 07:55:43.074672 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 11 07:55:43.074897 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 07:55:43.088103 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 11 07:55:43.089866 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 11 07:55:43.119833 ignition[1012]: INFO : Ignition 2.21.0
Jul 11 07:55:43.119833 ignition[1012]: INFO : Stage: umount
Jul 11 07:55:43.122975 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 07:55:43.122975 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 11 07:55:43.122975 ignition[1012]: INFO : umount: umount passed
Jul 11 07:55:43.122975 ignition[1012]: INFO : Ignition finished successfully
Jul 11 07:55:43.123550 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 11 07:55:43.123703 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 11 07:55:43.127122 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 11 07:55:43.127727 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 11 07:55:43.128837 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 11 07:55:43.129501 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 11 07:55:43.129553 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 11 07:55:43.131939 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 11 07:55:43.131999 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 11 07:55:43.133055 systemd[1]: Stopped target network.target - Network.
Jul 11 07:55:43.134119 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 11 07:55:43.134204 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 07:55:43.135170 systemd[1]: Stopped target paths.target - Path Units.
Jul 11 07:55:43.136062 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 11 07:55:43.139860 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 07:55:43.140458 systemd[1]: Stopped target slices.target - Slice Units.
Jul 11 07:55:43.141728 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 11 07:55:43.142774 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 11 07:55:43.142851 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 07:55:43.143833 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 11 07:55:43.143883 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 07:55:43.144841 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 11 07:55:43.144899 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 11 07:55:43.145905 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 11 07:55:43.145957 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 11 07:55:43.147106 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 11 07:55:43.148332 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 11 07:55:43.151058 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 11 07:55:43.151166 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 11 07:55:43.152201 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 11 07:55:43.152280 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 11 07:55:43.162636 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 11 07:55:43.163923 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 11 07:55:43.167845 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 11 07:55:43.168127 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 11 07:55:43.168349 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 11 07:55:43.172474 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 11 07:55:43.174161 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 11 07:55:43.174758 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 11 07:55:43.174881 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 07:55:43.176903 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 11 07:55:43.179125 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 11 07:55:43.179178 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 07:55:43.180267 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 11 07:55:43.180322 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 11 07:55:43.182949 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 11 07:55:43.183009 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 11 07:55:43.184149 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 11 07:55:43.184210 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 07:55:43.186734 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 07:55:43.191107 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 11 07:55:43.191183 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 11 07:55:43.196322 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 11 07:55:43.197112 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 07:55:43.199084 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 11 07:55:43.199890 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 11 07:55:43.200429 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 11 07:55:43.200463 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 07:55:43.202244 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 11 07:55:43.202318 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 07:55:43.203921 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 11 07:55:43.203982 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 11 07:55:43.205010 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 11 07:55:43.205058 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 07:55:43.207127 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 11 07:55:43.210994 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 11 07:55:43.211054 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 07:55:43.213222 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 11 07:55:43.213273 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 07:55:43.214895 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 07:55:43.214972 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 07:55:43.218435 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 11 07:55:43.218497 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 11 07:55:43.218545 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 11 07:55:43.218955 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 11 07:55:43.222640 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 11 07:55:43.228569 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 11 07:55:43.228701 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 11 07:55:43.230326 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 11 07:55:43.232260 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 11 07:55:43.250587 systemd[1]: Switching root.
Jul 11 07:55:43.290936 systemd-journald[214]: Journal stopped
Jul 11 07:55:45.223163 systemd-journald[214]: Received SIGTERM from PID 1 (systemd).
Jul 11 07:55:45.223318 kernel: SELinux: policy capability network_peer_controls=1
Jul 11 07:55:45.223358 kernel: SELinux: policy capability open_perms=1
Jul 11 07:55:45.223371 kernel: SELinux: policy capability extended_socket_class=1
Jul 11 07:55:45.223384 kernel: SELinux: policy capability always_check_network=0
Jul 11 07:55:45.223396 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 11 07:55:45.223431 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 11 07:55:45.223444 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 11 07:55:45.223457 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 11 07:55:45.223468 kernel: SELinux: policy capability userspace_initial_context=0
Jul 11 07:55:45.223481 systemd[1]: Successfully loaded SELinux policy in 103.618ms.
Jul 11 07:55:45.223513 kernel: audit: type=1403 audit(1752220544.007:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 11 07:55:45.223526 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.806ms.
Jul 11 07:55:45.223541 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 11 07:55:45.223577 systemd[1]: Detected virtualization kvm.
Jul 11 07:55:45.223591 systemd[1]: Detected architecture x86-64.
Jul 11 07:55:45.223603 systemd[1]: Detected first boot.
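The journal blackout across switch-root is visible above: journald in the initrd stops at 07:55:43.290936 and the next journald message carries 07:55:45.223163. That gap can be computed directly from the syslog-style timestamps; a small sketch (the year is assumed, since these timestamps omit it):

```python
from datetime import datetime

def parse_ts(entry: str, year: int = 2025) -> datetime:
    # Parse the "Jul 11 07:55:43.290936" prefix of a journal line.
    # The year is an assumption: short-precise timestamps carry none.
    stamp = " ".join(entry.split()[:3])
    return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")

stopped = parse_ts("Jul 11 07:55:43.290936 systemd-journald[214]: Journal stopped")
resumed = parse_ts("Jul 11 07:55:45.223163 systemd-journald[214]: Received SIGTERM from PID 1 (systemd).")
gap = (resumed - stopped).total_seconds()
print(f"journal blackout across switch-root: {gap:.6f}s")  # ~1.93s
```

Entries from that window are buffered by the kernel and PID 1 and flushed once the real-root journald starts, which is why later entries appear with earlier timestamps.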
Jul 11 07:55:45.223616 systemd[1]: Hostname set to .
Jul 11 07:55:45.223629 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 07:55:45.223648 zram_generator::config[1055]: No configuration found.
Jul 11 07:55:45.223662 kernel: Guest personality initialized and is inactive
Jul 11 07:55:45.223673 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 11 07:55:45.223708 kernel: Initialized host personality
Jul 11 07:55:45.223722 kernel: NET: Registered PF_VSOCK protocol family
Jul 11 07:55:45.223734 systemd[1]: Populated /etc with preset unit settings.
Jul 11 07:55:45.223748 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 11 07:55:45.223761 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 11 07:55:45.223773 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 11 07:55:45.225843 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 11 07:55:45.225862 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 11 07:55:45.225875 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 11 07:55:45.225918 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 11 07:55:45.225932 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 11 07:55:45.225945 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 11 07:55:45.225958 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 11 07:55:45.225971 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 11 07:55:45.225983 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 11 07:55:45.225996 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 07:55:45.226009 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 07:55:45.226029 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 11 07:55:45.226051 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 11 07:55:45.226064 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 11 07:55:45.226077 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 07:55:45.226089 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 11 07:55:45.226102 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 07:55:45.226121 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 07:55:45.226142 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 11 07:55:45.226156 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 11 07:55:45.226169 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 11 07:55:45.226182 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 11 07:55:45.226194 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 07:55:45.226208 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 07:55:45.226221 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 07:55:45.226265 systemd[1]: Reached target swap.target - Swaps.
Jul 11 07:55:45.226279 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 11 07:55:45.226314 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 11 07:55:45.226329 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 11 07:55:45.226341 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 07:55:45.226363 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 07:55:45.226376 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 07:55:45.226388 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 11 07:55:45.226401 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 11 07:55:45.226414 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 11 07:55:45.226427 systemd[1]: Mounting media.mount - External Media Directory...
Jul 11 07:55:45.226460 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 07:55:45.226474 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 11 07:55:45.226487 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 11 07:55:45.226500 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 11 07:55:45.226513 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 11 07:55:45.226526 systemd[1]: Reached target machines.target - Containers.
Jul 11 07:55:45.226538 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 11 07:55:45.226551 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 07:55:45.226584 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 07:55:45.226597 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 11 07:55:45.226610 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 07:55:45.226622 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 07:55:45.226635 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 07:55:45.226649 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 11 07:55:45.226662 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 07:55:45.226675 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 11 07:55:45.226688 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 11 07:55:45.226720 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 11 07:55:45.226734 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 11 07:55:45.226747 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 11 07:55:45.226774 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 07:55:45.226791 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 07:55:45.226821 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 07:55:45.226835 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 11 07:55:45.226848 kernel: fuse: init (API version 7.41)
Jul 11 07:55:45.226860 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 11 07:55:45.226900 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 11 07:55:45.226939 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 07:55:45.226952 kernel: ACPI: bus type drm_connector registered
Jul 11 07:55:45.226965 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 11 07:55:45.227006 systemd[1]: Stopped verity-setup.service.
Jul 11 07:55:45.227040 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 07:55:45.227055 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 11 07:55:45.227090 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 11 07:55:45.227103 kernel: loop: module loaded
Jul 11 07:55:45.227115 systemd[1]: Mounted media.mount - External Media Directory.
Jul 11 07:55:45.227149 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 11 07:55:45.227162 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 11 07:55:45.227176 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 11 07:55:45.227189 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 07:55:45.227202 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 11 07:55:45.227215 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 11 07:55:45.227228 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 07:55:45.227241 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 07:55:45.227278 systemd-journald[1138]: Collecting audit messages is disabled.
Jul 11 07:55:45.227352 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 07:55:45.227368 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 07:55:45.227381 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 07:55:45.227395 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 07:55:45.227409 systemd-journald[1138]: Journal started
Jul 11 07:55:45.227455 systemd-journald[1138]: Runtime Journal (/run/log/journal/9e3ab58896604e5e83f2df38d102fd8b) is 8M, max 78.5M, 70.5M free.
Jul 11 07:55:44.819514 systemd[1]: Queued start job for default target multi-user.target.
Jul 11 07:55:44.833074 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 11 07:55:44.833794 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 11 07:55:45.233957 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 07:55:45.236580 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 11 07:55:45.236906 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 11 07:55:45.237747 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 07:55:45.238037 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 07:55:45.238986 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 07:55:45.239917 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 07:55:45.240772 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 11 07:55:45.241751 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 11 07:55:45.259140 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 11 07:55:45.263048 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 11 07:55:45.265647 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 11 07:55:45.267160 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 11 07:55:45.267191 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 07:55:45.271452 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 11 07:55:45.278953 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 11 07:55:45.280203 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 07:55:45.283033 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 11 07:55:45.287040 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 11 07:55:45.287630 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 07:55:45.293124 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 11 07:55:45.294519 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 07:55:45.295990 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 07:55:45.299391 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 11 07:55:45.302967 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 11 07:55:45.304223 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 07:55:45.305216 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 11 07:55:45.306362 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 11 07:55:45.316975 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 11 07:55:45.327083 systemd-journald[1138]: Time spent on flushing to /var/log/journal/9e3ab58896604e5e83f2df38d102fd8b is 44.920ms for 972 entries.
Jul 11 07:55:45.327083 systemd-journald[1138]: System Journal (/var/log/journal/9e3ab58896604e5e83f2df38d102fd8b) is 8M, max 584.8M, 576.8M free.
Jul 11 07:55:45.436009 systemd-journald[1138]: Received client request to flush runtime journal.
Jul 11 07:55:45.436120 kernel: loop0: detected capacity change from 0 to 114000
Jul 11 07:55:45.344567 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 11 07:55:45.345478 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 11 07:55:45.349995 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 11 07:55:45.389875 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 07:55:45.439566 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 11 07:55:45.465862 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 11 07:55:45.474880 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 11 07:55:45.477548 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 11 07:55:45.483988 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 07:55:45.507840 kernel: loop1: detected capacity change from 0 to 221472
Jul 11 07:55:45.539191 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Jul 11 07:55:45.539211 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Jul 11 07:55:45.546494 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
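The journald flush message above quotes 44.920 ms spent flushing 972 entries to the persistent journal; the implied per-entry cost follows directly (a back-of-the-envelope figure, not something journald reports itself):

```python
# Figures quoted from the journald flush message above.
flush_ms = 44.920
entries = 972
per_entry_us = flush_ms * 1000.0 / entries  # milliseconds -> microseconds
print(f"~{per_entry_us:.1f} us per flushed journal entry")
```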
Jul 11 07:55:45.567848 kernel: loop2: detected capacity change from 0 to 146488
Jul 11 07:55:45.628880 kernel: loop3: detected capacity change from 0 to 8
Jul 11 07:55:45.660842 kernel: loop4: detected capacity change from 0 to 114000
Jul 11 07:55:45.694184 kernel: loop5: detected capacity change from 0 to 221472
Jul 11 07:55:45.730842 kernel: loop6: detected capacity change from 0 to 146488
Jul 11 07:55:45.786973 kernel: loop7: detected capacity change from 0 to 8
Jul 11 07:55:45.799937 (sd-merge)[1217]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jul 11 07:55:45.802434 (sd-merge)[1217]: Merged extensions into '/usr'.
Jul 11 07:55:45.815425 systemd[1]: Reload requested from client PID 1192 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 11 07:55:45.815443 systemd[1]: Reloading...
Jul 11 07:55:45.932985 zram_generator::config[1244]: No configuration found.
Jul 11 07:55:46.182075 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 07:55:46.301180 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 11 07:55:46.302090 systemd[1]: Reloading finished in 486 ms.
Jul 11 07:55:46.318720 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 11 07:55:46.331944 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 07:55:46.332861 ldconfig[1187]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 11 07:55:46.341784 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 11 07:55:46.343655 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 11 07:55:46.346397 systemd[1]: Starting ensure-sysext.service...
Jul 11 07:55:46.351786 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 07:55:46.374963 systemd[1]: Reload requested from client PID 1303 ('systemctl') (unit ensure-sysext.service)...
Jul 11 07:55:46.374980 systemd[1]: Reloading...
Jul 11 07:55:46.383204 systemd-udevd[1300]: Using default interface naming scheme 'v255'.
Jul 11 07:55:46.390973 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 11 07:55:46.391077 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 11 07:55:46.391399 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 11 07:55:46.391676 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 11 07:55:46.392513 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 11 07:55:46.392834 systemd-tmpfiles[1304]: ACLs are not supported, ignoring.
Jul 11 07:55:46.392901 systemd-tmpfiles[1304]: ACLs are not supported, ignoring.
Jul 11 07:55:46.400410 systemd-tmpfiles[1304]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 07:55:46.400420 systemd-tmpfiles[1304]: Skipping /boot
Jul 11 07:55:46.414995 systemd-tmpfiles[1304]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 07:55:46.415008 systemd-tmpfiles[1304]: Skipping /boot
Jul 11 07:55:46.519849 zram_generator::config[1360]: No configuration found.
Jul 11 07:55:46.718895 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 07:55:46.719931 kernel: mousedev: PS/2 mouse device common for all mice
Jul 11 07:55:46.766972 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 11 07:55:46.770829 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jul 11 07:55:46.776895 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 11 07:55:46.782875 kernel: ACPI: button: Power Button [PWRF]
Jul 11 07:55:46.878021 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 07:55:46.879029 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 11 07:55:46.879245 systemd[1]: Reloading finished in 503 ms.
Jul 11 07:55:46.888526 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 07:55:46.898193 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 07:55:46.948329 systemd[1]: Finished ensure-sysext.service.
Jul 11 07:55:46.953756 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 07:55:46.954915 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 11 07:55:46.960741 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 11 07:55:46.961540 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 07:55:46.963772 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 07:55:46.969027 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 07:55:46.971678 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 07:55:46.978479 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 07:55:46.979294 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 07:55:46.982195 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 11 07:55:46.983912 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 07:55:46.985269 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 11 07:55:46.996079 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 07:55:47.000612 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 07:55:47.009088 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 11 07:55:47.013044 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 11 07:55:47.019127 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 07:55:47.019885 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 07:55:47.038448 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 11 07:55:47.044136 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 07:55:47.044337 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 07:55:47.045029 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 07:55:47.052081 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 07:55:47.052289 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 07:55:47.069425 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 07:55:47.069621 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 07:55:47.070517 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 07:55:47.070706 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 07:55:47.071466 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 07:55:47.082330 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jul 11 07:55:47.082415 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jul 11 07:55:47.084739 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 11 07:55:47.096265 kernel: Console: switching to colour dummy device 80x25
Jul 11 07:55:47.096342 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jul 11 07:55:47.096377 kernel: [drm] features: -context_init
Jul 11 07:55:47.096396 kernel: [drm] number of scanouts: 1
Jul 11 07:55:47.096416 kernel: [drm] number of cap sets: 0
Jul 11 07:55:47.100345 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jul 11 07:55:47.105369 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 11 07:55:47.113919 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 11 07:55:47.116425 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 11 07:55:47.136959 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 07:55:47.138914 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 07:55:47.142635 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 11 07:55:47.147147 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 07:55:47.148315 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 11 07:55:47.152315 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 11 07:55:47.166776 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 11 07:55:47.169524 augenrules[1488]: No rules
Jul 11 07:55:47.171255 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 11 07:55:47.171481 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 11 07:55:47.186846 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 11 07:55:47.283691 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 07:55:47.284596 systemd-networkd[1448]: lo: Link UP
Jul 11 07:55:47.285035 systemd-networkd[1448]: lo: Gained carrier
Jul 11 07:55:47.286606 systemd-networkd[1448]: Enumeration completed
Jul 11 07:55:47.286766 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 07:55:47.288626 systemd-networkd[1448]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 07:55:47.288707 systemd-networkd[1448]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 07:55:47.290940 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 11 07:55:47.292431 systemd-networkd[1448]: eth0: Link UP
Jul 11 07:55:47.292574 systemd-networkd[1448]: eth0: Gained carrier
Jul 11 07:55:47.292594 systemd-networkd[1448]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 07:55:47.293204 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 11 07:55:47.300910 systemd-networkd[1448]: eth0: DHCPv4 address 172.24.4.10/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jul 11 07:55:47.312621 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 11 07:55:47.312884 systemd[1]: Reached target time-set.target - System Time Set.
Jul 11 07:55:47.323339 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 11 07:55:47.325624 systemd-resolved[1452]: Positive Trust Anchors:
Jul 11 07:55:47.325648 systemd-resolved[1452]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 07:55:47.325692 systemd-resolved[1452]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 07:55:47.331893 systemd-resolved[1452]: Using system hostname 'ci-4392-0-0-n-91c7dbf1fc.novalocal'.
Jul 11 07:55:47.333468 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 07:55:47.333659 systemd[1]: Reached target network.target - Network.
Jul 11 07:55:47.333800 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 07:55:47.333903 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 07:55:47.334037 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 11 07:55:47.334117 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 11 07:55:47.334195 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 11 07:55:47.334452 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 11 07:55:47.334588 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 11 07:55:47.334647 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 11 07:55:47.334702 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 11 07:55:47.334729 systemd[1]: Reached target paths.target - Path Units.
Jul 11 07:55:47.334777 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 07:55:47.336842 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 11 07:55:47.338248 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 11 07:55:47.340789 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 11 07:55:47.341024 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 11 07:55:47.341125 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 11 07:55:47.343593 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 11 07:55:47.343969 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 11 07:55:47.344745 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 11 07:55:47.345551 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 07:55:47.345630 systemd[1]: Reached target basic.target - Basic System.
Jul 11 07:55:47.345744 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 11 07:55:47.345777 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 11 07:55:47.346768 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 11 07:55:47.348039 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 11 07:55:47.351719 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 11 07:55:47.356502 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 11 07:55:47.359009 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 11 07:55:47.364016 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 11 07:55:47.364129 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 11 07:55:47.368658 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 11 07:55:47.368859 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 11 07:55:47.372561 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 11 07:55:47.374896 jq[1518]: false
Jul 11 07:55:47.377062 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 11 07:55:47.380892 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 11 07:55:47.387008 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 11 07:55:47.395774 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 11 07:55:47.397027 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 11 07:55:47.398525 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 11 07:55:47.401246 systemd[1]: Starting update-engine.service - Update Engine...
Jul 11 07:55:47.407066 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 11 07:55:47.417734 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 11 07:55:47.418261 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 11 07:55:47.418459 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 11 07:55:47.423447 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Refreshing passwd entry cache
Jul 11 07:55:47.421661 oslogin_cache_refresh[1521]: Refreshing passwd entry cache
Jul 11 07:55:47.437753 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 11 07:55:47.438090 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 11 07:55:47.440104 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Failure getting users, quitting
Jul 11 07:55:47.440169 oslogin_cache_refresh[1521]: Failure getting users, quitting
Jul 11 07:55:47.440412 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 11 07:55:47.440412 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Refreshing group entry cache
Jul 11 07:55:47.440195 oslogin_cache_refresh[1521]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 11 07:55:47.440240 oslogin_cache_refresh[1521]: Refreshing group entry cache
Jul 11 07:55:47.446607 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Failure getting groups, quitting
Jul 11 07:55:47.447862 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 11 07:55:47.447288 oslogin_cache_refresh[1521]: Failure getting groups, quitting
Jul 11 07:55:47.447305 oslogin_cache_refresh[1521]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 11 07:55:47.453136 update_engine[1531]: I20250711 07:55:47.449745 1531 main.cc:92] Flatcar Update Engine starting
Jul 11 07:55:47.452653 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 11 07:55:47.452915 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 11 07:55:47.454078 systemd[1]: motdgen.service: Deactivated successfully.
Jul 11 07:55:47.454293 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 11 07:55:47.457411 jq[1532]: true
Jul 11 07:55:47.459668 extend-filesystems[1519]: Found /dev/vda6
Jul 11 07:55:47.471126 extend-filesystems[1519]: Found /dev/vda9
Jul 11 07:55:47.479081 extend-filesystems[1519]: Checking size of /dev/vda9
Jul 11 07:55:47.481408 tar[1539]: linux-amd64/helm
Jul 11 07:55:47.484305 (ntainerd)[1555]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 11 07:55:47.914973 systemd-timesyncd[1455]: Contacted time server 15.204.87.223:123 (0.flatcar.pool.ntp.org).
Jul 11 07:55:47.915025 systemd-timesyncd[1455]: Initial clock synchronization to Fri 2025-07-11 07:55:47.914851 UTC.
Jul 11 07:55:47.915098 systemd-resolved[1452]: Clock change detected. Flushing caches.
Jul 11 07:55:47.916155 jq[1552]: true
Jul 11 07:55:47.942782 dbus-daemon[1516]: [system] SELinux support is enabled
Jul 11 07:55:47.943197 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 11 07:55:47.946710 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 11 07:55:47.949250 extend-filesystems[1519]: Resized partition /dev/vda9
Jul 11 07:55:47.946748 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 11 07:55:47.947157 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 11 07:55:47.947175 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 11 07:55:47.959126 extend-filesystems[1564]: resize2fs 1.47.2 (1-Jan-2025)
Jul 11 07:55:47.962583 systemd[1]: Started update-engine.service - Update Engine.
Jul 11 07:55:47.965853 update_engine[1531]: I20250711 07:55:47.963253 1531 update_check_scheduler.cc:74] Next update check in 2m34s
Jul 11 07:55:47.970355 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 11 07:55:47.972219 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Jul 11 07:55:47.982316 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Jul 11 07:55:48.015108 systemd-logind[1529]: New seat seat0.
Jul 11 07:55:48.026050 systemd-logind[1529]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 11 07:55:48.026099 systemd-logind[1529]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 11 07:55:48.026520 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 11 07:55:48.029889 extend-filesystems[1564]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 11 07:55:48.029889 extend-filesystems[1564]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 11 07:55:48.029889 extend-filesystems[1564]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Jul 11 07:55:48.033709 extend-filesystems[1519]: Resized filesystem in /dev/vda9
Jul 11 07:55:48.035185 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 11 07:55:48.035457 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 11 07:55:48.089214 bash[1579]: Updated "/home/core/.ssh/authorized_keys"
Jul 11 07:55:48.089128 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 11 07:55:48.111479 systemd[1]: Starting sshkeys.service...
Jul 11 07:55:48.210159 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 11 07:55:48.212019 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 11 07:55:48.233127 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 11 07:55:48.241316 locksmithd[1566]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 11 07:55:48.410017 sshd_keygen[1557]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 11 07:55:48.419953 containerd[1555]: time="2025-07-11T07:55:48Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 11 07:55:48.422526 containerd[1555]: time="2025-07-11T07:55:48.422469324Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Jul 11 07:55:48.446538 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 11 07:55:48.450633 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 11 07:55:48.454042 containerd[1555]: time="2025-07-11T07:55:48.454003747Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.303µs"
Jul 11 07:55:48.454214 containerd[1555]: time="2025-07-11T07:55:48.454193894Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 11 07:55:48.454293 containerd[1555]: time="2025-07-11T07:55:48.454276078Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 11 07:55:48.454509 containerd[1555]: time="2025-07-11T07:55:48.454489258Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 11 07:55:48.454588 containerd[1555]: time="2025-07-11T07:55:48.454572083Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 11 07:55:48.454668 containerd[1555]: time="2025-07-11T07:55:48.454652674Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 11 07:55:48.454865 containerd[1555]: time="2025-07-11T07:55:48.454842630Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 11 07:55:48.454938 containerd[1555]: time="2025-07-11T07:55:48.454922350Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 11 07:55:48.455265 containerd[1555]: time="2025-07-11T07:55:48.455239785Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 11 07:55:48.455330 containerd[1555]: time="2025-07-11T07:55:48.455315738Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 11 07:55:48.455392 containerd[1555]: time="2025-07-11T07:55:48.455376381Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 11 07:55:48.455448 containerd[1555]: time="2025-07-11T07:55:48.455434731Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 11 07:55:48.455598 containerd[1555]: time="2025-07-11T07:55:48.455578641Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 11 07:55:48.455871 containerd[1555]: time="2025-07-11T07:55:48.455850280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 11 07:55:48.455957 containerd[1555]: time="2025-07-11T07:55:48.455938896Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 11 07:55:48.456016 containerd[1555]: time="2025-07-11T07:55:48.456002305Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 11 07:55:48.456128 containerd[1555]: time="2025-07-11T07:55:48.456108414Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 11 07:55:48.456495 containerd[1555]: time="2025-07-11T07:55:48.456473098Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 11 07:55:48.456690 containerd[1555]: time="2025-07-11T07:55:48.456670929Z" level=info msg="metadata content store policy set" policy=shared
Jul 11 07:55:48.464687 containerd[1555]: time="2025-07-11T07:55:48.464621120Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 11 07:55:48.465107 containerd[1555]: time="2025-07-11T07:55:48.464793483Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 11 07:55:48.465107 containerd[1555]: time="2025-07-11T07:55:48.464822277Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 11 07:55:48.465107 containerd[1555]: time="2025-07-11T07:55:48.464835302Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 11 07:55:48.465107 containerd[1555]: time="2025-07-11T07:55:48.464849067Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 11 07:55:48.465107 containerd[1555]: time="2025-07-11T07:55:48.464875497Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 11 07:55:48.465107 containerd[1555]: time="2025-07-11T07:55:48.464904852Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 11 07:55:48.465107 containerd[1555]: time="2025-07-11T07:55:48.464919550Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 11 07:55:48.465107 containerd[1555]: time="2025-07-11T07:55:48.464931903Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 11 07:55:48.465107 containerd[1555]: time="2025-07-11T07:55:48.464949556Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 11 07:55:48.465107 containerd[1555]: time="2025-07-11T07:55:48.464961518Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 11 07:55:48.465107 containerd[1555]: time="2025-07-11T07:55:48.464974673Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 11 07:55:48.466428 containerd[1555]: time="2025-07-11T07:55:48.466356625Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 11 07:55:48.466483 containerd[1555]: time="2025-07-11T07:55:48.466447966Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 11 07:55:48.466512 containerd[1555]: time="2025-07-11T07:55:48.466493171Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 11 07:55:48.466536 containerd[1555]: time="2025-07-11T07:55:48.466510534Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 11 07:55:48.466536 containerd[1555]: time="2025-07-11T07:55:48.466525842Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 11 07:55:48.466578 containerd[1555]: time="2025-07-11T07:55:48.466540019Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 11 07:55:48.466637 containerd[1555]: time="2025-07-11T07:55:48.466576397Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 11 07:55:48.466637 containerd[1555]: time="2025-07-11T07:55:48.466590684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 11 07:55:48.466637 containerd[1555]: time="2025-07-11T07:55:48.466606233Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 11 07:55:48.466637 containerd[1555]: time="2025-07-11T07:55:48.466620860Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 11 07:55:48.466733 containerd[1555]: time="2025-07-11T07:55:48.466652229Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 11 07:55:48.466784 containerd[1555]: time="2025-07-11T07:55:48.466760873Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 11 07:55:48.466820 containerd[1555]: time="2025-07-11T07:55:48.466784898Z" level=info msg="Start snapshots syncer"
Jul 11 07:55:48.466853 containerd[1555]: time="2025-07-11T07:55:48.466833289Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 11 07:55:48.467328 containerd[1555]: time="2025-07-11T07:55:48.467277863Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 11 07:55:48.467455 containerd[1555]: time="2025-07-11T07:55:48.467364665Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 11 07:55:48.467483 containerd[1555]: time="2025-07-11T07:55:48.467459012Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 11 07:55:48.469160 containerd[1555]: time="2025-07-11T07:55:48.469132170Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 11 07:55:48.469205 containerd[1555]: time="2025-07-11T07:55:48.469166655Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 11 07:55:48.470288 containerd[1555]: time="2025-07-11T07:55:48.469181633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 11 07:55:48.470288 containerd[1555]: time="2025-07-11T07:55:48.470145501Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 11 07:55:48.470288 containerd[1555]: time="2025-07-11T07:55:48.470166530Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 11 07:55:48.470288 containerd[1555]: time="2025-07-11T07:55:48.470180346Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 11 07:55:48.470288 containerd[1555]: time="2025-07-11T07:55:48.470196236Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 11 07:55:48.470288 containerd[1555]: time="2025-07-11T07:55:48.470222265Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 11 07:55:48.470288 containerd[1555]: time="2025-07-11T07:55:48.470234939Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 11 07:55:48.470288 containerd[1555]: time="2025-07-11T07:55:48.470249827Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 11 07:55:48.470519 containerd[1555]: time="2025-07-11T07:55:48.470501128Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 11 07:55:48.471330 containerd[1555]: time="2025-07-11T07:55:48.470568825Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 11 07:55:48.471330 containerd[1555]: time="2025-07-11T07:55:48.470583222Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 11 07:55:48.471330 containerd[1555]: time="2025-07-11T07:55:48.470593812Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 11 07:55:48.471330 containerd[1555]: time="2025-07-11T07:55:48.470602819Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 11 07:55:48.471330 containerd[1555]: time="2025-07-11T07:55:48.470620121Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 11 07:55:48.471330 containerd[1555]: time="2025-07-11T07:55:48.470633286Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 11 07:55:48.471330 containerd[1555]: time="2025-07-11T07:55:48.470654646Z" level=info msg="runtime interface created"
Jul 11 07:55:48.471330 containerd[1555]: time="2025-07-11T07:55:48.470660507Z" level=info msg="created NRI interface"
Jul 11 07:55:48.471330 containerd[1555]: time="2025-07-11T07:55:48.470669063Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 11 07:55:48.471330 containerd[1555]: time="2025-07-11T07:55:48.470685774Z" level=info msg="Connect containerd service"
Jul 11 07:55:48.471330 containerd[1555]: time="2025-07-11T07:55:48.470720850Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 11 07:55:48.471980 containerd[1555]: time="2025-07-11T07:55:48.471954925Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 11 07:55:48.478374 systemd[1]: issuegen.service: Deactivated successfully.
Jul 11 07:55:48.478589 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 11 07:55:48.485557 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 11 07:55:48.540555 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 11 07:55:48.546773 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 11 07:55:48.557065 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 11 07:55:48.557453 systemd[1]: Reached target getty.target - Login Prompts.
Jul 11 07:55:48.672859 tar[1539]: linux-amd64/LICENSE
Jul 11 07:55:48.673297 tar[1539]: linux-amd64/README.md
Jul 11 07:55:48.682118 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 11 07:55:48.685130 systemd[1]: Started sshd@0-172.24.4.10:22-172.24.4.1:39438.service - OpenSSH per-connection server daemon (172.24.4.1:39438).
Jul 11 07:55:48.726136 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 11 07:55:48.731454 containerd[1555]: time="2025-07-11T07:55:48.731392059Z" level=info msg="Start subscribing containerd event" Jul 11 07:55:48.731668 containerd[1555]: time="2025-07-11T07:55:48.731559122Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 07:55:48.731713 containerd[1555]: time="2025-07-11T07:55:48.731690669Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 11 07:55:48.731744 containerd[1555]: time="2025-07-11T07:55:48.731636627Z" level=info msg="Start recovering state" Jul 11 07:55:48.731826 containerd[1555]: time="2025-07-11T07:55:48.731804031Z" level=info msg="Start event monitor" Jul 11 07:55:48.731826 containerd[1555]: time="2025-07-11T07:55:48.731824700Z" level=info msg="Start cni network conf syncer for default" Jul 11 07:55:48.731903 containerd[1555]: time="2025-07-11T07:55:48.731833316Z" level=info msg="Start streaming server" Jul 11 07:55:48.731903 containerd[1555]: time="2025-07-11T07:55:48.731843816Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 11 07:55:48.731903 containerd[1555]: time="2025-07-11T07:55:48.731852342Z" level=info msg="runtime interface starting up..." Jul 11 07:55:48.731903 containerd[1555]: time="2025-07-11T07:55:48.731858794Z" level=info msg="starting plugins..." Jul 11 07:55:48.731903 containerd[1555]: time="2025-07-11T07:55:48.731872289Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 11 07:55:48.732019 containerd[1555]: time="2025-07-11T07:55:48.731985532Z" level=info msg="containerd successfully booted in 0.313802s" Jul 11 07:55:48.732210 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 11 07:55:48.828185 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 11 07:55:49.250190 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 11 07:55:49.636567 sshd[1629]: Accepted publickey for core from 172.24.4.1 port 39438 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8 Jul 11 07:55:49.640793 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:55:49.678194 systemd-logind[1529]: New session 1 of user core. Jul 11 07:55:49.678431 systemd-networkd[1448]: eth0: Gained IPv6LL Jul 11 07:55:49.684914 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 11 07:55:49.688475 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 11 07:55:49.690638 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 11 07:55:49.694351 systemd[1]: Reached target network-online.target - Network is Online. Jul 11 07:55:49.700055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 07:55:49.712367 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 11 07:55:49.742621 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 11 07:55:49.749958 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 11 07:55:49.766720 (systemd)[1643]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 07:55:49.774800 systemd-logind[1529]: New session c1 of user core. Jul 11 07:55:49.778507 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 11 07:55:49.947621 systemd[1643]: Queued start job for default target default.target. Jul 11 07:55:49.956126 systemd[1643]: Created slice app.slice - User Application Slice. Jul 11 07:55:49.956153 systemd[1643]: Reached target paths.target - Paths. Jul 11 07:55:49.956195 systemd[1643]: Reached target timers.target - Timers. 
Jul 11 07:55:49.959188 systemd[1643]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 11 07:55:49.971300 systemd[1643]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 11 07:55:49.971477 systemd[1643]: Reached target sockets.target - Sockets. Jul 11 07:55:49.971622 systemd[1643]: Reached target basic.target - Basic System. Jul 11 07:55:49.971744 systemd[1643]: Reached target default.target - Main User Target. Jul 11 07:55:49.971854 systemd[1643]: Startup finished in 185ms. Jul 11 07:55:49.972227 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 11 07:55:49.984489 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 11 07:55:50.473907 systemd[1]: Started sshd@1-172.24.4.10:22-172.24.4.1:39450.service - OpenSSH per-connection server daemon (172.24.4.1:39450). Jul 11 07:55:50.850160 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 11 07:55:51.273614 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 11 07:55:51.700490 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 07:55:51.719937 (kubelet)[1671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 07:55:52.202249 sshd[1661]: Accepted publickey for core from 172.24.4.1 port 39450 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8 Jul 11 07:55:52.204760 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:55:52.224774 systemd-logind[1529]: New session 2 of user core. Jul 11 07:55:52.235687 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 11 07:55:52.838773 sshd[1676]: Connection closed by 172.24.4.1 port 39450 Jul 11 07:55:52.837053 sshd-session[1661]: pam_unix(sshd:session): session closed for user core Jul 11 07:55:52.856445 systemd[1]: sshd@1-172.24.4.10:22-172.24.4.1:39450.service: Deactivated successfully. 
Jul 11 07:55:52.860022 systemd[1]: session-2.scope: Deactivated successfully. Jul 11 07:55:52.865146 systemd-logind[1529]: Session 2 logged out. Waiting for processes to exit. Jul 11 07:55:52.867538 systemd[1]: Started sshd@2-172.24.4.10:22-172.24.4.1:39456.service - OpenSSH per-connection server daemon (172.24.4.1:39456). Jul 11 07:55:52.869758 systemd-logind[1529]: Removed session 2. Jul 11 07:55:53.002690 kubelet[1671]: E0711 07:55:53.002586 1671 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 07:55:53.007871 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 07:55:53.008407 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 07:55:53.009299 systemd[1]: kubelet.service: Consumed 2.101s CPU time, 266M memory peak. Jul 11 07:55:53.624526 login[1619]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 11 07:55:53.635904 systemd-logind[1529]: New session 3 of user core. Jul 11 07:55:53.637159 login[1620]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 11 07:55:53.642741 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 11 07:55:53.659554 systemd-logind[1529]: New session 4 of user core. Jul 11 07:55:53.668435 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 11 07:55:53.988284 sshd[1683]: Accepted publickey for core from 172.24.4.1 port 39456 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8 Jul 11 07:55:53.992712 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:55:54.007344 systemd-logind[1529]: New session 5 of user core. 
Jul 11 07:55:54.015499 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 11 07:55:54.585148 sshd[1714]: Connection closed by 172.24.4.1 port 39456 Jul 11 07:55:54.585029 sshd-session[1683]: pam_unix(sshd:session): session closed for user core Jul 11 07:55:54.593403 systemd-logind[1529]: Session 5 logged out. Waiting for processes to exit. Jul 11 07:55:54.594894 systemd[1]: sshd@2-172.24.4.10:22-172.24.4.1:39456.service: Deactivated successfully. Jul 11 07:55:54.599712 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 07:55:54.608313 systemd-logind[1529]: Removed session 5. Jul 11 07:55:54.869153 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 11 07:55:54.892019 coreos-metadata[1515]: Jul 11 07:55:54.891 WARN failed to locate config-drive, using the metadata service API instead Jul 11 07:55:54.944224 coreos-metadata[1515]: Jul 11 07:55:54.944 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jul 11 07:55:55.137224 coreos-metadata[1515]: Jul 11 07:55:55.137 INFO Fetch successful Jul 11 07:55:55.138010 coreos-metadata[1515]: Jul 11 07:55:55.137 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 11 07:55:55.154070 coreos-metadata[1515]: Jul 11 07:55:55.153 INFO Fetch successful Jul 11 07:55:55.154311 coreos-metadata[1515]: Jul 11 07:55:55.153 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jul 11 07:55:55.167050 coreos-metadata[1515]: Jul 11 07:55:55.166 INFO Fetch successful Jul 11 07:55:55.167234 coreos-metadata[1515]: Jul 11 07:55:55.167 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jul 11 07:55:55.180492 coreos-metadata[1515]: Jul 11 07:55:55.180 INFO Fetch successful Jul 11 07:55:55.180492 coreos-metadata[1515]: Jul 11 07:55:55.180 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jul 11 07:55:55.194562 coreos-metadata[1515]: Jul 11 07:55:55.194 INFO Fetch successful Jul 
11 07:55:55.194562 coreos-metadata[1515]: Jul 11 07:55:55.194 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jul 11 07:55:55.208183 coreos-metadata[1515]: Jul 11 07:55:55.208 INFO Fetch successful Jul 11 07:55:55.268929 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 11 07:55:55.271611 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 11 07:55:55.324173 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 11 07:55:55.351716 coreos-metadata[1594]: Jul 11 07:55:55.351 WARN failed to locate config-drive, using the metadata service API instead Jul 11 07:55:55.398771 coreos-metadata[1594]: Jul 11 07:55:55.398 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jul 11 07:55:55.414562 coreos-metadata[1594]: Jul 11 07:55:55.414 INFO Fetch successful Jul 11 07:55:55.414847 coreos-metadata[1594]: Jul 11 07:55:55.414 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 11 07:55:55.432407 coreos-metadata[1594]: Jul 11 07:55:55.432 INFO Fetch successful Jul 11 07:55:55.438740 unknown[1594]: wrote ssh authorized keys file for user: core Jul 11 07:55:55.490819 update-ssh-keys[1729]: Updated "/home/core/.ssh/authorized_keys" Jul 11 07:55:55.492394 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 11 07:55:55.499824 systemd[1]: Finished sshkeys.service. Jul 11 07:55:55.502270 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 11 07:55:55.502880 systemd[1]: Startup finished in 3.964s (kernel) + 21.216s (initrd) + 11.178s (userspace) = 36.359s. Jul 11 07:56:03.199609 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 11 07:56:03.204752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 11 07:56:03.664582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 07:56:03.676813 (kubelet)[1740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 07:56:03.818365 kubelet[1740]: E0711 07:56:03.818230 1740 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 07:56:03.824398 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 07:56:03.824732 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 07:56:03.825411 systemd[1]: kubelet.service: Consumed 460ms CPU time, 110M memory peak. Jul 11 07:56:04.626969 systemd[1]: Started sshd@3-172.24.4.10:22-172.24.4.1:56628.service - OpenSSH per-connection server daemon (172.24.4.1:56628). Jul 11 07:56:05.732514 sshd[1748]: Accepted publickey for core from 172.24.4.1 port 56628 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8 Jul 11 07:56:05.735924 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:56:05.751211 systemd-logind[1529]: New session 6 of user core. Jul 11 07:56:05.762405 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 11 07:56:06.457510 sshd[1751]: Connection closed by 172.24.4.1 port 56628 Jul 11 07:56:06.460241 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Jul 11 07:56:06.476812 systemd[1]: sshd@3-172.24.4.10:22-172.24.4.1:56628.service: Deactivated successfully. Jul 11 07:56:06.481054 systemd[1]: session-6.scope: Deactivated successfully. Jul 11 07:56:06.483927 systemd-logind[1529]: Session 6 logged out. Waiting for processes to exit. 
Jul 11 07:56:06.492553 systemd[1]: Started sshd@4-172.24.4.10:22-172.24.4.1:56638.service - OpenSSH per-connection server daemon (172.24.4.1:56638). Jul 11 07:56:06.495036 systemd-logind[1529]: Removed session 6. Jul 11 07:56:07.631942 sshd[1757]: Accepted publickey for core from 172.24.4.1 port 56638 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8 Jul 11 07:56:07.634968 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:56:07.647023 systemd-logind[1529]: New session 7 of user core. Jul 11 07:56:07.657459 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 11 07:56:08.357939 sshd[1760]: Connection closed by 172.24.4.1 port 56638 Jul 11 07:56:08.359212 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Jul 11 07:56:08.372156 systemd[1]: sshd@4-172.24.4.10:22-172.24.4.1:56638.service: Deactivated successfully. Jul 11 07:56:08.376739 systemd[1]: session-7.scope: Deactivated successfully. Jul 11 07:56:08.380783 systemd-logind[1529]: Session 7 logged out. Waiting for processes to exit. Jul 11 07:56:08.387018 systemd[1]: Started sshd@5-172.24.4.10:22-172.24.4.1:56646.service - OpenSSH per-connection server daemon (172.24.4.1:56646). Jul 11 07:56:08.389990 systemd-logind[1529]: Removed session 7. Jul 11 07:56:09.516755 sshd[1766]: Accepted publickey for core from 172.24.4.1 port 56646 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8 Jul 11 07:56:09.519708 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:56:09.533215 systemd-logind[1529]: New session 8 of user core. Jul 11 07:56:09.542390 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 11 07:56:10.242138 sshd[1769]: Connection closed by 172.24.4.1 port 56646 Jul 11 07:56:10.242436 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Jul 11 07:56:10.256994 systemd[1]: sshd@5-172.24.4.10:22-172.24.4.1:56646.service: Deactivated successfully. Jul 11 07:56:10.260893 systemd[1]: session-8.scope: Deactivated successfully. Jul 11 07:56:10.263182 systemd-logind[1529]: Session 8 logged out. Waiting for processes to exit. Jul 11 07:56:10.269417 systemd[1]: Started sshd@6-172.24.4.10:22-172.24.4.1:56650.service - OpenSSH per-connection server daemon (172.24.4.1:56650). Jul 11 07:56:10.272608 systemd-logind[1529]: Removed session 8. Jul 11 07:56:11.401864 sshd[1775]: Accepted publickey for core from 172.24.4.1 port 56650 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8 Jul 11 07:56:11.405180 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:56:11.418197 systemd-logind[1529]: New session 9 of user core. Jul 11 07:56:11.424394 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 11 07:56:11.880314 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 11 07:56:11.880974 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 07:56:11.904691 sudo[1779]: pam_unix(sudo:session): session closed for user root Jul 11 07:56:12.168286 sshd[1778]: Connection closed by 172.24.4.1 port 56650 Jul 11 07:56:12.167634 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Jul 11 07:56:12.184315 systemd[1]: sshd@6-172.24.4.10:22-172.24.4.1:56650.service: Deactivated successfully. Jul 11 07:56:12.188396 systemd[1]: session-9.scope: Deactivated successfully. Jul 11 07:56:12.190756 systemd-logind[1529]: Session 9 logged out. Waiting for processes to exit. 
Jul 11 07:56:12.198700 systemd[1]: Started sshd@7-172.24.4.10:22-172.24.4.1:56658.service - OpenSSH per-connection server daemon (172.24.4.1:56658). Jul 11 07:56:12.201375 systemd-logind[1529]: Removed session 9. Jul 11 07:56:13.279824 sshd[1785]: Accepted publickey for core from 172.24.4.1 port 56658 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8 Jul 11 07:56:13.282831 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:56:13.296192 systemd-logind[1529]: New session 10 of user core. Jul 11 07:56:13.303400 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 11 07:56:13.748275 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 11 07:56:13.749896 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 07:56:13.764588 sudo[1790]: pam_unix(sudo:session): session closed for user root Jul 11 07:56:13.778055 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 11 07:56:13.779608 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 07:56:13.807020 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 11 07:56:13.843998 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 11 07:56:13.851468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 07:56:13.911981 augenrules[1815]: No rules Jul 11 07:56:13.913673 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 07:56:13.914569 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jul 11 07:56:13.916526 sudo[1789]: pam_unix(sudo:session): session closed for user root Jul 11 07:56:14.141247 sshd[1788]: Connection closed by 172.24.4.1 port 56658 Jul 11 07:56:14.144233 sshd-session[1785]: pam_unix(sshd:session): session closed for user core Jul 11 07:56:14.157635 systemd[1]: sshd@7-172.24.4.10:22-172.24.4.1:56658.service: Deactivated successfully. Jul 11 07:56:14.162327 systemd[1]: session-10.scope: Deactivated successfully. Jul 11 07:56:14.167837 systemd-logind[1529]: Session 10 logged out. Waiting for processes to exit. Jul 11 07:56:14.173601 systemd[1]: Started sshd@8-172.24.4.10:22-172.24.4.1:58500.service - OpenSSH per-connection server daemon (172.24.4.1:58500). Jul 11 07:56:14.178219 systemd-logind[1529]: Removed session 10. Jul 11 07:56:14.252360 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 07:56:14.275788 (kubelet)[1831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 07:56:14.381005 kubelet[1831]: E0711 07:56:14.380886 1831 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 07:56:14.386452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 07:56:14.386838 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 07:56:14.387698 systemd[1]: kubelet.service: Consumed 362ms CPU time, 108.3M memory peak. 
Jul 11 07:56:15.293842 sshd[1824]: Accepted publickey for core from 172.24.4.1 port 58500 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8 Jul 11 07:56:15.296708 sshd-session[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:56:15.308036 systemd-logind[1529]: New session 11 of user core. Jul 11 07:56:15.321384 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 11 07:56:15.760825 sudo[1840]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 07:56:15.761591 sudo[1840]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 07:56:16.982008 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 11 07:56:17.011447 (dockerd)[1857]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 11 07:56:17.544220 dockerd[1857]: time="2025-07-11T07:56:17.544113901Z" level=info msg="Starting up" Jul 11 07:56:17.547067 dockerd[1857]: time="2025-07-11T07:56:17.547018018Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 11 07:56:17.572327 dockerd[1857]: time="2025-07-11T07:56:17.572261151Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 11 07:56:17.610378 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport357716347-merged.mount: Deactivated successfully. Jul 11 07:56:17.641978 systemd[1]: var-lib-docker-metacopy\x2dcheck119912598-merged.mount: Deactivated successfully. Jul 11 07:56:17.682697 dockerd[1857]: time="2025-07-11T07:56:17.682571178Z" level=info msg="Loading containers: start." 
Jul 11 07:56:17.717196 kernel: Initializing XFRM netlink socket Jul 11 07:56:18.090170 systemd-networkd[1448]: docker0: Link UP Jul 11 07:56:18.097985 dockerd[1857]: time="2025-07-11T07:56:18.097276427Z" level=info msg="Loading containers: done." Jul 11 07:56:18.120017 dockerd[1857]: time="2025-07-11T07:56:18.119965501Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 11 07:56:18.120358 dockerd[1857]: time="2025-07-11T07:56:18.120330806Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 11 07:56:18.120633 dockerd[1857]: time="2025-07-11T07:56:18.120609479Z" level=info msg="Initializing buildkit" Jul 11 07:56:18.180876 dockerd[1857]: time="2025-07-11T07:56:18.180785228Z" level=info msg="Completed buildkit initialization" Jul 11 07:56:18.193193 dockerd[1857]: time="2025-07-11T07:56:18.193148505Z" level=info msg="Daemon has completed initialization" Jul 11 07:56:18.193595 dockerd[1857]: time="2025-07-11T07:56:18.193461312Z" level=info msg="API listen on /run/docker.sock" Jul 11 07:56:18.195280 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 11 07:56:18.603264 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1814602467-merged.mount: Deactivated successfully. Jul 11 07:56:19.725181 containerd[1555]: time="2025-07-11T07:56:19.723906311Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 11 07:56:20.590885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3711542307.mount: Deactivated successfully. 
Jul 11 07:56:22.428384 containerd[1555]: time="2025-07-11T07:56:22.428310335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:56:22.430176 containerd[1555]: time="2025-07-11T07:56:22.430143318Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077752" Jul 11 07:56:22.430971 containerd[1555]: time="2025-07-11T07:56:22.430933861Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:56:22.435054 containerd[1555]: time="2025-07-11T07:56:22.434991471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:56:22.441694 containerd[1555]: time="2025-07-11T07:56:22.441245902Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 2.716831533s" Jul 11 07:56:22.441694 containerd[1555]: time="2025-07-11T07:56:22.441323720Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 11 07:56:22.442265 containerd[1555]: time="2025-07-11T07:56:22.442227500Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 11 07:56:24.447851 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jul 11 07:56:24.449812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 07:56:24.466815 containerd[1555]: time="2025-07-11T07:56:24.466176778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:56:24.468058 containerd[1555]: time="2025-07-11T07:56:24.467757628Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713302" Jul 11 07:56:24.468938 containerd[1555]: time="2025-07-11T07:56:24.468809260Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:56:24.472211 containerd[1555]: time="2025-07-11T07:56:24.472064256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:56:24.473962 containerd[1555]: time="2025-07-11T07:56:24.473749841Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 2.031487664s" Jul 11 07:56:24.474306 containerd[1555]: time="2025-07-11T07:56:24.474286421Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 11 07:56:24.475305 containerd[1555]: time="2025-07-11T07:56:24.475283783Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 11 07:56:24.649141 
systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 07:56:24.660351 (kubelet)[2135]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 07:56:24.994230 kubelet[2135]: E0711 07:56:24.994057 2135 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 07:56:24.999485 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 07:56:25.000167 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 07:56:25.001651 systemd[1]: kubelet.service: Consumed 303ms CPU time, 108.9M memory peak. Jul 11 07:56:26.650679 containerd[1555]: time="2025-07-11T07:56:26.650579713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:56:26.652585 containerd[1555]: time="2025-07-11T07:56:26.652555622Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783679" Jul 11 07:56:26.652931 containerd[1555]: time="2025-07-11T07:56:26.652905648Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:56:26.656434 containerd[1555]: time="2025-07-11T07:56:26.656372522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:56:26.657801 containerd[1555]: time="2025-07-11T07:56:26.657762455Z" level=info msg="Pulled image 
\"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 2.182333159s" Jul 11 07:56:26.657968 containerd[1555]: time="2025-07-11T07:56:26.657948692Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 11 07:56:26.660641 containerd[1555]: time="2025-07-11T07:56:26.660617680Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 11 07:56:28.066123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1810832778.mount: Deactivated successfully. Jul 11 07:56:28.700713 containerd[1555]: time="2025-07-11T07:56:28.700647309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:56:28.702379 containerd[1555]: time="2025-07-11T07:56:28.702093158Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383951" Jul 11 07:56:28.703650 containerd[1555]: time="2025-07-11T07:56:28.703612833Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:56:28.706333 containerd[1555]: time="2025-07-11T07:56:28.706298056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:56:28.706969 containerd[1555]: time="2025-07-11T07:56:28.706931837Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id 
\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 2.046184166s" Jul 11 07:56:28.707119 containerd[1555]: time="2025-07-11T07:56:28.707096771Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 11 07:56:28.707984 containerd[1555]: time="2025-07-11T07:56:28.707930747Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 11 07:56:29.326986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3645459567.mount: Deactivated successfully. Jul 11 07:56:30.664035 containerd[1555]: time="2025-07-11T07:56:30.663634927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:56:30.667299 containerd[1555]: time="2025-07-11T07:56:30.667148712Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jul 11 07:56:30.669367 containerd[1555]: time="2025-07-11T07:56:30.669008929Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:56:30.683145 containerd[1555]: time="2025-07-11T07:56:30.682434011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:56:30.700649 containerd[1555]: time="2025-07-11T07:56:30.700502112Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.992520917s" Jul 11 07:56:30.700979 containerd[1555]: time="2025-07-11T07:56:30.700929065Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 11 07:56:30.708851 containerd[1555]: time="2025-07-11T07:56:30.708767375Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 11 07:56:31.265479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1963923497.mount: Deactivated successfully. Jul 11 07:56:31.277847 containerd[1555]: time="2025-07-11T07:56:31.277643638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 07:56:31.280069 containerd[1555]: time="2025-07-11T07:56:31.279916633Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jul 11 07:56:31.282121 containerd[1555]: time="2025-07-11T07:56:31.281836440Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 07:56:31.289992 containerd[1555]: time="2025-07-11T07:56:31.289869637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 07:56:31.292125 containerd[1555]: time="2025-07-11T07:56:31.291864705Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 582.788533ms" Jul 11 07:56:31.292125 containerd[1555]: time="2025-07-11T07:56:31.291996667Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 11 07:56:31.294872 containerd[1555]: time="2025-07-11T07:56:31.294818651Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 11 07:56:31.957028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3870132772.mount: Deactivated successfully. Jul 11 07:56:33.159597 update_engine[1531]: I20250711 07:56:33.159308 1531 update_attempter.cc:509] Updating boot flags... Jul 11 07:56:34.950038 containerd[1555]: time="2025-07-11T07:56:34.949979353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:56:34.951580 containerd[1555]: time="2025-07-11T07:56:34.951534227Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021" Jul 11 07:56:34.953111 containerd[1555]: time="2025-07-11T07:56:34.953044121Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:56:34.957702 containerd[1555]: time="2025-07-11T07:56:34.957628530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:56:34.959333 containerd[1555]: time="2025-07-11T07:56:34.958809801Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.663527029s" Jul 11 07:56:34.959333 containerd[1555]: time="2025-07-11T07:56:34.958840546Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 11 07:56:35.198410 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 11 07:56:35.204313 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 07:56:35.777230 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 07:56:35.789909 (kubelet)[2300]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 07:56:35.854115 kubelet[2300]: E0711 07:56:35.854041 2300 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 07:56:35.856977 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 07:56:35.857266 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 07:56:35.858020 systemd[1]: kubelet.service: Consumed 574ms CPU time, 108.9M memory peak. Jul 11 07:56:39.490859 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 07:56:39.492290 systemd[1]: kubelet.service: Consumed 574ms CPU time, 108.9M memory peak. Jul 11 07:56:39.497570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 11 07:56:39.551960 systemd[1]: Reload requested from client PID 2325 ('systemctl') (unit session-11.scope)... Jul 11 07:56:39.552027 systemd[1]: Reloading... Jul 11 07:56:39.651124 zram_generator::config[2373]: No configuration found. Jul 11 07:56:40.054314 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 07:56:40.202590 systemd[1]: Reloading finished in 650 ms. Jul 11 07:56:40.593435 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 11 07:56:40.593665 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 11 07:56:40.594475 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 07:56:40.594590 systemd[1]: kubelet.service: Consumed 150ms CPU time, 98.3M memory peak. Jul 11 07:56:40.599620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 07:56:40.788797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 07:56:40.804517 (kubelet)[2436]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 07:56:40.860947 kubelet[2436]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 07:56:40.861974 kubelet[2436]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 11 07:56:40.861974 kubelet[2436]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 07:56:40.861974 kubelet[2436]: I0711 07:56:40.861477 2436 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 07:56:41.304135 kubelet[2436]: I0711 07:56:41.302642 2436 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 11 07:56:41.304135 kubelet[2436]: I0711 07:56:41.302729 2436 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 07:56:41.304135 kubelet[2436]: I0711 07:56:41.303398 2436 server.go:934] "Client rotation is on, will bootstrap in background" Jul 11 07:56:41.338294 kubelet[2436]: E0711 07:56:41.338234 2436 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.10:6443: connect: connection refused" logger="UnhandledError" Jul 11 07:56:41.341326 kubelet[2436]: I0711 07:56:41.341127 2436 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 07:56:41.353188 kubelet[2436]: I0711 07:56:41.353124 2436 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 11 07:56:41.364144 kubelet[2436]: I0711 07:56:41.363295 2436 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 07:56:41.364144 kubelet[2436]: I0711 07:56:41.363445 2436 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 11 07:56:41.364144 kubelet[2436]: I0711 07:56:41.363582 2436 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 07:56:41.364353 kubelet[2436]: I0711 07:56:41.363617 2436 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4392-0-0-n-91c7dbf1fc.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none
","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 07:56:41.364353 kubelet[2436]: I0711 07:56:41.363926 2436 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 07:56:41.364353 kubelet[2436]: I0711 07:56:41.363937 2436 container_manager_linux.go:300] "Creating device plugin manager" Jul 11 07:56:41.364353 kubelet[2436]: I0711 07:56:41.364104 2436 state_mem.go:36] "Initialized new in-memory state store" Jul 11 07:56:41.369130 kubelet[2436]: I0711 07:56:41.369114 2436 kubelet.go:408] "Attempting to sync node with API server" Jul 11 07:56:41.369223 kubelet[2436]: I0711 07:56:41.369211 2436 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 07:56:41.369332 kubelet[2436]: I0711 07:56:41.369321 2436 kubelet.go:314] "Adding apiserver pod source" Jul 11 07:56:41.369432 kubelet[2436]: I0711 07:56:41.369420 2436 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 07:56:41.373185 kubelet[2436]: W0711 07:56:41.372965 2436 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4392-0-0-n-91c7dbf1fc.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.10:6443: connect: connection refused Jul 11 07:56:41.373287 kubelet[2436]: E0711 07:56:41.373234 2436 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4392-0-0-n-91c7dbf1fc.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.10:6443: connect: connection refused" logger="UnhandledError" Jul 11 07:56:41.374598 kubelet[2436]: W0711 07:56:41.374512 2436 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
172.24.4.10:6443: connect: connection refused Jul 11 07:56:41.374702 kubelet[2436]: E0711 07:56:41.374653 2436 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.10:6443: connect: connection refused" logger="UnhandledError" Jul 11 07:56:41.376936 kubelet[2436]: I0711 07:56:41.375364 2436 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 11 07:56:41.376936 kubelet[2436]: I0711 07:56:41.376534 2436 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 07:56:41.376936 kubelet[2436]: W0711 07:56:41.376699 2436 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 11 07:56:41.381923 kubelet[2436]: I0711 07:56:41.381905 2436 server.go:1274] "Started kubelet" Jul 11 07:56:41.384680 kubelet[2436]: I0711 07:56:41.384659 2436 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 07:56:41.395144 kubelet[2436]: E0711 07:56:41.392213 2436 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.10:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4392-0-0-n-91c7dbf1fc.novalocal.1851235a717cc0e8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4392-0-0-n-91c7dbf1fc.novalocal,UID:ci-4392-0-0-n-91c7dbf1fc.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4392-0-0-n-91c7dbf1fc.novalocal,},FirstTimestamp:2025-07-11 07:56:41.381863656 +0000 UTC 
m=+0.566508492,LastTimestamp:2025-07-11 07:56:41.381863656 +0000 UTC m=+0.566508492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4392-0-0-n-91c7dbf1fc.novalocal,}" Jul 11 07:56:41.397474 kubelet[2436]: I0711 07:56:41.397205 2436 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 07:56:41.398338 kubelet[2436]: I0711 07:56:41.398253 2436 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 07:56:41.398996 kubelet[2436]: I0711 07:56:41.398977 2436 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 07:56:41.400972 kubelet[2436]: I0711 07:56:41.399153 2436 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 11 07:56:41.401455 kubelet[2436]: I0711 07:56:41.399183 2436 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 11 07:56:41.401940 kubelet[2436]: E0711 07:56:41.399357 2436 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4392-0-0-n-91c7dbf1fc.novalocal\" not found" Jul 11 07:56:41.402016 kubelet[2436]: I0711 07:56:41.399845 2436 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 07:56:41.402296 kubelet[2436]: I0711 07:56:41.401748 2436 server.go:449] "Adding debug handlers to kubelet server" Jul 11 07:56:41.405067 kubelet[2436]: E0711 07:56:41.404817 2436 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4392-0-0-n-91c7dbf1fc.novalocal?timeout=10s\": dial tcp 172.24.4.10:6443: connect: connection refused" interval="200ms" Jul 11 07:56:41.406221 kubelet[2436]: E0711 07:56:41.406197 2436 kubelet.go:1478] "Image garbage collection 
failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 07:56:41.406500 kubelet[2436]: W0711 07:56:41.406457 2436 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.10:6443: connect: connection refused Jul 11 07:56:41.406595 kubelet[2436]: E0711 07:56:41.406579 2436 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.10:6443: connect: connection refused" logger="UnhandledError" Jul 11 07:56:41.406917 kubelet[2436]: I0711 07:56:41.406898 2436 factory.go:221] Registration of the systemd container factory successfully Jul 11 07:56:41.407100 kubelet[2436]: I0711 07:56:41.407061 2436 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 07:56:41.407356 kubelet[2436]: I0711 07:56:41.401801 2436 reconciler.go:26] "Reconciler: start to sync state" Jul 11 07:56:41.411406 kubelet[2436]: I0711 07:56:41.411348 2436 factory.go:221] Registration of the containerd container factory successfully Jul 11 07:56:41.433291 kubelet[2436]: I0711 07:56:41.433254 2436 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 11 07:56:41.433740 kubelet[2436]: I0711 07:56:41.433475 2436 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 11 07:56:41.433740 kubelet[2436]: I0711 07:56:41.433504 2436 state_mem.go:36] "Initialized new in-memory state store" Jul 11 07:56:41.442309 kubelet[2436]: I0711 07:56:41.441382 2436 policy_none.go:49] "None policy: Start" Jul 11 07:56:41.443100 kubelet[2436]: I0711 07:56:41.443034 2436 
memory_manager.go:170] "Starting memorymanager" policy="None" Jul 11 07:56:41.443100 kubelet[2436]: I0711 07:56:41.443065 2436 state_mem.go:35] "Initializing new in-memory state store" Jul 11 07:56:41.446186 kubelet[2436]: I0711 07:56:41.446140 2436 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 07:56:41.448025 kubelet[2436]: I0711 07:56:41.448008 2436 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 11 07:56:41.448532 kubelet[2436]: I0711 07:56:41.448449 2436 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 11 07:56:41.448892 kubelet[2436]: I0711 07:56:41.448816 2436 kubelet.go:2321] "Starting kubelet main sync loop" Jul 11 07:56:41.449003 kubelet[2436]: E0711 07:56:41.448981 2436 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 07:56:41.452255 kubelet[2436]: W0711 07:56:41.452229 2436 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.10:6443: connect: connection refused Jul 11 07:56:41.452564 kubelet[2436]: E0711 07:56:41.452359 2436 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.10:6443: connect: connection refused" logger="UnhandledError" Jul 11 07:56:41.455419 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 11 07:56:41.463631 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jul 11 07:56:41.467454 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 11 07:56:41.478055 kubelet[2436]: I0711 07:56:41.478015 2436 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 07:56:41.478260 kubelet[2436]: I0711 07:56:41.478236 2436 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 07:56:41.478304 kubelet[2436]: I0711 07:56:41.478256 2436 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 07:56:41.478787 kubelet[2436]: I0711 07:56:41.478670 2436 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 07:56:41.481011 kubelet[2436]: E0711 07:56:41.480981 2436 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4392-0-0-n-91c7dbf1fc.novalocal\" not found" Jul 11 07:56:41.574161 systemd[1]: Created slice kubepods-burstable-pod39e8648f76be5b65cce8c3621dbca7d7.slice - libcontainer container kubepods-burstable-pod39e8648f76be5b65cce8c3621dbca7d7.slice. Jul 11 07:56:41.584499 kubelet[2436]: I0711 07:56:41.584156 2436 kubelet_node_status.go:72] "Attempting to register node" node="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:56:41.586710 kubelet[2436]: E0711 07:56:41.586583 2436 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.10:6443/api/v1/nodes\": dial tcp 172.24.4.10:6443: connect: connection refused" node="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:56:41.602217 systemd[1]: Created slice kubepods-burstable-pod8ffeeaaaf113dabfe4eb185a2146a5cb.slice - libcontainer container kubepods-burstable-pod8ffeeaaaf113dabfe4eb185a2146a5cb.slice. 
Jul 11 07:56:41.607428 kubelet[2436]: E0711 07:56:41.607331 2436 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4392-0-0-n-91c7dbf1fc.novalocal?timeout=10s\": dial tcp 172.24.4.10:6443: connect: connection refused" interval="400ms"
Jul 11 07:56:41.609396 kubelet[2436]: I0711 07:56:41.609344 2436 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9354c4f2775932dc0ca61fc4ccf25c71-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4392-0-0-n-91c7dbf1fc.novalocal\" (UID: \"9354c4f2775932dc0ca61fc4ccf25c71\") " pod="kube-system/kube-controller-manager-ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:41.610368 kubelet[2436]: I0711 07:56:41.610038 2436 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ffeeaaaf113dabfe4eb185a2146a5cb-kubeconfig\") pod \"kube-scheduler-ci-4392-0-0-n-91c7dbf1fc.novalocal\" (UID: \"8ffeeaaaf113dabfe4eb185a2146a5cb\") " pod="kube-system/kube-scheduler-ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:41.610368 kubelet[2436]: I0711 07:56:41.610191 2436 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/39e8648f76be5b65cce8c3621dbca7d7-ca-certs\") pod \"kube-apiserver-ci-4392-0-0-n-91c7dbf1fc.novalocal\" (UID: \"39e8648f76be5b65cce8c3621dbca7d7\") " pod="kube-system/kube-apiserver-ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:41.610368 kubelet[2436]: I0711 07:56:41.610247 2436 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/39e8648f76be5b65cce8c3621dbca7d7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4392-0-0-n-91c7dbf1fc.novalocal\" (UID: \"39e8648f76be5b65cce8c3621dbca7d7\") " pod="kube-system/kube-apiserver-ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:41.610368 kubelet[2436]: I0711 07:56:41.610304 2436 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9354c4f2775932dc0ca61fc4ccf25c71-ca-certs\") pod \"kube-controller-manager-ci-4392-0-0-n-91c7dbf1fc.novalocal\" (UID: \"9354c4f2775932dc0ca61fc4ccf25c71\") " pod="kube-system/kube-controller-manager-ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:41.610368 kubelet[2436]: I0711 07:56:41.610354 2436 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9354c4f2775932dc0ca61fc4ccf25c71-flexvolume-dir\") pod \"kube-controller-manager-ci-4392-0-0-n-91c7dbf1fc.novalocal\" (UID: \"9354c4f2775932dc0ca61fc4ccf25c71\") " pod="kube-system/kube-controller-manager-ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:41.610880 kubelet[2436]: I0711 07:56:41.610433 2436 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9354c4f2775932dc0ca61fc4ccf25c71-k8s-certs\") pod \"kube-controller-manager-ci-4392-0-0-n-91c7dbf1fc.novalocal\" (UID: \"9354c4f2775932dc0ca61fc4ccf25c71\") " pod="kube-system/kube-controller-manager-ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:41.610880 kubelet[2436]: I0711 07:56:41.610483 2436 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9354c4f2775932dc0ca61fc4ccf25c71-kubeconfig\") pod \"kube-controller-manager-ci-4392-0-0-n-91c7dbf1fc.novalocal\" (UID: \"9354c4f2775932dc0ca61fc4ccf25c71\") " pod="kube-system/kube-controller-manager-ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:41.610880 kubelet[2436]: I0711 07:56:41.610535 2436 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/39e8648f76be5b65cce8c3621dbca7d7-k8s-certs\") pod \"kube-apiserver-ci-4392-0-0-n-91c7dbf1fc.novalocal\" (UID: \"39e8648f76be5b65cce8c3621dbca7d7\") " pod="kube-system/kube-apiserver-ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:41.618919 systemd[1]: Created slice kubepods-burstable-pod9354c4f2775932dc0ca61fc4ccf25c71.slice - libcontainer container kubepods-burstable-pod9354c4f2775932dc0ca61fc4ccf25c71.slice.
Jul 11 07:56:41.791057 kubelet[2436]: I0711 07:56:41.790983 2436 kubelet_node_status.go:72] "Attempting to register node" node="ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:41.791923 kubelet[2436]: E0711 07:56:41.791831 2436 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.10:6443/api/v1/nodes\": dial tcp 172.24.4.10:6443: connect: connection refused" node="ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:41.900248 containerd[1555]: time="2025-07-11T07:56:41.899027564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4392-0-0-n-91c7dbf1fc.novalocal,Uid:39e8648f76be5b65cce8c3621dbca7d7,Namespace:kube-system,Attempt:0,}"
Jul 11 07:56:41.913261 containerd[1555]: time="2025-07-11T07:56:41.913129303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4392-0-0-n-91c7dbf1fc.novalocal,Uid:8ffeeaaaf113dabfe4eb185a2146a5cb,Namespace:kube-system,Attempt:0,}"
Jul 11 07:56:41.925627 containerd[1555]: time="2025-07-11T07:56:41.925535406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4392-0-0-n-91c7dbf1fc.novalocal,Uid:9354c4f2775932dc0ca61fc4ccf25c71,Namespace:kube-system,Attempt:0,}"
Jul 11 07:56:42.012524 kubelet[2436]: E0711 07:56:42.011071 2436 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4392-0-0-n-91c7dbf1fc.novalocal?timeout=10s\": dial tcp 172.24.4.10:6443: connect: connection refused" interval="800ms"
Jul 11 07:56:42.024600 containerd[1555]: time="2025-07-11T07:56:42.024525492Z" level=info msg="connecting to shim b09a52bbe85f28424b0bfe8de3cc659cadf3aab34e17c16dc24e203c376e9d96" address="unix:///run/containerd/s/a8afb29385c478bd35d5023c34ec81fb0cdeee37d54369098483e924962a405e" namespace=k8s.io protocol=ttrpc version=3
Jul 11 07:56:42.033491 containerd[1555]: time="2025-07-11T07:56:42.033436209Z" level=info msg="connecting to shim 42cea573809849024d38af11aea063e283d50767c172beff8fbec5fa7d2c0a2f" address="unix:///run/containerd/s/99bc84caf620befde4112a6440ce8648e138c3f26dbcc86640781be03fd6695e" namespace=k8s.io protocol=ttrpc version=3
Jul 11 07:56:42.036134 containerd[1555]: time="2025-07-11T07:56:42.035013008Z" level=info msg="connecting to shim 0e6644dd6c13cc52db72ff3dd3cfdb2c151173f2718982d2bbbb45ce3597f6cc" address="unix:///run/containerd/s/06d484bdffb36412ae5aa276a304635bf8871a1450d91953dccfd38ca6c323c3" namespace=k8s.io protocol=ttrpc version=3
Jul 11 07:56:42.092273 systemd[1]: Started cri-containerd-42cea573809849024d38af11aea063e283d50767c172beff8fbec5fa7d2c0a2f.scope - libcontainer container 42cea573809849024d38af11aea063e283d50767c172beff8fbec5fa7d2c0a2f.
Jul 11 07:56:42.095212 systemd[1]: Started cri-containerd-b09a52bbe85f28424b0bfe8de3cc659cadf3aab34e17c16dc24e203c376e9d96.scope - libcontainer container b09a52bbe85f28424b0bfe8de3cc659cadf3aab34e17c16dc24e203c376e9d96.
Jul 11 07:56:42.101454 systemd[1]: Started cri-containerd-0e6644dd6c13cc52db72ff3dd3cfdb2c151173f2718982d2bbbb45ce3597f6cc.scope - libcontainer container 0e6644dd6c13cc52db72ff3dd3cfdb2c151173f2718982d2bbbb45ce3597f6cc.
Jul 11 07:56:42.185625 containerd[1555]: time="2025-07-11T07:56:42.185501071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4392-0-0-n-91c7dbf1fc.novalocal,Uid:39e8648f76be5b65cce8c3621dbca7d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b09a52bbe85f28424b0bfe8de3cc659cadf3aab34e17c16dc24e203c376e9d96\""
Jul 11 07:56:42.191821 containerd[1555]: time="2025-07-11T07:56:42.191645481Z" level=info msg="CreateContainer within sandbox \"b09a52bbe85f28424b0bfe8de3cc659cadf3aab34e17c16dc24e203c376e9d96\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 11 07:56:42.194794 kubelet[2436]: I0711 07:56:42.194718 2436 kubelet_node_status.go:72] "Attempting to register node" node="ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:42.196282 kubelet[2436]: E0711 07:56:42.196246 2436 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.10:6443/api/v1/nodes\": dial tcp 172.24.4.10:6443: connect: connection refused" node="ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:42.198040 containerd[1555]: time="2025-07-11T07:56:42.197872873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4392-0-0-n-91c7dbf1fc.novalocal,Uid:9354c4f2775932dc0ca61fc4ccf25c71,Namespace:kube-system,Attempt:0,} returns sandbox id \"42cea573809849024d38af11aea063e283d50767c172beff8fbec5fa7d2c0a2f\""
Jul 11 07:56:42.204502 containerd[1555]: time="2025-07-11T07:56:42.204463903Z" level=info msg="CreateContainer within sandbox \"42cea573809849024d38af11aea063e283d50767c172beff8fbec5fa7d2c0a2f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 11 07:56:42.205982 containerd[1555]: time="2025-07-11T07:56:42.205954555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4392-0-0-n-91c7dbf1fc.novalocal,Uid:8ffeeaaaf113dabfe4eb185a2146a5cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e6644dd6c13cc52db72ff3dd3cfdb2c151173f2718982d2bbbb45ce3597f6cc\""
Jul 11 07:56:42.214804 containerd[1555]: time="2025-07-11T07:56:42.214756464Z" level=info msg="CreateContainer within sandbox \"0e6644dd6c13cc52db72ff3dd3cfdb2c151173f2718982d2bbbb45ce3597f6cc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 11 07:56:42.219417 containerd[1555]: time="2025-07-11T07:56:42.219383914Z" level=info msg="Container 4acac36853dd560c6a5e112cec351ac9a9f00f2d6df7aa1d5e86ab96124f08c9: CDI devices from CRI Config.CDIDevices: []"
Jul 11 07:56:42.230457 containerd[1555]: time="2025-07-11T07:56:42.230422779Z" level=info msg="Container 1d66ff7f0559db84bdb6d5ea7583e64b9a60d102440ad5d56032af2de74c3863: CDI devices from CRI Config.CDIDevices: []"
Jul 11 07:56:42.239487 containerd[1555]: time="2025-07-11T07:56:42.239457561Z" level=info msg="Container 70a61d9cdee0abc649764642073c770be8a1fa731a8287772440ace82afcf975: CDI devices from CRI Config.CDIDevices: []"
Jul 11 07:56:42.241307 containerd[1555]: time="2025-07-11T07:56:42.241278482Z" level=info msg="CreateContainer within sandbox \"b09a52bbe85f28424b0bfe8de3cc659cadf3aab34e17c16dc24e203c376e9d96\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4acac36853dd560c6a5e112cec351ac9a9f00f2d6df7aa1d5e86ab96124f08c9\""
Jul 11 07:56:42.245201 containerd[1555]: time="2025-07-11T07:56:42.245169237Z" level=info msg="CreateContainer within sandbox \"42cea573809849024d38af11aea063e283d50767c172beff8fbec5fa7d2c0a2f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1d66ff7f0559db84bdb6d5ea7583e64b9a60d102440ad5d56032af2de74c3863\""
Jul 11 07:56:42.245581 containerd[1555]: time="2025-07-11T07:56:42.245555706Z" level=info msg="StartContainer for \"4acac36853dd560c6a5e112cec351ac9a9f00f2d6df7aa1d5e86ab96124f08c9\""
Jul 11 07:56:42.248007 containerd[1555]: time="2025-07-11T07:56:42.247980483Z" level=info msg="connecting to shim 4acac36853dd560c6a5e112cec351ac9a9f00f2d6df7aa1d5e86ab96124f08c9" address="unix:///run/containerd/s/a8afb29385c478bd35d5023c34ec81fb0cdeee37d54369098483e924962a405e" protocol=ttrpc version=3
Jul 11 07:56:42.249172 containerd[1555]: time="2025-07-11T07:56:42.249150184Z" level=info msg="StartContainer for \"1d66ff7f0559db84bdb6d5ea7583e64b9a60d102440ad5d56032af2de74c3863\""
Jul 11 07:56:42.251103 containerd[1555]: time="2025-07-11T07:56:42.251048836Z" level=info msg="connecting to shim 1d66ff7f0559db84bdb6d5ea7583e64b9a60d102440ad5d56032af2de74c3863" address="unix:///run/containerd/s/99bc84caf620befde4112a6440ce8648e138c3f26dbcc86640781be03fd6695e" protocol=ttrpc version=3
Jul 11 07:56:42.254232 containerd[1555]: time="2025-07-11T07:56:42.254147985Z" level=info msg="CreateContainer within sandbox \"0e6644dd6c13cc52db72ff3dd3cfdb2c151173f2718982d2bbbb45ce3597f6cc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"70a61d9cdee0abc649764642073c770be8a1fa731a8287772440ace82afcf975\""
Jul 11 07:56:42.257105 containerd[1555]: time="2025-07-11T07:56:42.256109212Z" level=info msg="StartContainer for \"70a61d9cdee0abc649764642073c770be8a1fa731a8287772440ace82afcf975\""
Jul 11 07:56:42.257766 containerd[1555]: time="2025-07-11T07:56:42.257742122Z" level=info msg="connecting to shim 70a61d9cdee0abc649764642073c770be8a1fa731a8287772440ace82afcf975" address="unix:///run/containerd/s/06d484bdffb36412ae5aa276a304635bf8871a1450d91953dccfd38ca6c323c3" protocol=ttrpc version=3
Jul 11 07:56:42.258963 kubelet[2436]: W0711 07:56:42.258909 2436 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.10:6443: connect: connection refused
Jul 11 07:56:42.259158 kubelet[2436]: E0711 07:56:42.259136 2436 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.10:6443: connect: connection refused" logger="UnhandledError"
Jul 11 07:56:42.278295 systemd[1]: Started cri-containerd-4acac36853dd560c6a5e112cec351ac9a9f00f2d6df7aa1d5e86ab96124f08c9.scope - libcontainer container 4acac36853dd560c6a5e112cec351ac9a9f00f2d6df7aa1d5e86ab96124f08c9.
Jul 11 07:56:42.289260 systemd[1]: Started cri-containerd-1d66ff7f0559db84bdb6d5ea7583e64b9a60d102440ad5d56032af2de74c3863.scope - libcontainer container 1d66ff7f0559db84bdb6d5ea7583e64b9a60d102440ad5d56032af2de74c3863.
Jul 11 07:56:42.296457 systemd[1]: Started cri-containerd-70a61d9cdee0abc649764642073c770be8a1fa731a8287772440ace82afcf975.scope - libcontainer container 70a61d9cdee0abc649764642073c770be8a1fa731a8287772440ace82afcf975.
Jul 11 07:56:42.376343 containerd[1555]: time="2025-07-11T07:56:42.376284994Z" level=info msg="StartContainer for \"4acac36853dd560c6a5e112cec351ac9a9f00f2d6df7aa1d5e86ab96124f08c9\" returns successfully"
Jul 11 07:56:42.391367 containerd[1555]: time="2025-07-11T07:56:42.391328190Z" level=info msg="StartContainer for \"1d66ff7f0559db84bdb6d5ea7583e64b9a60d102440ad5d56032af2de74c3863\" returns successfully"
Jul 11 07:56:42.428881 containerd[1555]: time="2025-07-11T07:56:42.428843730Z" level=info msg="StartContainer for \"70a61d9cdee0abc649764642073c770be8a1fa731a8287772440ace82afcf975\" returns successfully"
Jul 11 07:56:43.000021 kubelet[2436]: I0711 07:56:42.999987 2436 kubelet_node_status.go:72] "Attempting to register node" node="ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:44.854191 kubelet[2436]: E0711 07:56:44.853845 2436 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4392-0-0-n-91c7dbf1fc.novalocal\" not found" node="ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:44.937249 kubelet[2436]: I0711 07:56:44.937204 2436 kubelet_node_status.go:75] "Successfully registered node" node="ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:44.937249 kubelet[2436]: E0711 07:56:44.937253 2436 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4392-0-0-n-91c7dbf1fc.novalocal\": node \"ci-4392-0-0-n-91c7dbf1fc.novalocal\" not found"
Jul 11 07:56:45.378894 kubelet[2436]: I0711 07:56:45.378793 2436 apiserver.go:52] "Watching apiserver"
Jul 11 07:56:45.403340 kubelet[2436]: I0711 07:56:45.403269 2436 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 11 07:56:46.132173 kubelet[2436]: W0711 07:56:46.131225 2436 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 11 07:56:48.206854 systemd[1]: Reload requested from client PID 2706 ('systemctl') (unit session-11.scope)...
Jul 11 07:56:48.206932 systemd[1]: Reloading...
Jul 11 07:56:48.340118 zram_generator::config[2757]: No configuration found.
Jul 11 07:56:48.508663 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 07:56:48.684421 systemd[1]: Reloading finished in 476 ms.
Jul 11 07:56:48.737165 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 07:56:48.749719 systemd[1]: kubelet.service: Deactivated successfully.
Jul 11 07:56:48.750195 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 07:56:48.750298 systemd[1]: kubelet.service: Consumed 1.282s CPU time, 129.6M memory peak.
Jul 11 07:56:48.755117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 07:56:49.126168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 07:56:49.140491 (kubelet)[2815]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 11 07:56:49.264097 kubelet[2815]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 11 07:56:49.264097 kubelet[2815]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 11 07:56:49.264097 kubelet[2815]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 11 07:56:49.264097 kubelet[2815]: I0711 07:56:49.263781 2815 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 11 07:56:49.277391 kubelet[2815]: I0711 07:56:49.277347 2815 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 11 07:56:49.277570 kubelet[2815]: I0711 07:56:49.277557 2815 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 11 07:56:49.278234 kubelet[2815]: I0711 07:56:49.277990 2815 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 11 07:56:49.281538 kubelet[2815]: I0711 07:56:49.281517 2815 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 11 07:56:49.284891 kubelet[2815]: I0711 07:56:49.284839 2815 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 11 07:56:49.304142 kubelet[2815]: I0711 07:56:49.304107 2815 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 11 07:56:49.311747 kubelet[2815]: I0711 07:56:49.310758 2815 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 11 07:56:49.311747 kubelet[2815]: I0711 07:56:49.310980 2815 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 11 07:56:49.311747 kubelet[2815]: I0711 07:56:49.311183 2815 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 11 07:56:49.312093 kubelet[2815]: I0711 07:56:49.311237 2815 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4392-0-0-n-91c7dbf1fc.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 11 07:56:49.312093 kubelet[2815]: I0711 07:56:49.311778 2815 topology_manager.go:138] "Creating topology manager with none policy"
Jul 11 07:56:49.312093 kubelet[2815]: I0711 07:56:49.311800 2815 container_manager_linux.go:300] "Creating device plugin manager"
Jul 11 07:56:49.312093 kubelet[2815]: I0711 07:56:49.311884 2815 state_mem.go:36] "Initialized new in-memory state store"
Jul 11 07:56:49.312635 kubelet[2815]: I0711 07:56:49.312098 2815 kubelet.go:408] "Attempting to sync node with API server"
Jul 11 07:56:49.312635 kubelet[2815]: I0711 07:56:49.312128 2815 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 11 07:56:49.312635 kubelet[2815]: I0711 07:56:49.312214 2815 kubelet.go:314] "Adding apiserver pod source"
Jul 11 07:56:49.312635 kubelet[2815]: I0711 07:56:49.312260 2815 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 11 07:56:49.321205 kubelet[2815]: I0711 07:56:49.319788 2815 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Jul 11 07:56:49.321205 kubelet[2815]: I0711 07:56:49.320340 2815 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 11 07:56:49.321205 kubelet[2815]: I0711 07:56:49.320869 2815 server.go:1274] "Started kubelet"
Jul 11 07:56:49.337732 kubelet[2815]: I0711 07:56:49.337684 2815 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 11 07:56:49.356094 kubelet[2815]: I0711 07:56:49.354149 2815 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 11 07:56:49.365013 kubelet[2815]: I0711 07:56:49.364973 2815 server.go:449] "Adding debug handlers to kubelet server"
Jul 11 07:56:49.376249 kubelet[2815]: I0711 07:56:49.375956 2815 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 11 07:56:49.376249 kubelet[2815]: I0711 07:56:49.376111 2815 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 11 07:56:49.378777 kubelet[2815]: I0711 07:56:49.378673 2815 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 11 07:56:49.383119 kubelet[2815]: I0711 07:56:49.383052 2815 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 11 07:56:49.386104 kubelet[2815]: I0711 07:56:49.384152 2815 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 11 07:56:49.386104 kubelet[2815]: I0711 07:56:49.385293 2815 reconciler.go:26] "Reconciler: start to sync state"
Jul 11 07:56:49.388796 kubelet[2815]: I0711 07:56:49.387712 2815 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 11 07:56:49.391508 kubelet[2815]: I0711 07:56:49.391467 2815 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 11 07:56:49.391584 kubelet[2815]: I0711 07:56:49.391530 2815 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 11 07:56:49.391584 kubelet[2815]: I0711 07:56:49.391563 2815 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 11 07:56:49.391673 kubelet[2815]: E0711 07:56:49.391611 2815 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 11 07:56:49.407028 kubelet[2815]: I0711 07:56:49.406988 2815 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 11 07:56:49.410717 kubelet[2815]: E0711 07:56:49.410021 2815 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 11 07:56:49.414362 kubelet[2815]: I0711 07:56:49.414324 2815 factory.go:221] Registration of the containerd container factory successfully
Jul 11 07:56:49.414362 kubelet[2815]: I0711 07:56:49.414356 2815 factory.go:221] Registration of the systemd container factory successfully
Jul 11 07:56:49.481562 kubelet[2815]: I0711 07:56:49.481318 2815 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 11 07:56:49.481562 kubelet[2815]: I0711 07:56:49.481368 2815 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 11 07:56:49.481562 kubelet[2815]: I0711 07:56:49.481400 2815 state_mem.go:36] "Initialized new in-memory state store"
Jul 11 07:56:49.481908 kubelet[2815]: I0711 07:56:49.481859 2815 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 11 07:56:49.481990 kubelet[2815]: I0711 07:56:49.481900 2815 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 11 07:56:49.481990 kubelet[2815]: I0711 07:56:49.481929 2815 policy_none.go:49] "None policy: Start"
Jul 11 07:56:49.483228 kubelet[2815]: I0711 07:56:49.483200 2815 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 11 07:56:49.483228 kubelet[2815]: I0711 07:56:49.483227 2815 state_mem.go:35] "Initializing new in-memory state store"
Jul 11 07:56:49.483434 kubelet[2815]: I0711 07:56:49.483375 2815 state_mem.go:75] "Updated machine memory state"
Jul 11 07:56:49.491699 kubelet[2815]: E0711 07:56:49.491653 2815 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 11 07:56:49.493430 kubelet[2815]: I0711 07:56:49.493409 2815 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 11 07:56:49.495171 kubelet[2815]: I0711 07:56:49.495033 2815 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 11 07:56:49.496303 kubelet[2815]: I0711 07:56:49.495067 2815 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 11 07:56:49.496418 kubelet[2815]: I0711 07:56:49.496347 2815 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 11 07:56:49.617288 kubelet[2815]: I0711 07:56:49.616498 2815 kubelet_node_status.go:72] "Attempting to register node" node="ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:49.638176 kubelet[2815]: I0711 07:56:49.637898 2815 kubelet_node_status.go:111] "Node was previously registered" node="ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:49.638490 kubelet[2815]: I0711 07:56:49.638368 2815 kubelet_node_status.go:75] "Successfully registered node" node="ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:49.706943 kubelet[2815]: W0711 07:56:49.706729 2815 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 11 07:56:49.712259 kubelet[2815]: W0711 07:56:49.711164 2815 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 11 07:56:49.715807 kubelet[2815]: W0711 07:56:49.715632 2815 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 11 07:56:49.716460 kubelet[2815]: E0711 07:56:49.715740 2815 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4392-0-0-n-91c7dbf1fc.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:49.788343 kubelet[2815]: I0711 07:56:49.787730 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9354c4f2775932dc0ca61fc4ccf25c71-kubeconfig\") pod \"kube-controller-manager-ci-4392-0-0-n-91c7dbf1fc.novalocal\" (UID: \"9354c4f2775932dc0ca61fc4ccf25c71\") " pod="kube-system/kube-controller-manager-ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:49.788343 kubelet[2815]: I0711 07:56:49.787828 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9354c4f2775932dc0ca61fc4ccf25c71-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4392-0-0-n-91c7dbf1fc.novalocal\" (UID: \"9354c4f2775932dc0ca61fc4ccf25c71\") " pod="kube-system/kube-controller-manager-ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:49.788343 kubelet[2815]: I0711 07:56:49.787969 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ffeeaaaf113dabfe4eb185a2146a5cb-kubeconfig\") pod \"kube-scheduler-ci-4392-0-0-n-91c7dbf1fc.novalocal\" (UID: \"8ffeeaaaf113dabfe4eb185a2146a5cb\") " pod="kube-system/kube-scheduler-ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:49.788343 kubelet[2815]: I0711 07:56:49.788025 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/39e8648f76be5b65cce8c3621dbca7d7-ca-certs\") pod \"kube-apiserver-ci-4392-0-0-n-91c7dbf1fc.novalocal\" (UID: \"39e8648f76be5b65cce8c3621dbca7d7\") " pod="kube-system/kube-apiserver-ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:49.788343 kubelet[2815]: I0711 07:56:49.788071 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/39e8648f76be5b65cce8c3621dbca7d7-k8s-certs\") pod \"kube-apiserver-ci-4392-0-0-n-91c7dbf1fc.novalocal\" (UID: \"39e8648f76be5b65cce8c3621dbca7d7\") " pod="kube-system/kube-apiserver-ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:49.788343 kubelet[2815]: I0711 07:56:49.788170 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/39e8648f76be5b65cce8c3621dbca7d7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4392-0-0-n-91c7dbf1fc.novalocal\" (UID: \"39e8648f76be5b65cce8c3621dbca7d7\") " pod="kube-system/kube-apiserver-ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:49.788343 kubelet[2815]: I0711 07:56:49.788215 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9354c4f2775932dc0ca61fc4ccf25c71-ca-certs\") pod \"kube-controller-manager-ci-4392-0-0-n-91c7dbf1fc.novalocal\" (UID: \"9354c4f2775932dc0ca61fc4ccf25c71\") " pod="kube-system/kube-controller-manager-ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:49.788343 kubelet[2815]: I0711 07:56:49.788258 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9354c4f2775932dc0ca61fc4ccf25c71-flexvolume-dir\") pod \"kube-controller-manager-ci-4392-0-0-n-91c7dbf1fc.novalocal\" (UID: \"9354c4f2775932dc0ca61fc4ccf25c71\") " pod="kube-system/kube-controller-manager-ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:49.789316 kubelet[2815]: I0711 07:56:49.788741 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9354c4f2775932dc0ca61fc4ccf25c71-k8s-certs\") pod \"kube-controller-manager-ci-4392-0-0-n-91c7dbf1fc.novalocal\" (UID: \"9354c4f2775932dc0ca61fc4ccf25c71\") " pod="kube-system/kube-controller-manager-ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:50.316112 kubelet[2815]: I0711 07:56:50.315975 2815 apiserver.go:52] "Watching apiserver"
Jul 11 07:56:50.385263 kubelet[2815]: I0711 07:56:50.385175 2815 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 11 07:56:50.470353 kubelet[2815]: W0711 07:56:50.469619 2815 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 11 07:56:50.470353 kubelet[2815]: E0711 07:56:50.469735 2815 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4392-0-0-n-91c7dbf1fc.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:56:50.511751 kubelet[2815]: I0711 07:56:50.511662 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4392-0-0-n-91c7dbf1fc.novalocal" podStartSLOduration=1.5116265960000002 podStartE2EDuration="1.511626596s" podCreationTimestamp="2025-07-11 07:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 07:56:50.506227972 +0000 UTC m=+1.346638817" watchObservedRunningTime="2025-07-11 07:56:50.511626596 +0000 UTC m=+1.352037441"
Jul 11 07:56:50.525867 kubelet[2815]: I0711 07:56:50.524868 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4392-0-0-n-91c7dbf1fc.novalocal" podStartSLOduration=1.524849132 podStartE2EDuration="1.524849132s" podCreationTimestamp="2025-07-11 07:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 07:56:50.524185613 +0000 UTC m=+1.364596458" watchObservedRunningTime="2025-07-11 07:56:50.524849132 +0000 UTC m=+1.365259978"
Jul 11 07:56:50.558358 kubelet[2815]: I0711 07:56:50.557442 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-91c7dbf1fc.novalocal" podStartSLOduration=5.557423083 podStartE2EDuration="5.557423083s" podCreationTimestamp="2025-07-11 07:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 07:56:50.541947525 +0000 UTC m=+1.382358360" watchObservedRunningTime="2025-07-11 07:56:50.557423083 +0000 UTC m=+1.397833928"
Jul 11 07:56:53.102470 kubelet[2815]: I0711 07:56:53.101873 2815 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 11 07:56:53.111240 kubelet[2815]: I0711 07:56:53.106312 2815 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 11 07:56:53.111470 containerd[1555]: time="2025-07-11T07:56:53.105159872Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 11 07:56:53.686066 systemd[1]: Created slice kubepods-besteffort-podeb870962_7743_4617_9f65_372f95595dd4.slice - libcontainer container kubepods-besteffort-podeb870962_7743_4617_9f65_372f95595dd4.slice.
Jul 11 07:56:53.727224 kubelet[2815]: I0711 07:56:53.726943 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eb870962-7743-4617-9f65-372f95595dd4-kube-proxy\") pod \"kube-proxy-chnq5\" (UID: \"eb870962-7743-4617-9f65-372f95595dd4\") " pod="kube-system/kube-proxy-chnq5" Jul 11 07:56:53.727224 kubelet[2815]: I0711 07:56:53.726994 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb870962-7743-4617-9f65-372f95595dd4-xtables-lock\") pod \"kube-proxy-chnq5\" (UID: \"eb870962-7743-4617-9f65-372f95595dd4\") " pod="kube-system/kube-proxy-chnq5" Jul 11 07:56:53.727224 kubelet[2815]: I0711 07:56:53.727031 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb870962-7743-4617-9f65-372f95595dd4-lib-modules\") pod \"kube-proxy-chnq5\" (UID: \"eb870962-7743-4617-9f65-372f95595dd4\") " pod="kube-system/kube-proxy-chnq5" Jul 11 07:56:53.727224 kubelet[2815]: I0711 07:56:53.727060 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzsht\" (UniqueName: \"kubernetes.io/projected/eb870962-7743-4617-9f65-372f95595dd4-kube-api-access-bzsht\") pod \"kube-proxy-chnq5\" (UID: \"eb870962-7743-4617-9f65-372f95595dd4\") " pod="kube-system/kube-proxy-chnq5" Jul 11 07:56:53.845482 kubelet[2815]: E0711 07:56:53.845167 2815 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 11 07:56:53.845482 kubelet[2815]: E0711 07:56:53.845328 2815 projected.go:194] Error preparing data for projected volume kube-api-access-bzsht for pod kube-system/kube-proxy-chnq5: configmap "kube-root-ca.crt" not found Jul 11 07:56:53.847225 kubelet[2815]: E0711 07:56:53.846178 2815 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eb870962-7743-4617-9f65-372f95595dd4-kube-api-access-bzsht podName:eb870962-7743-4617-9f65-372f95595dd4 nodeName:}" failed. No retries permitted until 2025-07-11 07:56:54.345913456 +0000 UTC m=+5.186324341 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bzsht" (UniqueName: "kubernetes.io/projected/eb870962-7743-4617-9f65-372f95595dd4-kube-api-access-bzsht") pod "kube-proxy-chnq5" (UID: "eb870962-7743-4617-9f65-372f95595dd4") : configmap "kube-root-ca.crt" not found Jul 11 07:56:54.197406 kubelet[2815]: W0711 07:56:54.197315 2815 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4392-0-0-n-91c7dbf1fc.novalocal" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4392-0-0-n-91c7dbf1fc.novalocal' and this object Jul 11 07:56:54.197690 systemd[1]: Created slice kubepods-besteffort-podf7857c99_39c5_4a40_9436_bf752e736697.slice - libcontainer container kubepods-besteffort-podf7857c99_39c5_4a40_9436_bf752e736697.slice. 
Jul 11 07:56:54.200108 kubelet[2815]: E0711 07:56:54.198284 2815 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ci-4392-0-0-n-91c7dbf1fc.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4392-0-0-n-91c7dbf1fc.novalocal' and this object" logger="UnhandledError" Jul 11 07:56:54.200108 kubelet[2815]: W0711 07:56:54.199097 2815 reflector.go:561] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4392-0-0-n-91c7dbf1fc.novalocal" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4392-0-0-n-91c7dbf1fc.novalocal' and this object Jul 11 07:56:54.200108 kubelet[2815]: E0711 07:56:54.199150 2815 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4392-0-0-n-91c7dbf1fc.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4392-0-0-n-91c7dbf1fc.novalocal' and this object" logger="UnhandledError" Jul 11 07:56:54.331341 kubelet[2815]: I0711 07:56:54.331146 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f2kz\" (UniqueName: \"kubernetes.io/projected/f7857c99-39c5-4a40-9436-bf752e736697-kube-api-access-6f2kz\") pod \"tigera-operator-5bf8dfcb4-7lb7f\" (UID: \"f7857c99-39c5-4a40-9436-bf752e736697\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-7lb7f" Jul 11 07:56:54.331753 kubelet[2815]: I0711 07:56:54.331411 2815 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f7857c99-39c5-4a40-9436-bf752e736697-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-7lb7f\" (UID: \"f7857c99-39c5-4a40-9436-bf752e736697\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-7lb7f" Jul 11 07:56:54.599822 containerd[1555]: time="2025-07-11T07:56:54.599400792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-chnq5,Uid:eb870962-7743-4617-9f65-372f95595dd4,Namespace:kube-system,Attempt:0,}" Jul 11 07:56:54.719845 containerd[1555]: time="2025-07-11T07:56:54.719644863Z" level=info msg="connecting to shim 269501583231b2259fbe5ef1208f120182fd0867ca36400f1448343dc82bc5d8" address="unix:///run/containerd/s/a89d0f42e74771b78401c3153b9623d533c3530d45df89c2e756ced6256fcf28" namespace=k8s.io protocol=ttrpc version=3 Jul 11 07:56:54.799316 systemd[1]: Started cri-containerd-269501583231b2259fbe5ef1208f120182fd0867ca36400f1448343dc82bc5d8.scope - libcontainer container 269501583231b2259fbe5ef1208f120182fd0867ca36400f1448343dc82bc5d8. 
Jul 11 07:56:54.866858 containerd[1555]: time="2025-07-11T07:56:54.866626790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-chnq5,Uid:eb870962-7743-4617-9f65-372f95595dd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"269501583231b2259fbe5ef1208f120182fd0867ca36400f1448343dc82bc5d8\"" Jul 11 07:56:54.874378 containerd[1555]: time="2025-07-11T07:56:54.874296013Z" level=info msg="CreateContainer within sandbox \"269501583231b2259fbe5ef1208f120182fd0867ca36400f1448343dc82bc5d8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 11 07:56:54.898531 containerd[1555]: time="2025-07-11T07:56:54.897669739Z" level=info msg="Container 40f8d44f3348a4671b880ddc17b37f585ee5ed8adffb06bee1ccb23021cd16ec: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:56:54.901781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1159259664.mount: Deactivated successfully. Jul 11 07:56:54.916170 containerd[1555]: time="2025-07-11T07:56:54.916118009Z" level=info msg="CreateContainer within sandbox \"269501583231b2259fbe5ef1208f120182fd0867ca36400f1448343dc82bc5d8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"40f8d44f3348a4671b880ddc17b37f585ee5ed8adffb06bee1ccb23021cd16ec\"" Jul 11 07:56:54.918141 containerd[1555]: time="2025-07-11T07:56:54.916986252Z" level=info msg="StartContainer for \"40f8d44f3348a4671b880ddc17b37f585ee5ed8adffb06bee1ccb23021cd16ec\"" Jul 11 07:56:54.920659 containerd[1555]: time="2025-07-11T07:56:54.920606417Z" level=info msg="connecting to shim 40f8d44f3348a4671b880ddc17b37f585ee5ed8adffb06bee1ccb23021cd16ec" address="unix:///run/containerd/s/a89d0f42e74771b78401c3153b9623d533c3530d45df89c2e756ced6256fcf28" protocol=ttrpc version=3 Jul 11 07:56:54.947480 systemd[1]: Started cri-containerd-40f8d44f3348a4671b880ddc17b37f585ee5ed8adffb06bee1ccb23021cd16ec.scope - libcontainer container 40f8d44f3348a4671b880ddc17b37f585ee5ed8adffb06bee1ccb23021cd16ec. 
Jul 11 07:56:55.018841 containerd[1555]: time="2025-07-11T07:56:55.018521423Z" level=info msg="StartContainer for \"40f8d44f3348a4671b880ddc17b37f585ee5ed8adffb06bee1ccb23021cd16ec\" returns successfully" Jul 11 07:56:55.408741 containerd[1555]: time="2025-07-11T07:56:55.408597114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-7lb7f,Uid:f7857c99-39c5-4a40-9436-bf752e736697,Namespace:tigera-operator,Attempt:0,}" Jul 11 07:56:55.465123 containerd[1555]: time="2025-07-11T07:56:55.464593496Z" level=info msg="connecting to shim 45f03ca461a2bd1cb33fb34d4916f84c8c5cc277986ae540397cf3eb2a561b74" address="unix:///run/containerd/s/408d3036fede483de8410603041ac9c29502da35c846f2de4550cb8445973c5d" namespace=k8s.io protocol=ttrpc version=3 Jul 11 07:56:55.542226 kubelet[2815]: I0711 07:56:55.542051 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-chnq5" podStartSLOduration=2.5419982819999998 podStartE2EDuration="2.541998282s" podCreationTimestamp="2025-07-11 07:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 07:56:55.539739907 +0000 UTC m=+6.380150742" watchObservedRunningTime="2025-07-11 07:56:55.541998282 +0000 UTC m=+6.382409127" Jul 11 07:56:55.554701 systemd[1]: Started cri-containerd-45f03ca461a2bd1cb33fb34d4916f84c8c5cc277986ae540397cf3eb2a561b74.scope - libcontainer container 45f03ca461a2bd1cb33fb34d4916f84c8c5cc277986ae540397cf3eb2a561b74. 
Jul 11 07:56:55.626800 containerd[1555]: time="2025-07-11T07:56:55.626732317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-7lb7f,Uid:f7857c99-39c5-4a40-9436-bf752e736697,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"45f03ca461a2bd1cb33fb34d4916f84c8c5cc277986ae540397cf3eb2a561b74\"" Jul 11 07:56:55.632675 containerd[1555]: time="2025-07-11T07:56:55.631264785Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 11 07:56:57.382440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3206784526.mount: Deactivated successfully. Jul 11 07:56:58.649199 containerd[1555]: time="2025-07-11T07:56:58.648055125Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:56:58.651777 containerd[1555]: time="2025-07-11T07:56:58.651744052Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 11 07:56:58.653371 containerd[1555]: time="2025-07-11T07:56:58.653309743Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:56:58.657109 containerd[1555]: time="2025-07-11T07:56:58.657032532Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:56:58.657954 containerd[1555]: time="2025-07-11T07:56:58.657908867Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 3.025216062s" Jul 11 07:56:58.658120 
containerd[1555]: time="2025-07-11T07:56:58.658091395Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 11 07:56:58.662955 containerd[1555]: time="2025-07-11T07:56:58.662914133Z" level=info msg="CreateContainer within sandbox \"45f03ca461a2bd1cb33fb34d4916f84c8c5cc277986ae540397cf3eb2a561b74\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 11 07:56:58.683306 containerd[1555]: time="2025-07-11T07:56:58.683254780Z" level=info msg="Container fb9ad55b9ab56c453d6d8b2098531cfe325c8e1ab67742ce20aa615c5b4f5288: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:56:58.693811 containerd[1555]: time="2025-07-11T07:56:58.693769978Z" level=info msg="CreateContainer within sandbox \"45f03ca461a2bd1cb33fb34d4916f84c8c5cc277986ae540397cf3eb2a561b74\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fb9ad55b9ab56c453d6d8b2098531cfe325c8e1ab67742ce20aa615c5b4f5288\"" Jul 11 07:56:58.695712 containerd[1555]: time="2025-07-11T07:56:58.695042457Z" level=info msg="StartContainer for \"fb9ad55b9ab56c453d6d8b2098531cfe325c8e1ab67742ce20aa615c5b4f5288\"" Jul 11 07:56:58.697853 containerd[1555]: time="2025-07-11T07:56:58.697812430Z" level=info msg="connecting to shim fb9ad55b9ab56c453d6d8b2098531cfe325c8e1ab67742ce20aa615c5b4f5288" address="unix:///run/containerd/s/408d3036fede483de8410603041ac9c29502da35c846f2de4550cb8445973c5d" protocol=ttrpc version=3 Jul 11 07:56:58.736303 systemd[1]: Started cri-containerd-fb9ad55b9ab56c453d6d8b2098531cfe325c8e1ab67742ce20aa615c5b4f5288.scope - libcontainer container fb9ad55b9ab56c453d6d8b2098531cfe325c8e1ab67742ce20aa615c5b4f5288. 
Jul 11 07:56:58.788667 containerd[1555]: time="2025-07-11T07:56:58.788574047Z" level=info msg="StartContainer for \"fb9ad55b9ab56c453d6d8b2098531cfe325c8e1ab67742ce20aa615c5b4f5288\" returns successfully" Jul 11 07:56:59.558818 kubelet[2815]: I0711 07:56:59.558439 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-7lb7f" podStartSLOduration=2.528390217 podStartE2EDuration="5.558196903s" podCreationTimestamp="2025-07-11 07:56:54 +0000 UTC" firstStartedPulling="2025-07-11 07:56:55.629843197 +0000 UTC m=+6.470254033" lastFinishedPulling="2025-07-11 07:56:58.659649873 +0000 UTC m=+9.500060719" observedRunningTime="2025-07-11 07:56:59.553036883 +0000 UTC m=+10.393447779" watchObservedRunningTime="2025-07-11 07:56:59.558196903 +0000 UTC m=+10.398607808" Jul 11 07:57:04.330136 sudo[1840]: pam_unix(sudo:session): session closed for user root Jul 11 07:57:04.507360 sshd[1839]: Connection closed by 172.24.4.1 port 58500 Jul 11 07:57:04.511441 sshd-session[1824]: pam_unix(sshd:session): session closed for user core Jul 11 07:57:04.530800 systemd[1]: sshd@8-172.24.4.10:22-172.24.4.1:58500.service: Deactivated successfully. Jul 11 07:57:04.546806 systemd[1]: session-11.scope: Deactivated successfully. Jul 11 07:57:04.550058 systemd[1]: session-11.scope: Consumed 8.587s CPU time, 226.3M memory peak. Jul 11 07:57:04.556397 systemd-logind[1529]: Session 11 logged out. Waiting for processes to exit. Jul 11 07:57:04.560541 systemd-logind[1529]: Removed session 11. Jul 11 07:57:09.610119 systemd[1]: Created slice kubepods-besteffort-pod9c4c9edf_f65e_4b85_89fa_342a5741aebb.slice - libcontainer container kubepods-besteffort-pod9c4c9edf_f65e_4b85_89fa_342a5741aebb.slice. 
Jul 11 07:57:09.682595 kubelet[2815]: I0711 07:57:09.682502 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c4c9edf-f65e-4b85-89fa-342a5741aebb-tigera-ca-bundle\") pod \"calico-typha-855588b8c9-6kzcm\" (UID: \"9c4c9edf-f65e-4b85-89fa-342a5741aebb\") " pod="calico-system/calico-typha-855588b8c9-6kzcm" Jul 11 07:57:09.684296 kubelet[2815]: I0711 07:57:09.683614 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9c4c9edf-f65e-4b85-89fa-342a5741aebb-typha-certs\") pod \"calico-typha-855588b8c9-6kzcm\" (UID: \"9c4c9edf-f65e-4b85-89fa-342a5741aebb\") " pod="calico-system/calico-typha-855588b8c9-6kzcm" Jul 11 07:57:09.684296 kubelet[2815]: I0711 07:57:09.683661 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zpxw\" (UniqueName: \"kubernetes.io/projected/9c4c9edf-f65e-4b85-89fa-342a5741aebb-kube-api-access-6zpxw\") pod \"calico-typha-855588b8c9-6kzcm\" (UID: \"9c4c9edf-f65e-4b85-89fa-342a5741aebb\") " pod="calico-system/calico-typha-855588b8c9-6kzcm" Jul 11 07:57:09.868521 systemd[1]: Created slice kubepods-besteffort-pod5b8550a0_5405_449c_a798_edb16d68ec37.slice - libcontainer container kubepods-besteffort-pod5b8550a0_5405_449c_a798_edb16d68ec37.slice. 
Jul 11 07:57:09.923801 containerd[1555]: time="2025-07-11T07:57:09.923474631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-855588b8c9-6kzcm,Uid:9c4c9edf-f65e-4b85-89fa-342a5741aebb,Namespace:calico-system,Attempt:0,}" Jul 11 07:57:09.985578 kubelet[2815]: I0711 07:57:09.985509 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5b8550a0-5405-449c-a798-edb16d68ec37-cni-bin-dir\") pod \"calico-node-952qm\" (UID: \"5b8550a0-5405-449c-a798-edb16d68ec37\") " pod="calico-system/calico-node-952qm" Jul 11 07:57:09.985741 kubelet[2815]: I0711 07:57:09.985575 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b8550a0-5405-449c-a798-edb16d68ec37-xtables-lock\") pod \"calico-node-952qm\" (UID: \"5b8550a0-5405-449c-a798-edb16d68ec37\") " pod="calico-system/calico-node-952qm" Jul 11 07:57:09.985741 kubelet[2815]: I0711 07:57:09.985686 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5b8550a0-5405-449c-a798-edb16d68ec37-flexvol-driver-host\") pod \"calico-node-952qm\" (UID: \"5b8550a0-5405-449c-a798-edb16d68ec37\") " pod="calico-system/calico-node-952qm" Jul 11 07:57:09.985741 kubelet[2815]: I0711 07:57:09.985716 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5b8550a0-5405-449c-a798-edb16d68ec37-node-certs\") pod \"calico-node-952qm\" (UID: \"5b8550a0-5405-449c-a798-edb16d68ec37\") " pod="calico-system/calico-node-952qm" Jul 11 07:57:09.985958 kubelet[2815]: I0711 07:57:09.985748 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx8tz\" (UniqueName: 
\"kubernetes.io/projected/5b8550a0-5405-449c-a798-edb16d68ec37-kube-api-access-wx8tz\") pod \"calico-node-952qm\" (UID: \"5b8550a0-5405-449c-a798-edb16d68ec37\") " pod="calico-system/calico-node-952qm" Jul 11 07:57:09.985958 kubelet[2815]: I0711 07:57:09.985773 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5b8550a0-5405-449c-a798-edb16d68ec37-cni-log-dir\") pod \"calico-node-952qm\" (UID: \"5b8550a0-5405-449c-a798-edb16d68ec37\") " pod="calico-system/calico-node-952qm" Jul 11 07:57:09.985958 kubelet[2815]: I0711 07:57:09.985802 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5b8550a0-5405-449c-a798-edb16d68ec37-var-run-calico\") pod \"calico-node-952qm\" (UID: \"5b8550a0-5405-449c-a798-edb16d68ec37\") " pod="calico-system/calico-node-952qm" Jul 11 07:57:09.985958 kubelet[2815]: I0711 07:57:09.985839 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5b8550a0-5405-449c-a798-edb16d68ec37-cni-net-dir\") pod \"calico-node-952qm\" (UID: \"5b8550a0-5405-449c-a798-edb16d68ec37\") " pod="calico-system/calico-node-952qm" Jul 11 07:57:09.985958 kubelet[2815]: I0711 07:57:09.985873 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5b8550a0-5405-449c-a798-edb16d68ec37-var-lib-calico\") pod \"calico-node-952qm\" (UID: \"5b8550a0-5405-449c-a798-edb16d68ec37\") " pod="calico-system/calico-node-952qm" Jul 11 07:57:09.985958 kubelet[2815]: I0711 07:57:09.985904 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/5b8550a0-5405-449c-a798-edb16d68ec37-lib-modules\") pod \"calico-node-952qm\" (UID: \"5b8550a0-5405-449c-a798-edb16d68ec37\") " pod="calico-system/calico-node-952qm" Jul 11 07:57:09.987258 kubelet[2815]: I0711 07:57:09.985977 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5b8550a0-5405-449c-a798-edb16d68ec37-policysync\") pod \"calico-node-952qm\" (UID: \"5b8550a0-5405-449c-a798-edb16d68ec37\") " pod="calico-system/calico-node-952qm" Jul 11 07:57:09.987258 kubelet[2815]: I0711 07:57:09.986002 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b8550a0-5405-449c-a798-edb16d68ec37-tigera-ca-bundle\") pod \"calico-node-952qm\" (UID: \"5b8550a0-5405-449c-a798-edb16d68ec37\") " pod="calico-system/calico-node-952qm" Jul 11 07:57:10.004879 containerd[1555]: time="2025-07-11T07:57:10.004784783Z" level=info msg="connecting to shim c10545c24a91629fcd7ff98797daa3adfcf0462b1853f2bb13b9ccc0cc562e9e" address="unix:///run/containerd/s/4d66bb53d0e83aeb7d540123d87a41eabb0a14636f8a00df1b6030f37cbe150e" namespace=k8s.io protocol=ttrpc version=3 Jul 11 07:57:10.089602 systemd[1]: Started cri-containerd-c10545c24a91629fcd7ff98797daa3adfcf0462b1853f2bb13b9ccc0cc562e9e.scope - libcontainer container c10545c24a91629fcd7ff98797daa3adfcf0462b1853f2bb13b9ccc0cc562e9e. 
Jul 11 07:57:10.109954 kubelet[2815]: E0711 07:57:10.109854 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:10.110121 kubelet[2815]: W0711 07:57:10.109943 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:10.114967 kubelet[2815]: E0711 07:57:10.110305 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:57:10.120690 kubelet[2815]: E0711 07:57:10.119196 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:10.120690 kubelet[2815]: W0711 07:57:10.119228 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:10.120690 kubelet[2815]: E0711 07:57:10.119253 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:57:10.138410 kubelet[2815]: E0711 07:57:10.138364 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:10.138665 kubelet[2815]: W0711 07:57:10.138397 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:10.138750 kubelet[2815]: E0711 07:57:10.138675 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 07:57:10.180539 containerd[1555]: time="2025-07-11T07:57:10.180358053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-952qm,Uid:5b8550a0-5405-449c-a798-edb16d68ec37,Namespace:calico-system,Attempt:0,}" Jul 11 07:57:10.209644 kubelet[2815]: E0711 07:57:10.209237 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v6t7t" podUID="13bef544-0afd-4c69-9e16-c9d26b0ce001" Jul 11 07:57:10.248090 containerd[1555]: time="2025-07-11T07:57:10.247916617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-855588b8c9-6kzcm,Uid:9c4c9edf-f65e-4b85-89fa-342a5741aebb,Namespace:calico-system,Attempt:0,} returns sandbox id \"c10545c24a91629fcd7ff98797daa3adfcf0462b1853f2bb13b9ccc0cc562e9e\"" Jul 11 07:57:10.256659 containerd[1555]: time="2025-07-11T07:57:10.256585056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 11 07:57:10.264751 containerd[1555]: time="2025-07-11T07:57:10.264303222Z" level=info msg="connecting to shim 11b8cdabc108e09818ec6a8e0d36d39cb25834b1cca058f1c63a3de2fb8b981a" address="unix:///run/containerd/s/abadb2d6f685f51d9a4b1f761a64691ab1d91ea54fc49dfd4581b6c1b5b9eff3" namespace=k8s.io protocol=ttrpc version=3 Jul 11 07:57:10.294337 kubelet[2815]: E0711 07:57:10.294242 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:10.294958 kubelet[2815]: W0711 07:57:10.294720 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:10.294958 kubelet[2815]: E0711 07:57:10.294880 2815 plugins.go:691] "Error 
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:57:10.296827 kubelet[2815]: E0711 07:57:10.296337 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:10.296827 kubelet[2815]: W0711 07:57:10.296697 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:10.296827 kubelet[2815]: E0711 07:57:10.296735 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:57:10.298606 kubelet[2815]: E0711 07:57:10.298342 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:10.298606 kubelet[2815]: W0711 07:57:10.298360 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:10.298606 kubelet[2815]: E0711 07:57:10.298374 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 07:57:10.299274 kubelet[2815]: E0711 07:57:10.298941 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:10.299274 kubelet[2815]: W0711 07:57:10.298996 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:10.299274 kubelet[2815]: E0711 07:57:10.299008 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:57:10.299860 kubelet[2815]: E0711 07:57:10.299845 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:10.300237 kubelet[2815]: W0711 07:57:10.300221 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:10.300362 kubelet[2815]: E0711 07:57:10.300346 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 07:57:10.300875 kubelet[2815]: E0711 07:57:10.300861 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:10.301015 kubelet[2815]: W0711 07:57:10.301001 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:10.301884 kubelet[2815]: E0711 07:57:10.301860 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:57:10.303225 kubelet[2815]: E0711 07:57:10.303209 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:10.305423 kubelet[2815]: W0711 07:57:10.305157 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:10.305423 kubelet[2815]: E0711 07:57:10.305187 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 07:57:10.307623 kubelet[2815]: E0711 07:57:10.307237 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:10.307623 kubelet[2815]: W0711 07:57:10.307264 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:10.307623 kubelet[2815]: E0711 07:57:10.307286 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
[the identical FlexVolume error triple (driver-call.go:262, driver-call.go:149, plugins.go:691) repeats from 07:57:10.308 to 07:57:10.310; duplicates omitted] 
Jul 11 07:57:10.310657 systemd[1]: Started cri-containerd-11b8cdabc108e09818ec6a8e0d36d39cb25834b1cca058f1c63a3de2fb8b981a.scope - libcontainer container 11b8cdabc108e09818ec6a8e0d36d39cb25834b1cca058f1c63a3de2fb8b981a. 
[the identical FlexVolume error triple repeats from 07:57:10.311 to 07:57:10.320; duplicates omitted] 
Jul 11 07:57:10.320440 kubelet[2815]: E0711 07:57:10.320352 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 07:57:10.373772 containerd[1555]: time="2025-07-11T07:57:10.373605600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-952qm,Uid:5b8550a0-5405-449c-a798-edb16d68ec37,Namespace:calico-system,Attempt:0,} returns sandbox id \"11b8cdabc108e09818ec6a8e0d36d39cb25834b1cca058f1c63a3de2fb8b981a\"" 
[the identical FlexVolume error triple repeats at 07:57:10.389; duplicate omitted] 
Jul 11 07:57:10.390622 kubelet[2815]: I0711 07:57:10.390509 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/13bef544-0afd-4c69-9e16-c9d26b0ce001-registration-dir\") pod \"csi-node-driver-v6t7t\" (UID: \"13bef544-0afd-4c69-9e16-c9d26b0ce001\") " pod="calico-system/csi-node-driver-v6t7t" 
[the identical FlexVolume error triple repeats at 07:57:10.392; duplicate omitted] 
Jul 11 07:57:10.392584 kubelet[2815]: I0711 07:57:10.392171 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/13bef544-0afd-4c69-9e16-c9d26b0ce001-varrun\") pod \"csi-node-driver-v6t7t\" (UID: \"13bef544-0afd-4c69-9e16-c9d26b0ce001\") " pod="calico-system/csi-node-driver-v6t7t" 
[the identical FlexVolume error triple repeats from 07:57:10.392 to 07:57:10.393; duplicates omitted] 
Jul 11 07:57:10.393261 kubelet[2815]: I0711 07:57:10.393251 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8npbc\" (UniqueName: \"kubernetes.io/projected/13bef544-0afd-4c69-9e16-c9d26b0ce001-kube-api-access-8npbc\") pod \"csi-node-driver-v6t7t\" (UID: \"13bef544-0afd-4c69-9e16-c9d26b0ce001\") " pod="calico-system/csi-node-driver-v6t7t" 
[the identical FlexVolume error triple repeats from 07:57:10.393 to 07:57:10.395; duplicates omitted] 
Jul 11 07:57:10.395387 kubelet[2815]: I0711 07:57:10.395302 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/13bef544-0afd-4c69-9e16-c9d26b0ce001-socket-dir\") pod \"csi-node-driver-v6t7t\" (UID: \"13bef544-0afd-4c69-9e16-c9d26b0ce001\") " pod="calico-system/csi-node-driver-v6t7t" 
[the identical FlexVolume error triple repeats at 07:57:10.395; duplicates omitted] 
Jul 11 07:57:10.396032 kubelet[2815]: I0711 07:57:10.395892 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13bef544-0afd-4c69-9e16-c9d26b0ce001-kubelet-dir\") pod \"csi-node-driver-v6t7t\" (UID: \"13bef544-0afd-4c69-9e16-c9d26b0ce001\") " pod="calico-system/csi-node-driver-v6t7t" 
[the identical FlexVolume error triple repeats at 07:57:10.396; duplicates omitted] 
Jul 11 07:57:10.396465 kubelet[2815]: E0711 07:57:10.396328 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" 
[the identical FlexVolume error triple repeats continuously from 07:57:10.396 to 07:57:10.510; duplicates omitted] 
Jul 11 07:57:10.510896 kubelet[2815]: E0711 07:57:10.510632 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 07:57:10.510986 kubelet[2815]: E0711 07:57:10.510971 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:57:10.526742 kubelet[2815]: E0711 07:57:10.526588 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:10.526742 kubelet[2815]: W0711 07:57:10.526623 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:10.526742 kubelet[2815]: E0711 07:57:10.526665 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:57:12.297463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3959771107.mount: Deactivated successfully. 
Jul 11 07:57:12.392689 kubelet[2815]: E0711 07:57:12.392608 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v6t7t" podUID="13bef544-0afd-4c69-9e16-c9d26b0ce001"
Jul 11 07:57:14.334620 containerd[1555]: time="2025-07-11T07:57:14.334010697Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 07:57:14.336585 containerd[1555]: time="2025-07-11T07:57:14.336223339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Jul 11 07:57:14.340297 containerd[1555]: time="2025-07-11T07:57:14.340179137Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 07:57:14.348127 containerd[1555]: time="2025-07-11T07:57:14.347723687Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 07:57:14.349717 containerd[1555]: time="2025-07-11T07:57:14.349584082Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 4.092917484s"
Jul 11 07:57:14.349717 containerd[1555]: time="2025-07-11T07:57:14.349684198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Jul 11 07:57:14.354509 containerd[1555]: time="2025-07-11T07:57:14.354361674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 11 07:57:14.390105 containerd[1555]: time="2025-07-11T07:57:14.389253187Z" level=info msg="CreateContainer within sandbox \"c10545c24a91629fcd7ff98797daa3adfcf0462b1853f2bb13b9ccc0cc562e9e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 11 07:57:14.393333 kubelet[2815]: E0711 07:57:14.393275 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v6t7t" podUID="13bef544-0afd-4c69-9e16-c9d26b0ce001"
Jul 11 07:57:14.411872 containerd[1555]: time="2025-07-11T07:57:14.410308459Z" level=info msg="Container 88d850dd28a874a68b1dff3b2acb0b6a0921fbb819a888f07084acfe250dd88b: CDI devices from CRI Config.CDIDevices: []"
Jul 11 07:57:14.427863 containerd[1555]: time="2025-07-11T07:57:14.427776593Z" level=info msg="CreateContainer within sandbox \"c10545c24a91629fcd7ff98797daa3adfcf0462b1853f2bb13b9ccc0cc562e9e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"88d850dd28a874a68b1dff3b2acb0b6a0921fbb819a888f07084acfe250dd88b\""
Jul 11 07:57:14.430429 containerd[1555]: time="2025-07-11T07:57:14.430376298Z" level=info msg="StartContainer for \"88d850dd28a874a68b1dff3b2acb0b6a0921fbb819a888f07084acfe250dd88b\""
Jul 11 07:57:14.433459 containerd[1555]: time="2025-07-11T07:57:14.433412328Z" level=info msg="connecting to shim 88d850dd28a874a68b1dff3b2acb0b6a0921fbb819a888f07084acfe250dd88b" address="unix:///run/containerd/s/4d66bb53d0e83aeb7d540123d87a41eabb0a14636f8a00df1b6030f37cbe150e" protocol=ttrpc version=3
Jul 11 07:57:14.469363 systemd[1]: Started cri-containerd-88d850dd28a874a68b1dff3b2acb0b6a0921fbb819a888f07084acfe250dd88b.scope - libcontainer container 88d850dd28a874a68b1dff3b2acb0b6a0921fbb819a888f07084acfe250dd88b.
Jul 11 07:57:14.565773 containerd[1555]: time="2025-07-11T07:57:14.565682044Z" level=info msg="StartContainer for \"88d850dd28a874a68b1dff3b2acb0b6a0921fbb819a888f07084acfe250dd88b\" returns successfully"
Jul 11 07:57:14.657560 kubelet[2815]: E0711 07:57:14.657271 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.657560 kubelet[2815]: W0711 07:57:14.657308 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.657560 kubelet[2815]: E0711 07:57:14.657336 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.659150 kubelet[2815]: E0711 07:57:14.658887 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.659150 kubelet[2815]: W0711 07:57:14.659023 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.659150 kubelet[2815]: E0711 07:57:14.659052 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.659998 kubelet[2815]: E0711 07:57:14.659794 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.659998 kubelet[2815]: W0711 07:57:14.659912 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.659998 kubelet[2815]: E0711 07:57:14.659935 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.660750 kubelet[2815]: E0711 07:57:14.660646 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.660750 kubelet[2815]: W0711 07:57:14.660662 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.660750 kubelet[2815]: E0711 07:57:14.660673 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.661420 kubelet[2815]: E0711 07:57:14.661321 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.661420 kubelet[2815]: W0711 07:57:14.661335 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.661420 kubelet[2815]: E0711 07:57:14.661347 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.661962 kubelet[2815]: E0711 07:57:14.661898 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.661962 kubelet[2815]: W0711 07:57:14.661912 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.661962 kubelet[2815]: E0711 07:57:14.661923 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.662356 kubelet[2815]: E0711 07:57:14.662250 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.662356 kubelet[2815]: W0711 07:57:14.662264 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.662356 kubelet[2815]: E0711 07:57:14.662275 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.662771 kubelet[2815]: E0711 07:57:14.662654 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.662771 kubelet[2815]: W0711 07:57:14.662693 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.662771 kubelet[2815]: E0711 07:57:14.662717 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.663239 kubelet[2815]: E0711 07:57:14.663173 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.663239 kubelet[2815]: W0711 07:57:14.663187 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.663239 kubelet[2815]: E0711 07:57:14.663198 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.663655 kubelet[2815]: E0711 07:57:14.663591 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.663655 kubelet[2815]: W0711 07:57:14.663605 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.663655 kubelet[2815]: E0711 07:57:14.663615 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.663983 kubelet[2815]: E0711 07:57:14.663892 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.663983 kubelet[2815]: W0711 07:57:14.663905 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.663983 kubelet[2815]: E0711 07:57:14.663916 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.664443 kubelet[2815]: E0711 07:57:14.664372 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.664443 kubelet[2815]: W0711 07:57:14.664386 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.664443 kubelet[2815]: E0711 07:57:14.664397 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.664842 kubelet[2815]: E0711 07:57:14.664772 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.664842 kubelet[2815]: W0711 07:57:14.664785 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.664842 kubelet[2815]: E0711 07:57:14.664796 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.665154 kubelet[2815]: E0711 07:57:14.665064 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.665154 kubelet[2815]: W0711 07:57:14.665098 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.665154 kubelet[2815]: E0711 07:57:14.665109 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.666117 kubelet[2815]: E0711 07:57:14.666047 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.666117 kubelet[2815]: W0711 07:57:14.666062 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.666670 kubelet[2815]: E0711 07:57:14.666653 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.739358 kubelet[2815]: E0711 07:57:14.739297 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.739358 kubelet[2815]: W0711 07:57:14.739328 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.739776 kubelet[2815]: E0711 07:57:14.739373 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jul 11 07:57:14.739776 kubelet[2815]: E0711 07:57:14.739724 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.739776 kubelet[2815]: W0711 07:57:14.739736 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.739776 kubelet[2815]: E0711 07:57:14.739747 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.739952 kubelet[2815]: E0711 07:57:14.739919 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.739952 kubelet[2815]: W0711 07:57:14.739929 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.739952 kubelet[2815]: E0711 07:57:14.739940 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.740347 kubelet[2815]: E0711 07:57:14.740297 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.740347 kubelet[2815]: W0711 07:57:14.740313 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.740650 kubelet[2815]: E0711 07:57:14.740615 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.741117 kubelet[2815]: E0711 07:57:14.740947 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.741117 kubelet[2815]: W0711 07:57:14.740983 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.741117 kubelet[2815]: E0711 07:57:14.741022 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.741381 kubelet[2815]: E0711 07:57:14.741368 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.741537 kubelet[2815]: W0711 07:57:14.741470 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.741537 kubelet[2815]: E0711 07:57:14.741511 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.742136 kubelet[2815]: E0711 07:57:14.742109 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.742136 kubelet[2815]: W0711 07:57:14.742128 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.742365 kubelet[2815]: E0711 07:57:14.742149 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.742519 kubelet[2815]: E0711 07:57:14.742496 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.743165 kubelet[2815]: W0711 07:57:14.743125 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.743300 kubelet[2815]: E0711 07:57:14.743238 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.743385 kubelet[2815]: E0711 07:57:14.743358 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.743385 kubelet[2815]: W0711 07:57:14.743377 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.744194 kubelet[2815]: E0711 07:57:14.743415 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.744194 kubelet[2815]: E0711 07:57:14.743565 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.744194 kubelet[2815]: W0711 07:57:14.743576 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.744194 kubelet[2815]: E0711 07:57:14.743735 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.744194 kubelet[2815]: W0711 07:57:14.743745 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.744194 kubelet[2815]: E0711 07:57:14.743756 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.744194 kubelet[2815]: E0711 07:57:14.744010 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.744194 kubelet[2815]: W0711 07:57:14.744021 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.744194 kubelet[2815]: E0711 07:57:14.744031 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.745027 kubelet[2815]: E0711 07:57:14.744838 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.745027 kubelet[2815]: W0711 07:57:14.744860 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.745027 kubelet[2815]: E0711 07:57:14.744878 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.745299 kubelet[2815]: E0711 07:57:14.745283 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.745378 kubelet[2815]: W0711 07:57:14.745363 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.745525 kubelet[2815]: E0711 07:57:14.745457 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.746404 kubelet[2815]: E0711 07:57:14.745842 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.746404 kubelet[2815]: W0711 07:57:14.745862 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.746404 kubelet[2815]: E0711 07:57:14.745878 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.746404 kubelet[2815]: E0711 07:57:14.745945 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.746617 kubelet[2815]: E0711 07:57:14.746438 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.746617 kubelet[2815]: W0711 07:57:14.746450 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.746617 kubelet[2815]: E0711 07:57:14.746462 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.747220 kubelet[2815]: E0711 07:57:14.747192 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.747220 kubelet[2815]: W0711 07:57:14.747207 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.747220 kubelet[2815]: E0711 07:57:14.747226 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:14.748234 kubelet[2815]: E0711 07:57:14.748207 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:14.748234 kubelet[2815]: W0711 07:57:14.748226 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:14.748366 kubelet[2815]: E0711 07:57:14.748238 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:15.624647 kubelet[2815]: I0711 07:57:15.624545 2815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 11 07:57:15.678736 kubelet[2815]: E0711 07:57:15.678515 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:15.678736 kubelet[2815]: W0711 07:57:15.678570 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:15.678736 kubelet[2815]: E0711 07:57:15.678646 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:15.679542 kubelet[2815]: E0711 07:57:15.679491 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:15.679542 kubelet[2815]: W0711 07:57:15.679527 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:15.679813 kubelet[2815]: E0711 07:57:15.679554 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:15.679946 kubelet[2815]: E0711 07:57:15.679896 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:15.679946 kubelet[2815]: W0711 07:57:15.679931 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:15.680307 kubelet[2815]: E0711 07:57:15.679956 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:15.680467 kubelet[2815]: E0711 07:57:15.680349 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:15.680467 kubelet[2815]: W0711 07:57:15.680374 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:15.680467 kubelet[2815]: E0711 07:57:15.680397 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:15.680845 kubelet[2815]: E0711 07:57:15.680743 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:15.680845 kubelet[2815]: W0711 07:57:15.680766 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:15.680845 kubelet[2815]: E0711 07:57:15.680788 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 07:57:15.681389 kubelet[2815]: E0711 07:57:15.681154 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:57:15.681389 kubelet[2815]: W0711 07:57:15.681179 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:57:15.681389 kubelet[2815]: E0711 07:57:15.681202 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 11 07:57:15.681868 kubelet[2815]: E0711 07:57:15.681509 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.681868 kubelet[2815]: W0711 07:57:15.681532 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.681868 kubelet[2815]: E0711 07:57:15.681554 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:57:15.682449 kubelet[2815]: E0711 07:57:15.681909 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.682449 kubelet[2815]: W0711 07:57:15.681933 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.682449 kubelet[2815]: E0711 07:57:15.681956 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 07:57:15.682449 kubelet[2815]: E0711 07:57:15.682348 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.682449 kubelet[2815]: W0711 07:57:15.682371 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.682449 kubelet[2815]: E0711 07:57:15.682394 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:57:15.683158 kubelet[2815]: E0711 07:57:15.682699 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.683158 kubelet[2815]: W0711 07:57:15.682738 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.683158 kubelet[2815]: E0711 07:57:15.682762 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 07:57:15.683158 kubelet[2815]: E0711 07:57:15.683120 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.683158 kubelet[2815]: W0711 07:57:15.683145 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.683158 kubelet[2815]: E0711 07:57:15.683167 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:57:15.683994 kubelet[2815]: E0711 07:57:15.683467 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.683994 kubelet[2815]: W0711 07:57:15.683491 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.683994 kubelet[2815]: E0711 07:57:15.683537 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 07:57:15.683994 kubelet[2815]: E0711 07:57:15.683849 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.683994 kubelet[2815]: W0711 07:57:15.683872 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.684786 kubelet[2815]: E0711 07:57:15.684008 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:57:15.684786 kubelet[2815]: E0711 07:57:15.684409 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.684786 kubelet[2815]: W0711 07:57:15.684434 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.684786 kubelet[2815]: E0711 07:57:15.684456 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 07:57:15.684786 kubelet[2815]: E0711 07:57:15.684761 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.684786 kubelet[2815]: W0711 07:57:15.684784 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.686438 kubelet[2815]: E0711 07:57:15.684806 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:57:15.747289 kubelet[2815]: E0711 07:57:15.747208 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.747289 kubelet[2815]: W0711 07:57:15.747256 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.747289 kubelet[2815]: E0711 07:57:15.747293 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 07:57:15.747809 kubelet[2815]: E0711 07:57:15.747749 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.747809 kubelet[2815]: W0711 07:57:15.747788 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.748031 kubelet[2815]: E0711 07:57:15.747851 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:57:15.748439 kubelet[2815]: E0711 07:57:15.748376 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.748439 kubelet[2815]: W0711 07:57:15.748416 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.748439 kubelet[2815]: E0711 07:57:15.748453 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 07:57:15.748940 kubelet[2815]: E0711 07:57:15.748815 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.748940 kubelet[2815]: W0711 07:57:15.748841 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.748940 kubelet[2815]: E0711 07:57:15.748886 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:57:15.749538 kubelet[2815]: E0711 07:57:15.749256 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.749538 kubelet[2815]: W0711 07:57:15.749281 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.749538 kubelet[2815]: E0711 07:57:15.749344 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 07:57:15.750027 kubelet[2815]: E0711 07:57:15.749755 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.750027 kubelet[2815]: W0711 07:57:15.749779 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.750027 kubelet[2815]: E0711 07:57:15.749818 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:57:15.750653 kubelet[2815]: E0711 07:57:15.750267 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.750653 kubelet[2815]: W0711 07:57:15.750291 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.750653 kubelet[2815]: E0711 07:57:15.750406 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 07:57:15.750653 kubelet[2815]: E0711 07:57:15.750639 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.750653 kubelet[2815]: W0711 07:57:15.750663 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.752164 kubelet[2815]: E0711 07:57:15.750742 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:57:15.752164 kubelet[2815]: E0711 07:57:15.751136 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.752164 kubelet[2815]: W0711 07:57:15.751162 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.752164 kubelet[2815]: E0711 07:57:15.751287 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 07:57:15.752164 kubelet[2815]: E0711 07:57:15.751491 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.752164 kubelet[2815]: W0711 07:57:15.751514 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.752164 kubelet[2815]: E0711 07:57:15.751564 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:57:15.752164 kubelet[2815]: E0711 07:57:15.751867 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.752164 kubelet[2815]: W0711 07:57:15.751890 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.752164 kubelet[2815]: E0711 07:57:15.751915 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 07:57:15.752854 kubelet[2815]: E0711 07:57:15.752412 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.752854 kubelet[2815]: W0711 07:57:15.752437 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.752854 kubelet[2815]: E0711 07:57:15.752461 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:57:15.753404 kubelet[2815]: E0711 07:57:15.753133 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.753404 kubelet[2815]: W0711 07:57:15.753161 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.753404 kubelet[2815]: E0711 07:57:15.753269 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 07:57:15.753774 kubelet[2815]: E0711 07:57:15.753454 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.753774 kubelet[2815]: W0711 07:57:15.753479 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.753774 kubelet[2815]: E0711 07:57:15.753643 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:57:15.753997 kubelet[2815]: E0711 07:57:15.753925 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.753997 kubelet[2815]: W0711 07:57:15.753948 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.754581 kubelet[2815]: E0711 07:57:15.754008 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 07:57:15.754581 kubelet[2815]: E0711 07:57:15.754440 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.754581 kubelet[2815]: W0711 07:57:15.754467 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.754581 kubelet[2815]: E0711 07:57:15.754490 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:57:15.754953 kubelet[2815]: E0711 07:57:15.754874 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.754953 kubelet[2815]: W0711 07:57:15.754914 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.754953 kubelet[2815]: E0711 07:57:15.754937 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 07:57:15.756020 kubelet[2815]: E0711 07:57:15.755950 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:57:15.756020 kubelet[2815]: W0711 07:57:15.755995 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:57:15.756308 kubelet[2815]: E0711 07:57:15.756027 2815 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:57:16.393868 kubelet[2815]: E0711 07:57:16.392964 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v6t7t" podUID="13bef544-0afd-4c69-9e16-c9d26b0ce001" Jul 11 07:57:16.520645 containerd[1555]: time="2025-07-11T07:57:16.520537344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:57:16.522463 containerd[1555]: time="2025-07-11T07:57:16.522222283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 11 07:57:16.523861 containerd[1555]: time="2025-07-11T07:57:16.523812744Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:57:16.528052 containerd[1555]: time="2025-07-11T07:57:16.528012591Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:57:16.528893 containerd[1555]: time="2025-07-11T07:57:16.528854956Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 2.17444473s" Jul 11 07:57:16.528995 containerd[1555]: time="2025-07-11T07:57:16.528976764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 11 07:57:16.534932 containerd[1555]: time="2025-07-11T07:57:16.534344884Z" level=info msg="CreateContainer within sandbox \"11b8cdabc108e09818ec6a8e0d36d39cb25834b1cca058f1c63a3de2fb8b981a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 11 07:57:16.552297 containerd[1555]: time="2025-07-11T07:57:16.552252796Z" level=info msg="Container 713793ab69a3084035609a18373bb6e1fdba237fe883ff09e96f8222e8c71fe7: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:57:16.579801 containerd[1555]: time="2025-07-11T07:57:16.579758942Z" level=info msg="CreateContainer within sandbox \"11b8cdabc108e09818ec6a8e0d36d39cb25834b1cca058f1c63a3de2fb8b981a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"713793ab69a3084035609a18373bb6e1fdba237fe883ff09e96f8222e8c71fe7\"" Jul 11 07:57:16.581132 containerd[1555]: time="2025-07-11T07:57:16.581029737Z" level=info msg="StartContainer for \"713793ab69a3084035609a18373bb6e1fdba237fe883ff09e96f8222e8c71fe7\"" Jul 11 07:57:16.589181 containerd[1555]: 
time="2025-07-11T07:57:16.589101820Z" level=info msg="connecting to shim 713793ab69a3084035609a18373bb6e1fdba237fe883ff09e96f8222e8c71fe7" address="unix:///run/containerd/s/abadb2d6f685f51d9a4b1f761a64691ab1d91ea54fc49dfd4581b6c1b5b9eff3" protocol=ttrpc version=3 Jul 11 07:57:16.637297 systemd[1]: Started cri-containerd-713793ab69a3084035609a18373bb6e1fdba237fe883ff09e96f8222e8c71fe7.scope - libcontainer container 713793ab69a3084035609a18373bb6e1fdba237fe883ff09e96f8222e8c71fe7. Jul 11 07:57:16.721137 containerd[1555]: time="2025-07-11T07:57:16.721056914Z" level=info msg="StartContainer for \"713793ab69a3084035609a18373bb6e1fdba237fe883ff09e96f8222e8c71fe7\" returns successfully" Jul 11 07:57:16.733723 systemd[1]: cri-containerd-713793ab69a3084035609a18373bb6e1fdba237fe883ff09e96f8222e8c71fe7.scope: Deactivated successfully. Jul 11 07:57:16.739727 containerd[1555]: time="2025-07-11T07:57:16.739683719Z" level=info msg="TaskExit event in podsandbox handler container_id:\"713793ab69a3084035609a18373bb6e1fdba237fe883ff09e96f8222e8c71fe7\" id:\"713793ab69a3084035609a18373bb6e1fdba237fe883ff09e96f8222e8c71fe7\" pid:3515 exited_at:{seconds:1752220636 nanos:738154723}" Jul 11 07:57:16.739896 containerd[1555]: time="2025-07-11T07:57:16.739719967Z" level=info msg="received exit event container_id:\"713793ab69a3084035609a18373bb6e1fdba237fe883ff09e96f8222e8c71fe7\" id:\"713793ab69a3084035609a18373bb6e1fdba237fe883ff09e96f8222e8c71fe7\" pid:3515 exited_at:{seconds:1752220636 nanos:738154723}" Jul 11 07:57:16.775778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-713793ab69a3084035609a18373bb6e1fdba237fe883ff09e96f8222e8c71fe7-rootfs.mount: Deactivated successfully. 
Jul 11 07:57:17.464165 kubelet[2815]: I0711 07:57:17.464019 2815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 07:57:17.591701 kubelet[2815]: I0711 07:57:17.591398 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-855588b8c9-6kzcm" podStartSLOduration=4.494569472 podStartE2EDuration="8.591284314s" podCreationTimestamp="2025-07-11 07:57:09 +0000 UTC" firstStartedPulling="2025-07-11 07:57:10.255337419 +0000 UTC m=+21.095748254" lastFinishedPulling="2025-07-11 07:57:14.352052261 +0000 UTC m=+25.192463096" observedRunningTime="2025-07-11 07:57:14.647967038 +0000 UTC m=+25.488377873" watchObservedRunningTime="2025-07-11 07:57:17.591284314 +0000 UTC m=+28.431695269" Jul 11 07:57:18.394668 kubelet[2815]: E0711 07:57:18.394436 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v6t7t" podUID="13bef544-0afd-4c69-9e16-c9d26b0ce001" Jul 11 07:57:18.675504 containerd[1555]: time="2025-07-11T07:57:18.675019129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 11 07:57:20.393982 kubelet[2815]: E0711 07:57:20.393849 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v6t7t" podUID="13bef544-0afd-4c69-9e16-c9d26b0ce001" Jul 11 07:57:22.394196 kubelet[2815]: E0711 07:57:22.393545 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-v6t7t" podUID="13bef544-0afd-4c69-9e16-c9d26b0ce001" Jul 11 07:57:24.392555 kubelet[2815]: E0711 07:57:24.392466 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v6t7t" podUID="13bef544-0afd-4c69-9e16-c9d26b0ce001" Jul 11 07:57:25.591758 containerd[1555]: time="2025-07-11T07:57:25.591561269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:57:25.593765 containerd[1555]: time="2025-07-11T07:57:25.593727330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 11 07:57:25.596170 containerd[1555]: time="2025-07-11T07:57:25.596006790Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:57:25.605099 containerd[1555]: time="2025-07-11T07:57:25.603375795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:57:25.607512 containerd[1555]: time="2025-07-11T07:57:25.607434602Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 6.932243662s" Jul 11 07:57:25.607601 containerd[1555]: time="2025-07-11T07:57:25.607573694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" 
returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 11 07:57:25.623582 containerd[1555]: time="2025-07-11T07:57:25.623530615Z" level=info msg="CreateContainer within sandbox \"11b8cdabc108e09818ec6a8e0d36d39cb25834b1cca058f1c63a3de2fb8b981a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 11 07:57:25.652355 containerd[1555]: time="2025-07-11T07:57:25.652270465Z" level=info msg="Container 467a2a45e46bf8cac758d5264e3ea7b9d09c168187b544201f34695bec931296: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:57:25.689102 containerd[1555]: time="2025-07-11T07:57:25.688485166Z" level=info msg="CreateContainer within sandbox \"11b8cdabc108e09818ec6a8e0d36d39cb25834b1cca058f1c63a3de2fb8b981a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"467a2a45e46bf8cac758d5264e3ea7b9d09c168187b544201f34695bec931296\"" Jul 11 07:57:25.697703 containerd[1555]: time="2025-07-11T07:57:25.697645953Z" level=info msg="StartContainer for \"467a2a45e46bf8cac758d5264e3ea7b9d09c168187b544201f34695bec931296\"" Jul 11 07:57:25.704943 containerd[1555]: time="2025-07-11T07:57:25.704886047Z" level=info msg="connecting to shim 467a2a45e46bf8cac758d5264e3ea7b9d09c168187b544201f34695bec931296" address="unix:///run/containerd/s/abadb2d6f685f51d9a4b1f761a64691ab1d91ea54fc49dfd4581b6c1b5b9eff3" protocol=ttrpc version=3 Jul 11 07:57:25.915390 systemd[1]: Started cri-containerd-467a2a45e46bf8cac758d5264e3ea7b9d09c168187b544201f34695bec931296.scope - libcontainer container 467a2a45e46bf8cac758d5264e3ea7b9d09c168187b544201f34695bec931296. 
Jul 11 07:57:26.021172 containerd[1555]: time="2025-07-11T07:57:26.021112421Z" level=info msg="StartContainer for \"467a2a45e46bf8cac758d5264e3ea7b9d09c168187b544201f34695bec931296\" returns successfully" Jul 11 07:57:26.425130 kubelet[2815]: E0711 07:57:26.395138 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v6t7t" podUID="13bef544-0afd-4c69-9e16-c9d26b0ce001" Jul 11 07:57:28.394462 kubelet[2815]: E0711 07:57:28.394167 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v6t7t" podUID="13bef544-0afd-4c69-9e16-c9d26b0ce001" Jul 11 07:57:29.142640 systemd[1]: cri-containerd-467a2a45e46bf8cac758d5264e3ea7b9d09c168187b544201f34695bec931296.scope: Deactivated successfully. Jul 11 07:57:29.146497 systemd[1]: cri-containerd-467a2a45e46bf8cac758d5264e3ea7b9d09c168187b544201f34695bec931296.scope: Consumed 2.327s CPU time, 190.6M memory peak, 171.2M written to disk. 
Jul 11 07:57:29.158894 containerd[1555]: time="2025-07-11T07:57:29.158024133Z" level=info msg="received exit event container_id:\"467a2a45e46bf8cac758d5264e3ea7b9d09c168187b544201f34695bec931296\" id:\"467a2a45e46bf8cac758d5264e3ea7b9d09c168187b544201f34695bec931296\" pid:3579 exited_at:{seconds:1752220649 nanos:155509286}"
Jul 11 07:57:29.161525 containerd[1555]: time="2025-07-11T07:57:29.160110948Z" level=info msg="TaskExit event in podsandbox handler container_id:\"467a2a45e46bf8cac758d5264e3ea7b9d09c168187b544201f34695bec931296\" id:\"467a2a45e46bf8cac758d5264e3ea7b9d09c168187b544201f34695bec931296\" pid:3579 exited_at:{seconds:1752220649 nanos:155509286}"
Jul 11 07:57:29.184524 kubelet[2815]: I0711 07:57:29.184410 2815 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 11 07:57:29.284835 systemd[1]: Created slice kubepods-burstable-pod5e2be15d_a767_4204_853f_1418d02bf473.slice - libcontainer container kubepods-burstable-pod5e2be15d_a767_4204_853f_1418d02bf473.slice.
Jul 11 07:57:29.310739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-467a2a45e46bf8cac758d5264e3ea7b9d09c168187b544201f34695bec931296-rootfs.mount: Deactivated successfully.
Jul 11 07:57:29.329278 kubelet[2815]: I0711 07:57:29.329201 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e2be15d-a767-4204-853f-1418d02bf473-config-volume\") pod \"coredns-7c65d6cfc9-kcdps\" (UID: \"5e2be15d-a767-4204-853f-1418d02bf473\") " pod="kube-system/coredns-7c65d6cfc9-kcdps"
Jul 11 07:57:29.423497 kubelet[2815]: I0711 07:57:29.329466 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56m7p\" (UniqueName: \"kubernetes.io/projected/5e2be15d-a767-4204-853f-1418d02bf473-kube-api-access-56m7p\") pod \"coredns-7c65d6cfc9-kcdps\" (UID: \"5e2be15d-a767-4204-853f-1418d02bf473\") " pod="kube-system/coredns-7c65d6cfc9-kcdps"
Jul 11 07:57:29.366689 systemd[1]: Created slice kubepods-burstable-pod71041969_ec0e_4843_8da4_8e08c776b239.slice - libcontainer container kubepods-burstable-pod71041969_ec0e_4843_8da4_8e08c776b239.slice.
Jul 11 07:57:29.379237 systemd[1]: Created slice kubepods-besteffort-podf5b380d0_0a39_4f70_a5a6_a5522aba6ee1.slice - libcontainer container kubepods-besteffort-podf5b380d0_0a39_4f70_a5a6_a5522aba6ee1.slice.
Jul 11 07:57:29.392636 systemd[1]: Created slice kubepods-besteffort-pode1ab0fc3_e967_489e_9eee_4deff0ad5164.slice - libcontainer container kubepods-besteffort-pode1ab0fc3_e967_489e_9eee_4deff0ad5164.slice.
Jul 11 07:57:29.403523 systemd[1]: Created slice kubepods-besteffort-pod5b2c28a6_7bbc_442a_bf7b_b9d5fc3dad7b.slice - libcontainer container kubepods-besteffort-pod5b2c28a6_7bbc_442a_bf7b_b9d5fc3dad7b.slice.
Jul 11 07:57:29.414779 systemd[1]: Created slice kubepods-besteffort-pod5a8ad1f1_2ef2_47cc_bb23_9b286c4c9212.slice - libcontainer container kubepods-besteffort-pod5a8ad1f1_2ef2_47cc_bb23_9b286c4c9212.slice.
Jul 11 07:57:29.429625 systemd[1]: Created slice kubepods-besteffort-pod970c5f99_d983_4086_a6e8_ffa128e9fa8a.slice - libcontainer container kubepods-besteffort-pod970c5f99_d983_4086_a6e8_ffa128e9fa8a.slice.
Jul 11 07:57:29.433105 kubelet[2815]: I0711 07:57:29.432314 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbhqb\" (UniqueName: \"kubernetes.io/projected/f5b380d0-0a39-4f70-a5a6-a5522aba6ee1-kube-api-access-qbhqb\") pod \"calico-kube-controllers-6f595d99cc-dlqmb\" (UID: \"f5b380d0-0a39-4f70-a5a6-a5522aba6ee1\") " pod="calico-system/calico-kube-controllers-6f595d99cc-dlqmb"
Jul 11 07:57:29.433105 kubelet[2815]: I0711 07:57:29.432366 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp427\" (UniqueName: \"kubernetes.io/projected/71041969-ec0e-4843-8da4-8e08c776b239-kube-api-access-kp427\") pod \"coredns-7c65d6cfc9-2gpzk\" (UID: \"71041969-ec0e-4843-8da4-8e08c776b239\") " pod="kube-system/coredns-7c65d6cfc9-2gpzk"
Jul 11 07:57:29.433105 kubelet[2815]: I0711 07:57:29.432390 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cttqq\" (UniqueName: \"kubernetes.io/projected/e1ab0fc3-e967-489e-9eee-4deff0ad5164-kube-api-access-cttqq\") pod \"goldmane-58fd7646b9-bfvnk\" (UID: \"e1ab0fc3-e967-489e-9eee-4deff0ad5164\") " pod="calico-system/goldmane-58fd7646b9-bfvnk"
Jul 11 07:57:29.433105 kubelet[2815]: I0711 07:57:29.432442 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/970c5f99-d983-4086-a6e8-ffa128e9fa8a-whisker-backend-key-pair\") pod \"whisker-55cb46c9d6-qjfxm\" (UID: \"970c5f99-d983-4086-a6e8-ffa128e9fa8a\") " pod="calico-system/whisker-55cb46c9d6-qjfxm"
Jul 11 07:57:29.433105 kubelet[2815]: I0711 07:57:29.432463 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqm8b\" (UniqueName: \"kubernetes.io/projected/970c5f99-d983-4086-a6e8-ffa128e9fa8a-kube-api-access-sqm8b\") pod \"whisker-55cb46c9d6-qjfxm\" (UID: \"970c5f99-d983-4086-a6e8-ffa128e9fa8a\") " pod="calico-system/whisker-55cb46c9d6-qjfxm"
Jul 11 07:57:29.433395 kubelet[2815]: I0711 07:57:29.432493 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5b2c28a6-7bbc-442a-bf7b-b9d5fc3dad7b-calico-apiserver-certs\") pod \"calico-apiserver-5f8586745b-56tns\" (UID: \"5b2c28a6-7bbc-442a-bf7b-b9d5fc3dad7b\") " pod="calico-apiserver/calico-apiserver-5f8586745b-56tns"
Jul 11 07:57:29.433395 kubelet[2815]: I0711 07:57:29.433294 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71041969-ec0e-4843-8da4-8e08c776b239-config-volume\") pod \"coredns-7c65d6cfc9-2gpzk\" (UID: \"71041969-ec0e-4843-8da4-8e08c776b239\") " pod="kube-system/coredns-7c65d6cfc9-2gpzk"
Jul 11 07:57:29.433934 kubelet[2815]: I0711 07:57:29.433907 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2ts8\" (UniqueName: \"kubernetes.io/projected/5b2c28a6-7bbc-442a-bf7b-b9d5fc3dad7b-kube-api-access-n2ts8\") pod \"calico-apiserver-5f8586745b-56tns\" (UID: \"5b2c28a6-7bbc-442a-bf7b-b9d5fc3dad7b\") " pod="calico-apiserver/calico-apiserver-5f8586745b-56tns"
Jul 11 07:57:29.434097 kubelet[2815]: I0711 07:57:29.434051 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5a8ad1f1-2ef2-47cc-bb23-9b286c4c9212-calico-apiserver-certs\") pod \"calico-apiserver-5f8586745b-b7nf2\" (UID: \"5a8ad1f1-2ef2-47cc-bb23-9b286c4c9212\") " pod="calico-apiserver/calico-apiserver-5f8586745b-b7nf2"
Jul 11 07:57:29.434237 kubelet[2815]: I0711 07:57:29.434216 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tvbt\" (UniqueName: \"kubernetes.io/projected/5a8ad1f1-2ef2-47cc-bb23-9b286c4c9212-kube-api-access-4tvbt\") pod \"calico-apiserver-5f8586745b-b7nf2\" (UID: \"5a8ad1f1-2ef2-47cc-bb23-9b286c4c9212\") " pod="calico-apiserver/calico-apiserver-5f8586745b-b7nf2"
Jul 11 07:57:29.434339 kubelet[2815]: I0711 07:57:29.434324 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5b380d0-0a39-4f70-a5a6-a5522aba6ee1-tigera-ca-bundle\") pod \"calico-kube-controllers-6f595d99cc-dlqmb\" (UID: \"f5b380d0-0a39-4f70-a5a6-a5522aba6ee1\") " pod="calico-system/calico-kube-controllers-6f595d99cc-dlqmb"
Jul 11 07:57:29.434450 kubelet[2815]: I0711 07:57:29.434424 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1ab0fc3-e967-489e-9eee-4deff0ad5164-config\") pod \"goldmane-58fd7646b9-bfvnk\" (UID: \"e1ab0fc3-e967-489e-9eee-4deff0ad5164\") " pod="calico-system/goldmane-58fd7646b9-bfvnk"
Jul 11 07:57:29.434538 kubelet[2815]: I0711 07:57:29.434523 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1ab0fc3-e967-489e-9eee-4deff0ad5164-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-bfvnk\" (UID: \"e1ab0fc3-e967-489e-9eee-4deff0ad5164\") " pod="calico-system/goldmane-58fd7646b9-bfvnk"
Jul 11 07:57:29.434652 kubelet[2815]: I0711 07:57:29.434629 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e1ab0fc3-e967-489e-9eee-4deff0ad5164-goldmane-key-pair\") pod \"goldmane-58fd7646b9-bfvnk\" (UID: \"e1ab0fc3-e967-489e-9eee-4deff0ad5164\") " pod="calico-system/goldmane-58fd7646b9-bfvnk"
Jul 11 07:57:29.434751 kubelet[2815]: I0711 07:57:29.434736 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/970c5f99-d983-4086-a6e8-ffa128e9fa8a-whisker-ca-bundle\") pod \"whisker-55cb46c9d6-qjfxm\" (UID: \"970c5f99-d983-4086-a6e8-ffa128e9fa8a\") " pod="calico-system/whisker-55cb46c9d6-qjfxm"
Jul 11 07:57:29.733377 containerd[1555]: time="2025-07-11T07:57:29.732995355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f8586745b-b7nf2,Uid:5a8ad1f1-2ef2-47cc-bb23-9b286c4c9212,Namespace:calico-apiserver,Attempt:0,}"
Jul 11 07:57:29.734134 containerd[1555]: time="2025-07-11T07:57:29.733011558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f8586745b-56tns,Uid:5b2c28a6-7bbc-442a-bf7b-b9d5fc3dad7b,Namespace:calico-apiserver,Attempt:0,}"
Jul 11 07:57:29.735920 containerd[1555]: time="2025-07-11T07:57:29.735657889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kcdps,Uid:5e2be15d-a767-4204-853f-1418d02bf473,Namespace:kube-system,Attempt:0,}"
Jul 11 07:57:29.736337 containerd[1555]: time="2025-07-11T07:57:29.736302506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f595d99cc-dlqmb,Uid:f5b380d0-0a39-4f70-a5a6-a5522aba6ee1,Namespace:calico-system,Attempt:0,}"
Jul 11 07:57:29.737594 containerd[1555]: time="2025-07-11T07:57:29.736677829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-bfvnk,Uid:e1ab0fc3-e967-489e-9eee-4deff0ad5164,Namespace:calico-system,Attempt:0,}"
Jul 11 07:57:29.737839 containerd[1555]: time="2025-07-11T07:57:29.737811860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2gpzk,Uid:71041969-ec0e-4843-8da4-8e08c776b239,Namespace:kube-system,Attempt:0,}"
Jul 11 07:57:29.741796 containerd[1555]: time="2025-07-11T07:57:29.741760329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55cb46c9d6-qjfxm,Uid:970c5f99-d983-4086-a6e8-ffa128e9fa8a,Namespace:calico-system,Attempt:0,}"
Jul 11 07:57:29.817370 containerd[1555]: time="2025-07-11T07:57:29.817299631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Jul 11 07:57:30.115978 containerd[1555]: time="2025-07-11T07:57:30.115525982Z" level=error msg="Failed to destroy network for sandbox \"93f111804a354e52cda9d35efcfea1d877ef58f28dd254f90f84231e0f2af4d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:30.115978 containerd[1555]: time="2025-07-11T07:57:30.115841394Z" level=error msg="Failed to destroy network for sandbox \"85154bd81557d1aa53d7e15db5ba82d8e7572b57b4e35a33ec5377208a2fce7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:30.122105 containerd[1555]: time="2025-07-11T07:57:30.122010427Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f8586745b-b7nf2,Uid:5a8ad1f1-2ef2-47cc-bb23-9b286c4c9212,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"93f111804a354e52cda9d35efcfea1d877ef58f28dd254f90f84231e0f2af4d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:30.123263 kubelet[2815]: E0711 07:57:30.123164 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93f111804a354e52cda9d35efcfea1d877ef58f28dd254f90f84231e0f2af4d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:30.123828 containerd[1555]: time="2025-07-11T07:57:30.123746751Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f595d99cc-dlqmb,Uid:f5b380d0-0a39-4f70-a5a6-a5522aba6ee1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"85154bd81557d1aa53d7e15db5ba82d8e7572b57b4e35a33ec5377208a2fce7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:30.124057 kubelet[2815]: E0711 07:57:30.124020 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85154bd81557d1aa53d7e15db5ba82d8e7572b57b4e35a33ec5377208a2fce7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:30.125125 kubelet[2815]: E0711 07:57:30.124329 2815 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85154bd81557d1aa53d7e15db5ba82d8e7572b57b4e35a33ec5377208a2fce7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f595d99cc-dlqmb"
Jul 11 07:57:30.125125 kubelet[2815]: E0711 07:57:30.124397 2815 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85154bd81557d1aa53d7e15db5ba82d8e7572b57b4e35a33ec5377208a2fce7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f595d99cc-dlqmb"
Jul 11 07:57:30.125321 kubelet[2815]: E0711 07:57:30.125251 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f595d99cc-dlqmb_calico-system(f5b380d0-0a39-4f70-a5a6-a5522aba6ee1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f595d99cc-dlqmb_calico-system(f5b380d0-0a39-4f70-a5a6-a5522aba6ee1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85154bd81557d1aa53d7e15db5ba82d8e7572b57b4e35a33ec5377208a2fce7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f595d99cc-dlqmb" podUID="f5b380d0-0a39-4f70-a5a6-a5522aba6ee1"
Jul 11 07:57:30.131413 kubelet[2815]: E0711 07:57:30.131341 2815 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93f111804a354e52cda9d35efcfea1d877ef58f28dd254f90f84231e0f2af4d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f8586745b-b7nf2"
Jul 11 07:57:30.131813 kubelet[2815]: E0711 07:57:30.131635 2815 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93f111804a354e52cda9d35efcfea1d877ef58f28dd254f90f84231e0f2af4d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f8586745b-b7nf2"
Jul 11 07:57:30.131960 kubelet[2815]: E0711 07:57:30.131908 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f8586745b-b7nf2_calico-apiserver(5a8ad1f1-2ef2-47cc-bb23-9b286c4c9212)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f8586745b-b7nf2_calico-apiserver(5a8ad1f1-2ef2-47cc-bb23-9b286c4c9212)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93f111804a354e52cda9d35efcfea1d877ef58f28dd254f90f84231e0f2af4d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f8586745b-b7nf2" podUID="5a8ad1f1-2ef2-47cc-bb23-9b286c4c9212"
Jul 11 07:57:30.143552 containerd[1555]: time="2025-07-11T07:57:30.143359546Z" level=error msg="Failed to destroy network for sandbox \"40c979398af21328fa9495b0d32a6ebb520a56e46a3e89a706d0df2c9c9a9842\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:30.144952 containerd[1555]: time="2025-07-11T07:57:30.144291787Z" level=error msg="Failed to destroy network for sandbox \"41f477a0807fc105402f9938e2002fc36f8298ebc515897cf60f3d6e1b48db95\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:30.145269 containerd[1555]: time="2025-07-11T07:57:30.145202985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f8586745b-56tns,Uid:5b2c28a6-7bbc-442a-bf7b-b9d5fc3dad7b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"40c979398af21328fa9495b0d32a6ebb520a56e46a3e89a706d0df2c9c9a9842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:30.145666 kubelet[2815]: E0711 07:57:30.145630 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40c979398af21328fa9495b0d32a6ebb520a56e46a3e89a706d0df2c9c9a9842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:30.146182 kubelet[2815]: E0711 07:57:30.146152 2815 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40c979398af21328fa9495b0d32a6ebb520a56e46a3e89a706d0df2c9c9a9842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f8586745b-56tns"
Jul 11 07:57:30.146457 kubelet[2815]: E0711 07:57:30.146322 2815 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40c979398af21328fa9495b0d32a6ebb520a56e46a3e89a706d0df2c9c9a9842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f8586745b-56tns"
Jul 11 07:57:30.146730 containerd[1555]: time="2025-07-11T07:57:30.146653206Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kcdps,Uid:5e2be15d-a767-4204-853f-1418d02bf473,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"41f477a0807fc105402f9938e2002fc36f8298ebc515897cf60f3d6e1b48db95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:30.147315 kubelet[2815]: E0711 07:57:30.147253 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f8586745b-56tns_calico-apiserver(5b2c28a6-7bbc-442a-bf7b-b9d5fc3dad7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f8586745b-56tns_calico-apiserver(5b2c28a6-7bbc-442a-bf7b-b9d5fc3dad7b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40c979398af21328fa9495b0d32a6ebb520a56e46a3e89a706d0df2c9c9a9842\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f8586745b-56tns" podUID="5b2c28a6-7bbc-442a-bf7b-b9d5fc3dad7b"
Jul 11 07:57:30.149872 kubelet[2815]: E0711 07:57:30.148650 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41f477a0807fc105402f9938e2002fc36f8298ebc515897cf60f3d6e1b48db95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:30.149872 kubelet[2815]: E0711 07:57:30.149471 2815 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41f477a0807fc105402f9938e2002fc36f8298ebc515897cf60f3d6e1b48db95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kcdps"
Jul 11 07:57:30.149872 kubelet[2815]: E0711 07:57:30.149556 2815 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41f477a0807fc105402f9938e2002fc36f8298ebc515897cf60f3d6e1b48db95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kcdps"
Jul 11 07:57:30.150474 kubelet[2815]: E0711 07:57:30.150228 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-kcdps_kube-system(5e2be15d-a767-4204-853f-1418d02bf473)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-kcdps_kube-system(5e2be15d-a767-4204-853f-1418d02bf473)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41f477a0807fc105402f9938e2002fc36f8298ebc515897cf60f3d6e1b48db95\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kcdps" podUID="5e2be15d-a767-4204-853f-1418d02bf473"
Jul 11 07:57:30.150939 containerd[1555]: time="2025-07-11T07:57:30.150811963Z" level=error msg="Failed to destroy network for sandbox \"c1e9fda3de4d03b0eddc20de33684be41565ad5b440fb5908278dd4e49b365de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:30.152636 containerd[1555]: time="2025-07-11T07:57:30.152549620Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-bfvnk,Uid:e1ab0fc3-e967-489e-9eee-4deff0ad5164,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1e9fda3de4d03b0eddc20de33684be41565ad5b440fb5908278dd4e49b365de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:30.153118 kubelet[2815]: E0711 07:57:30.153000 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1e9fda3de4d03b0eddc20de33684be41565ad5b440fb5908278dd4e49b365de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:30.155284 kubelet[2815]: E0711 07:57:30.155240 2815 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1e9fda3de4d03b0eddc20de33684be41565ad5b440fb5908278dd4e49b365de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-bfvnk"
Jul 11 07:57:30.155404 kubelet[2815]: E0711 07:57:30.155382 2815 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1e9fda3de4d03b0eddc20de33684be41565ad5b440fb5908278dd4e49b365de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-bfvnk"
Jul 11 07:57:30.155540 kubelet[2815]: E0711 07:57:30.155511 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-bfvnk_calico-system(e1ab0fc3-e967-489e-9eee-4deff0ad5164)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-bfvnk_calico-system(e1ab0fc3-e967-489e-9eee-4deff0ad5164)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1e9fda3de4d03b0eddc20de33684be41565ad5b440fb5908278dd4e49b365de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-bfvnk" podUID="e1ab0fc3-e967-489e-9eee-4deff0ad5164"
Jul 11 07:57:30.177684 containerd[1555]: time="2025-07-11T07:57:30.177597754Z" level=error msg="Failed to destroy network for sandbox \"b3a83f0cb53000f792c771c377ba06547382c580f3470434a59e307943897b21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:30.179694 containerd[1555]: time="2025-07-11T07:57:30.179649151Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2gpzk,Uid:71041969-ec0e-4843-8da4-8e08c776b239,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3a83f0cb53000f792c771c377ba06547382c580f3470434a59e307943897b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:30.181984 kubelet[2815]: E0711 07:57:30.179901 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3a83f0cb53000f792c771c377ba06547382c580f3470434a59e307943897b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:30.181984 kubelet[2815]: E0711 07:57:30.179972 2815 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3a83f0cb53000f792c771c377ba06547382c580f3470434a59e307943897b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2gpzk"
Jul 11 07:57:30.181984 kubelet[2815]: E0711 07:57:30.179996 2815 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3a83f0cb53000f792c771c377ba06547382c580f3470434a59e307943897b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2gpzk"
Jul 11 07:57:30.181984 kubelet[2815]: E0711 07:57:30.180043 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-2gpzk_kube-system(71041969-ec0e-4843-8da4-8e08c776b239)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-2gpzk_kube-system(71041969-ec0e-4843-8da4-8e08c776b239)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3a83f0cb53000f792c771c377ba06547382c580f3470434a59e307943897b21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2gpzk" podUID="71041969-ec0e-4843-8da4-8e08c776b239"
Jul 11 07:57:30.184104 containerd[1555]: time="2025-07-11T07:57:30.184015785Z" level=error msg="Failed to destroy network for sandbox \"f78c1c9b5b4f78de3f8377befad426890a4999261d750cf8e62c278ddf6ca76b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:30.185722 containerd[1555]: time="2025-07-11T07:57:30.185672240Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55cb46c9d6-qjfxm,Uid:970c5f99-d983-4086-a6e8-ffa128e9fa8a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f78c1c9b5b4f78de3f8377befad426890a4999261d750cf8e62c278ddf6ca76b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:30.186224 kubelet[2815]: E0711 07:57:30.186056 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f78c1c9b5b4f78de3f8377befad426890a4999261d750cf8e62c278ddf6ca76b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:30.186417 kubelet[2815]: E0711 07:57:30.186351 2815 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f78c1c9b5b4f78de3f8377befad426890a4999261d750cf8e62c278ddf6ca76b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55cb46c9d6-qjfxm"
Jul 11 07:57:30.186417 kubelet[2815]: E0711 07:57:30.186383 2815 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f78c1c9b5b4f78de3f8377befad426890a4999261d750cf8e62c278ddf6ca76b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55cb46c9d6-qjfxm"
Jul 11 07:57:30.186693 kubelet[2815]: E0711 07:57:30.186570 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-55cb46c9d6-qjfxm_calico-system(970c5f99-d983-4086-a6e8-ffa128e9fa8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-55cb46c9d6-qjfxm_calico-system(970c5f99-d983-4086-a6e8-ffa128e9fa8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f78c1c9b5b4f78de3f8377befad426890a4999261d750cf8e62c278ddf6ca76b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-55cb46c9d6-qjfxm" podUID="970c5f99-d983-4086-a6e8-ffa128e9fa8a"
Jul 11 07:57:30.408645 systemd[1]: Created slice kubepods-besteffort-pod13bef544_0afd_4c69_9e16_c9d26b0ce001.slice - libcontainer container kubepods-besteffort-pod13bef544_0afd_4c69_9e16_c9d26b0ce001.slice.
Jul 11 07:57:30.419430 containerd[1555]: time="2025-07-11T07:57:30.419339394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v6t7t,Uid:13bef544-0afd-4c69-9e16-c9d26b0ce001,Namespace:calico-system,Attempt:0,}"
Jul 11 07:57:30.526143 containerd[1555]: time="2025-07-11T07:57:30.524298013Z" level=error msg="Failed to destroy network for sandbox \"8e3de1bc16bf4cf686fdefa0698f854f8d3ed15e1658bd06a58ee3318490363b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:30.529164 containerd[1555]: time="2025-07-11T07:57:30.528908066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v6t7t,Uid:13bef544-0afd-4c69-9e16-c9d26b0ce001,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e3de1bc16bf4cf686fdefa0698f854f8d3ed15e1658bd06a58ee3318490363b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:30.529564 kubelet[2815]: E0711 07:57:30.529516 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e3de1bc16bf4cf686fdefa0698f854f8d3ed15e1658bd06a58ee3318490363b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:30.530688 kubelet[2815]: E0711 07:57:30.530009 2815 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e3de1bc16bf4cf686fdefa0698f854f8d3ed15e1658bd06a58ee3318490363b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v6t7t"
Jul 11 07:57:30.530688 kubelet[2815]: E0711 07:57:30.530044 2815 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e3de1bc16bf4cf686fdefa0698f854f8d3ed15e1658bd06a58ee3318490363b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v6t7t"
Jul 11 07:57:30.530672 systemd[1]: run-netns-cni\x2d105d3103\x2dd179\x2d7ffc\x2da796\x2d5541f4f8b59b.mount: Deactivated successfully.
Jul 11 07:57:30.531571 kubelet[2815]: E0711 07:57:30.531038 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-v6t7t_calico-system(13bef544-0afd-4c69-9e16-c9d26b0ce001)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v6t7t_calico-system(13bef544-0afd-4c69-9e16-c9d26b0ce001)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e3de1bc16bf4cf686fdefa0698f854f8d3ed15e1658bd06a58ee3318490363b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v6t7t" podUID="13bef544-0afd-4c69-9e16-c9d26b0ce001"
Jul 11 07:57:42.158377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount76181832.mount: Deactivated successfully.
Jul 11 07:57:42.201110 containerd[1555]: time="2025-07-11T07:57:42.200835701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 07:57:42.203616 containerd[1555]: time="2025-07-11T07:57:42.203530021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163"
Jul 11 07:57:42.205456 containerd[1555]: time="2025-07-11T07:57:42.205376440Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 07:57:42.209037 containerd[1555]: time="2025-07-11T07:57:42.208956405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 07:57:42.210049 containerd[1555]: time="2025-07-11T07:57:42.209586587Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 12.38759441s"
Jul 11 07:57:42.210049 containerd[1555]: time="2025-07-11T07:57:42.209633279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\""
Jul 11 07:57:42.237246 containerd[1555]: time="2025-07-11T07:57:42.237160567Z" level=info msg="CreateContainer within sandbox \"11b8cdabc108e09818ec6a8e0d36d39cb25834b1cca058f1c63a3de2fb8b981a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jul 11 07:57:42.273682 containerd[1555]: time="2025-07-11T07:57:42.272283495Z" level=info msg="Container b64067bf9aa389ef32a0a42fccbd24435765f70308a9933d508c58040e2d23bc: CDI devices from CRI Config.CDIDevices: []"
Jul 11 07:57:42.294018 containerd[1555]: time="2025-07-11T07:57:42.293964431Z" level=info msg="CreateContainer within sandbox \"11b8cdabc108e09818ec6a8e0d36d39cb25834b1cca058f1c63a3de2fb8b981a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b64067bf9aa389ef32a0a42fccbd24435765f70308a9933d508c58040e2d23bc\""
Jul 11 07:57:42.295094 containerd[1555]: time="2025-07-11T07:57:42.295025733Z" level=info msg="StartContainer for \"b64067bf9aa389ef32a0a42fccbd24435765f70308a9933d508c58040e2d23bc\""
Jul 11 07:57:42.299091 containerd[1555]: time="2025-07-11T07:57:42.298979655Z" level=info msg="connecting to shim b64067bf9aa389ef32a0a42fccbd24435765f70308a9933d508c58040e2d23bc" address="unix:///run/containerd/s/abadb2d6f685f51d9a4b1f761a64691ab1d91ea54fc49dfd4581b6c1b5b9eff3" protocol=ttrpc version=3
Jul 11 07:57:42.399700 containerd[1555]: time="2025-07-11T07:57:42.398480663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f8586745b-b7nf2,Uid:5a8ad1f1-2ef2-47cc-bb23-9b286c4c9212,Namespace:calico-apiserver,Attempt:0,}"
Jul 11 07:57:42.419635 systemd[1]: Started cri-containerd-b64067bf9aa389ef32a0a42fccbd24435765f70308a9933d508c58040e2d23bc.scope - libcontainer container b64067bf9aa389ef32a0a42fccbd24435765f70308a9933d508c58040e2d23bc.
Jul 11 07:57:42.545113 containerd[1555]: time="2025-07-11T07:57:42.545045430Z" level=info msg="StartContainer for \"b64067bf9aa389ef32a0a42fccbd24435765f70308a9933d508c58040e2d23bc\" returns successfully"
Jul 11 07:57:42.607413 containerd[1555]: time="2025-07-11T07:57:42.607332801Z" level=error msg="Failed to destroy network for sandbox \"7c4e620b2ba6222d76519909e7030871042a0085dae0040e02aea0960ca67ccc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:42.610129 containerd[1555]: time="2025-07-11T07:57:42.610000888Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f8586745b-b7nf2,Uid:5a8ad1f1-2ef2-47cc-bb23-9b286c4c9212,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c4e620b2ba6222d76519909e7030871042a0085dae0040e02aea0960ca67ccc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:42.611821 kubelet[2815]: E0711 07:57:42.610926 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c4e620b2ba6222d76519909e7030871042a0085dae0040e02aea0960ca67ccc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 11 07:57:42.611821 kubelet[2815]: E0711 07:57:42.611266 2815 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c4e620b2ba6222d76519909e7030871042a0085dae0040e02aea0960ca67ccc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f8586745b-b7nf2"
Jul 11 07:57:42.611821 kubelet[2815]: E0711 07:57:42.611368 2815 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c4e620b2ba6222d76519909e7030871042a0085dae0040e02aea0960ca67ccc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f8586745b-b7nf2"
Jul 11 07:57:42.615597 kubelet[2815]: E0711 07:57:42.615218 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f8586745b-b7nf2_calico-apiserver(5a8ad1f1-2ef2-47cc-bb23-9b286c4c9212)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f8586745b-b7nf2_calico-apiserver(5a8ad1f1-2ef2-47cc-bb23-9b286c4c9212)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c4e620b2ba6222d76519909e7030871042a0085dae0040e02aea0960ca67ccc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f8586745b-b7nf2" podUID="5a8ad1f1-2ef2-47cc-bb23-9b286c4c9212"
Jul 11 07:57:42.704392 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jul 11 07:57:42.704585 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Jul 11 07:57:42.907158 kubelet[2815]: I0711 07:57:42.906743 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-952qm" podStartSLOduration=2.07651231 podStartE2EDuration="33.906629699s" podCreationTimestamp="2025-07-11 07:57:09 +0000 UTC" firstStartedPulling="2025-07-11 07:57:10.381060274 +0000 UTC m=+21.221471109" lastFinishedPulling="2025-07-11 07:57:42.211177663 +0000 UTC m=+53.051588498" observedRunningTime="2025-07-11 07:57:42.902534929 +0000 UTC m=+53.742945785" watchObservedRunningTime="2025-07-11 07:57:42.906629699 +0000 UTC m=+53.747040534"
Jul 11 07:57:43.197681 kubelet[2815]: I0711 07:57:43.197611 2815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/970c5f99-d983-4086-a6e8-ffa128e9fa8a-whisker-backend-key-pair\") pod \"970c5f99-d983-4086-a6e8-ffa128e9fa8a\" (UID: \"970c5f99-d983-4086-a6e8-ffa128e9fa8a\") "
Jul 11 07:57:43.197854 kubelet[2815]: I0711 07:57:43.197697 2815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqm8b\" (UniqueName: \"kubernetes.io/projected/970c5f99-d983-4086-a6e8-ffa128e9fa8a-kube-api-access-sqm8b\") pod \"970c5f99-d983-4086-a6e8-ffa128e9fa8a\" (UID: \"970c5f99-d983-4086-a6e8-ffa128e9fa8a\") "
Jul 11 07:57:43.197854 kubelet[2815]: I0711 07:57:43.197743 2815 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/970c5f99-d983-4086-a6e8-ffa128e9fa8a-whisker-ca-bundle\") pod \"970c5f99-d983-4086-a6e8-ffa128e9fa8a\" (UID: \"970c5f99-d983-4086-a6e8-ffa128e9fa8a\") "
Jul 11 07:57:43.201058 kubelet[2815]: I0711 07:57:43.200236 2815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/970c5f99-d983-4086-a6e8-ffa128e9fa8a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "970c5f99-d983-4086-a6e8-ffa128e9fa8a" (UID: "970c5f99-d983-4086-a6e8-ffa128e9fa8a"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 11 07:57:43.209681 kubelet[2815]: I0711 07:57:43.209574 2815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970c5f99-d983-4086-a6e8-ffa128e9fa8a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "970c5f99-d983-4086-a6e8-ffa128e9fa8a" (UID: "970c5f99-d983-4086-a6e8-ffa128e9fa8a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 11 07:57:43.211156 systemd[1]: var-lib-kubelet-pods-970c5f99\x2dd983\x2d4086\x2da6e8\x2dffa128e9fa8a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Jul 11 07:57:43.216157 kubelet[2815]: I0711 07:57:43.216055 2815 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/970c5f99-d983-4086-a6e8-ffa128e9fa8a-kube-api-access-sqm8b" (OuterVolumeSpecName: "kube-api-access-sqm8b") pod "970c5f99-d983-4086-a6e8-ffa128e9fa8a" (UID: "970c5f99-d983-4086-a6e8-ffa128e9fa8a"). InnerVolumeSpecName "kube-api-access-sqm8b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 11 07:57:43.217557 systemd[1]: var-lib-kubelet-pods-970c5f99\x2dd983\x2d4086\x2da6e8\x2dffa128e9fa8a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsqm8b.mount: Deactivated successfully.
Jul 11 07:57:43.250318 containerd[1555]: time="2025-07-11T07:57:43.250256279Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b64067bf9aa389ef32a0a42fccbd24435765f70308a9933d508c58040e2d23bc\" id:\"fc633f91244da1fc697b00309975d7104652abf93a4230315410daa4a60bec11\" pid:3918 exit_status:1 exited_at:{seconds:1752220663 nanos:249392087}"
Jul 11 07:57:43.298629 kubelet[2815]: I0711 07:57:43.298555 2815 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/970c5f99-d983-4086-a6e8-ffa128e9fa8a-whisker-backend-key-pair\") on node \"ci-4392-0-0-n-91c7dbf1fc.novalocal\" DevicePath \"\""
Jul 11 07:57:43.298629 kubelet[2815]: I0711 07:57:43.298604 2815 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqm8b\" (UniqueName: \"kubernetes.io/projected/970c5f99-d983-4086-a6e8-ffa128e9fa8a-kube-api-access-sqm8b\") on node \"ci-4392-0-0-n-91c7dbf1fc.novalocal\" DevicePath \"\""
Jul 11 07:57:43.298629 kubelet[2815]: I0711 07:57:43.298617 2815 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/970c5f99-d983-4086-a6e8-ffa128e9fa8a-whisker-ca-bundle\") on node \"ci-4392-0-0-n-91c7dbf1fc.novalocal\" DevicePath \"\""
Jul 11 07:57:43.394253 containerd[1555]: time="2025-07-11T07:57:43.393655516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v6t7t,Uid:13bef544-0afd-4c69-9e16-c9d26b0ce001,Namespace:calico-system,Attempt:0,}"
Jul 11 07:57:43.394767 containerd[1555]: time="2025-07-11T07:57:43.394678579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kcdps,Uid:5e2be15d-a767-4204-853f-1418d02bf473,Namespace:kube-system,Attempt:0,}"
Jul 11 07:57:43.426488 systemd[1]: Removed slice kubepods-besteffort-pod970c5f99_d983_4086_a6e8_ffa128e9fa8a.slice - libcontainer container kubepods-besteffort-pod970c5f99_d983_4086_a6e8_ffa128e9fa8a.slice.
Jul 11 07:57:43.844892 systemd-networkd[1448]: cali4b4b13026d1: Link UP
Jul 11 07:57:43.846568 systemd-networkd[1448]: cali4b4b13026d1: Gained carrier
Jul 11 07:57:43.905103 containerd[1555]: 2025-07-11 07:57:43.476 [INFO][3943] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jul 11 07:57:43.905103 containerd[1555]: 2025-07-11 07:57:43.583 [INFO][3943] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-csi--node--driver--v6t7t-eth0 csi-node-driver- calico-system 13bef544-0afd-4c69-9e16-c9d26b0ce001 692 0 2025-07-11 07:57:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4392-0-0-n-91c7dbf1fc.novalocal csi-node-driver-v6t7t eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4b4b13026d1 [] [] }} ContainerID="ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050" Namespace="calico-system" Pod="csi-node-driver-v6t7t" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-csi--node--driver--v6t7t-"
Jul 11 07:57:43.905103 containerd[1555]: 2025-07-11 07:57:43.584 [INFO][3943] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050" Namespace="calico-system" Pod="csi-node-driver-v6t7t" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-csi--node--driver--v6t7t-eth0"
Jul 11 07:57:43.905103 containerd[1555]: 2025-07-11 07:57:43.707 [INFO][3973] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050" HandleID="k8s-pod-network.ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050" Workload="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-csi--node--driver--v6t7t-eth0"
Jul 11 07:57:43.905562 containerd[1555]: 2025-07-11 07:57:43.707 [INFO][3973] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050" HandleID="k8s-pod-network.ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050" Workload="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-csi--node--driver--v6t7t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003219d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4392-0-0-n-91c7dbf1fc.novalocal", "pod":"csi-node-driver-v6t7t", "timestamp":"2025-07-11 07:57:43.707057964 +0000 UTC"}, Hostname:"ci-4392-0-0-n-91c7dbf1fc.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 11 07:57:43.905562 containerd[1555]: 2025-07-11 07:57:43.708 [INFO][3973] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 07:57:43.905562 containerd[1555]: 2025-07-11 07:57:43.709 [INFO][3973] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 07:57:43.905562 containerd[1555]: 2025-07-11 07:57:43.709 [INFO][3973] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4392-0-0-n-91c7dbf1fc.novalocal'
Jul 11 07:57:43.905562 containerd[1555]: 2025-07-11 07:57:43.754 [INFO][3973] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050" host="ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:57:43.905562 containerd[1555]: 2025-07-11 07:57:43.767 [INFO][3973] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:57:43.905562 containerd[1555]: 2025-07-11 07:57:43.780 [INFO][3973] ipam/ipam.go 511: Trying affinity for 192.168.85.192/26 host="ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:57:43.905562 containerd[1555]: 2025-07-11 07:57:43.789 [INFO][3973] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.192/26 host="ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:57:43.905562 containerd[1555]: 2025-07-11 07:57:43.793 [INFO][3973] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.192/26 host="ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:57:43.905972 containerd[1555]: 2025-07-11 07:57:43.793 [INFO][3973] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.85.192/26 handle="k8s-pod-network.ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050" host="ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:57:43.905972 containerd[1555]: 2025-07-11 07:57:43.796 [INFO][3973] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050
Jul 11 07:57:43.905972 containerd[1555]: 2025-07-11 07:57:43.808 [INFO][3973] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.85.192/26 handle="k8s-pod-network.ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050" host="ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:57:43.905972 containerd[1555]: 2025-07-11 07:57:43.817 [INFO][3973] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.85.193/26] block=192.168.85.192/26 handle="k8s-pod-network.ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050" host="ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:57:43.905972 containerd[1555]: 2025-07-11 07:57:43.818 [INFO][3973] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.193/26] handle="k8s-pod-network.ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050" host="ci-4392-0-0-n-91c7dbf1fc.novalocal"
Jul 11 07:57:43.905972 containerd[1555]: 2025-07-11 07:57:43.818 [INFO][3973] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 07:57:43.905972 containerd[1555]: 2025-07-11 07:57:43.819 [INFO][3973] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.193/26] IPv6=[] ContainerID="ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050" HandleID="k8s-pod-network.ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050" Workload="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-csi--node--driver--v6t7t-eth0"
Jul 11 07:57:43.908263 containerd[1555]: 2025-07-11 07:57:43.827 [INFO][3943] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050" Namespace="calico-system" Pod="csi-node-driver-v6t7t" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-csi--node--driver--v6t7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-csi--node--driver--v6t7t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"13bef544-0afd-4c69-9e16-c9d26b0ce001", ResourceVersion:"692", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 57, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-91c7dbf1fc.novalocal", ContainerID:"", Pod:"csi-node-driver-v6t7t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.85.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4b4b13026d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 07:57:43.908377 containerd[1555]: 2025-07-11 07:57:43.827 [INFO][3943] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.193/32] ContainerID="ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050" Namespace="calico-system" Pod="csi-node-driver-v6t7t" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-csi--node--driver--v6t7t-eth0"
Jul 11 07:57:43.908377 containerd[1555]: 2025-07-11 07:57:43.827 [INFO][3943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b4b13026d1 ContainerID="ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050" Namespace="calico-system" Pod="csi-node-driver-v6t7t" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-csi--node--driver--v6t7t-eth0"
Jul 11 07:57:43.908377 containerd[1555]: 2025-07-11 07:57:43.847 [INFO][3943] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050" Namespace="calico-system" Pod="csi-node-driver-v6t7t" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-csi--node--driver--v6t7t-eth0"
Jul 11 07:57:43.908579 containerd[1555]: 2025-07-11 07:57:43.851 [INFO][3943] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050" Namespace="calico-system" Pod="csi-node-driver-v6t7t" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-csi--node--driver--v6t7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-csi--node--driver--v6t7t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"13bef544-0afd-4c69-9e16-c9d26b0ce001", ResourceVersion:"692", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 57, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-91c7dbf1fc.novalocal", ContainerID:"ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050", Pod:"csi-node-driver-v6t7t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.85.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4b4b13026d1", MAC:"6e:30:90:32:fe:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 07:57:43.908692 containerd[1555]: 2025-07-11 07:57:43.899 [INFO][3943] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050" Namespace="calico-system" Pod="csi-node-driver-v6t7t" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-csi--node--driver--v6t7t-eth0"
Jul 11 07:57:43.961633 systemd-networkd[1448]: cali21fdedc59f2: Link UP
Jul 11 07:57:43.964036 systemd-networkd[1448]: cali21fdedc59f2: Gained carrier
Jul 11 07:57:44.001998 containerd[1555]: 2025-07-11 07:57:43.520 [INFO][3951] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jul 11 07:57:44.001998 containerd[1555]: 2025-07-11 07:57:43.582 [INFO][3951] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--kcdps-eth0 coredns-7c65d6cfc9- kube-system 5e2be15d-a767-4204-853f-1418d02bf473 821 0 2025-07-11 07:56:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4392-0-0-n-91c7dbf1fc.novalocal coredns-7c65d6cfc9-kcdps eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali21fdedc59f2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kcdps" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--kcdps-"
Jul 11 07:57:44.001998 containerd[1555]: 2025-07-11 07:57:43.582 [INFO][3951] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kcdps" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--kcdps-eth0"
Jul 11 07:57:44.001998 containerd[1555]: 2025-07-11 07:57:43.750 [INFO][3975] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38" HandleID="k8s-pod-network.c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38" Workload="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--kcdps-eth0"
Jul 11 07:57:44.002497 containerd[1555]: 2025-07-11 07:57:43.751 [INFO][3975] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38" HandleID="k8s-pod-network.c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38" Workload="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--kcdps-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034c050), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4392-0-0-n-91c7dbf1fc.novalocal", "pod":"coredns-7c65d6cfc9-kcdps", "timestamp":"2025-07-11 07:57:43.750300177 +0000 UTC"}, Hostname:"ci-4392-0-0-n-91c7dbf1fc.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 11 07:57:44.002497 containerd[1555]: 2025-07-11 07:57:43.751 [INFO][3975] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 07:57:44.002497 containerd[1555]: 2025-07-11 07:57:43.818 [INFO][3975] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 07:57:44.002497 containerd[1555]: 2025-07-11 07:57:43.818 [INFO][3975] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4392-0-0-n-91c7dbf1fc.novalocal' Jul 11 07:57:44.002497 containerd[1555]: 2025-07-11 07:57:43.855 [INFO][3975] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:44.002497 containerd[1555]: 2025-07-11 07:57:43.875 [INFO][3975] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:44.002497 containerd[1555]: 2025-07-11 07:57:43.889 [INFO][3975] ipam/ipam.go 511: Trying affinity for 192.168.85.192/26 host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:44.002497 containerd[1555]: 2025-07-11 07:57:43.904 [INFO][3975] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.192/26 host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:44.002497 containerd[1555]: 2025-07-11 07:57:43.913 [INFO][3975] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.192/26 host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:44.002782 containerd[1555]: 2025-07-11 07:57:43.914 [INFO][3975] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.85.192/26 handle="k8s-pod-network.c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:44.002782 containerd[1555]: 2025-07-11 07:57:43.918 [INFO][3975] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38 Jul 11 07:57:44.002782 containerd[1555]: 2025-07-11 07:57:43.932 [INFO][3975] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.85.192/26 handle="k8s-pod-network.c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:44.002782 
containerd[1555]: 2025-07-11 07:57:43.941 [INFO][3975] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.85.194/26] block=192.168.85.192/26 handle="k8s-pod-network.c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:44.002782 containerd[1555]: 2025-07-11 07:57:43.944 [INFO][3975] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.194/26] handle="k8s-pod-network.c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:44.002782 containerd[1555]: 2025-07-11 07:57:43.944 [INFO][3975] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 07:57:44.002782 containerd[1555]: 2025-07-11 07:57:43.945 [INFO][3975] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.194/26] IPv6=[] ContainerID="c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38" HandleID="k8s-pod-network.c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38" Workload="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--kcdps-eth0" Jul 11 07:57:44.005881 containerd[1555]: 2025-07-11 07:57:43.951 [INFO][3951] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kcdps" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--kcdps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--kcdps-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5e2be15d-a767-4204-853f-1418d02bf473", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-91c7dbf1fc.novalocal", ContainerID:"", Pod:"coredns-7c65d6cfc9-kcdps", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali21fdedc59f2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:57:44.005881 containerd[1555]: 2025-07-11 07:57:43.952 [INFO][3951] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.194/32] ContainerID="c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kcdps" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--kcdps-eth0" Jul 11 07:57:44.005881 containerd[1555]: 2025-07-11 07:57:43.952 [INFO][3951] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21fdedc59f2 ContainerID="c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kcdps" 
WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--kcdps-eth0" Jul 11 07:57:44.005881 containerd[1555]: 2025-07-11 07:57:43.967 [INFO][3951] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kcdps" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--kcdps-eth0" Jul 11 07:57:44.005881 containerd[1555]: 2025-07-11 07:57:43.967 [INFO][3951] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kcdps" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--kcdps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--kcdps-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5e2be15d-a767-4204-853f-1418d02bf473", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-91c7dbf1fc.novalocal", ContainerID:"c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38", Pod:"coredns-7c65d6cfc9-kcdps", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.85.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali21fdedc59f2", MAC:"fe:f2:f6:32:0b:2e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:57:44.005881 containerd[1555]: 2025-07-11 07:57:43.995 [INFO][3951] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kcdps" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--kcdps-eth0" Jul 11 07:57:44.039931 containerd[1555]: time="2025-07-11T07:57:44.039813392Z" level=info msg="connecting to shim ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050" address="unix:///run/containerd/s/2395c68d1b80f39d3a6d501ce3c03a3337afa5aceb67a94b401f5c803939ca4f" namespace=k8s.io protocol=ttrpc version=3 Jul 11 07:57:44.091857 systemd[1]: Created slice kubepods-besteffort-podd5be36e8_b388_412f_aa6a_b5cc97f2edb3.slice - libcontainer container kubepods-besteffort-podd5be36e8_b388_412f_aa6a_b5cc97f2edb3.slice. 
Jul 11 07:57:44.115663 containerd[1555]: time="2025-07-11T07:57:44.115353927Z" level=info msg="connecting to shim c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38" address="unix:///run/containerd/s/00049385fe4ae65ef6c0e137182ab98a53a5a44892907d0b54204d1ece308533" namespace=k8s.io protocol=ttrpc version=3 Jul 11 07:57:44.148390 systemd[1]: Started cri-containerd-ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050.scope - libcontainer container ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050. Jul 11 07:57:44.190297 systemd[1]: Started cri-containerd-c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38.scope - libcontainer container c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38. Jul 11 07:57:44.209917 kubelet[2815]: I0711 07:57:44.209814 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d5be36e8-b388-412f-aa6a-b5cc97f2edb3-whisker-backend-key-pair\") pod \"whisker-78b896cb4b-hc965\" (UID: \"d5be36e8-b388-412f-aa6a-b5cc97f2edb3\") " pod="calico-system/whisker-78b896cb4b-hc965" Jul 11 07:57:44.209917 kubelet[2815]: I0711 07:57:44.209916 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5be36e8-b388-412f-aa6a-b5cc97f2edb3-whisker-ca-bundle\") pod \"whisker-78b896cb4b-hc965\" (UID: \"d5be36e8-b388-412f-aa6a-b5cc97f2edb3\") " pod="calico-system/whisker-78b896cb4b-hc965" Jul 11 07:57:44.210693 kubelet[2815]: I0711 07:57:44.209952 2815 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6cn8\" (UniqueName: \"kubernetes.io/projected/d5be36e8-b388-412f-aa6a-b5cc97f2edb3-kube-api-access-l6cn8\") pod \"whisker-78b896cb4b-hc965\" (UID: \"d5be36e8-b388-412f-aa6a-b5cc97f2edb3\") " 
pod="calico-system/whisker-78b896cb4b-hc965" Jul 11 07:57:44.249019 containerd[1555]: time="2025-07-11T07:57:44.248921346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v6t7t,Uid:13bef544-0afd-4c69-9e16-c9d26b0ce001,Namespace:calico-system,Attempt:0,} returns sandbox id \"ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050\"" Jul 11 07:57:44.254697 containerd[1555]: time="2025-07-11T07:57:44.254567075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 11 07:57:44.280904 containerd[1555]: time="2025-07-11T07:57:44.280747582Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b64067bf9aa389ef32a0a42fccbd24435765f70308a9933d508c58040e2d23bc\" id:\"7b39d115a0db6e52ff4e7cb1020593b6d6837fdb50c152a04584f6c316a8ab60\" pid:4007 exit_status:1 exited_at:{seconds:1752220664 nanos:279666849}" Jul 11 07:57:44.337001 containerd[1555]: time="2025-07-11T07:57:44.336946704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kcdps,Uid:5e2be15d-a767-4204-853f-1418d02bf473,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38\"" Jul 11 07:57:44.345128 containerd[1555]: time="2025-07-11T07:57:44.344896022Z" level=info msg="CreateContainer within sandbox \"c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 07:57:44.362762 containerd[1555]: time="2025-07-11T07:57:44.362704862Z" level=info msg="Container ae0ec7c6bd702a9ead50271e9942b538624893719070d2a7b8d91531efb0a17d: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:57:44.376421 containerd[1555]: time="2025-07-11T07:57:44.376116728Z" level=info msg="CreateContainer within sandbox \"c0a64d1048bfde167bf3a28578e9a121e8fb1d23858280d0026ba139f626bb38\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"ae0ec7c6bd702a9ead50271e9942b538624893719070d2a7b8d91531efb0a17d\"" Jul 11 07:57:44.378050 containerd[1555]: time="2025-07-11T07:57:44.378015421Z" level=info msg="StartContainer for \"ae0ec7c6bd702a9ead50271e9942b538624893719070d2a7b8d91531efb0a17d\"" Jul 11 07:57:44.380649 containerd[1555]: time="2025-07-11T07:57:44.380606064Z" level=info msg="connecting to shim ae0ec7c6bd702a9ead50271e9942b538624893719070d2a7b8d91531efb0a17d" address="unix:///run/containerd/s/00049385fe4ae65ef6c0e137182ab98a53a5a44892907d0b54204d1ece308533" protocol=ttrpc version=3 Jul 11 07:57:44.394125 containerd[1555]: time="2025-07-11T07:57:44.393829359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f595d99cc-dlqmb,Uid:f5b380d0-0a39-4f70-a5a6-a5522aba6ee1,Namespace:calico-system,Attempt:0,}" Jul 11 07:57:44.394125 containerd[1555]: time="2025-07-11T07:57:44.394001819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2gpzk,Uid:71041969-ec0e-4843-8da4-8e08c776b239,Namespace:kube-system,Attempt:0,}" Jul 11 07:57:44.409386 systemd[1]: Started cri-containerd-ae0ec7c6bd702a9ead50271e9942b538624893719070d2a7b8d91531efb0a17d.scope - libcontainer container ae0ec7c6bd702a9ead50271e9942b538624893719070d2a7b8d91531efb0a17d. 
Jul 11 07:57:44.412269 containerd[1555]: time="2025-07-11T07:57:44.412210365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78b896cb4b-hc965,Uid:d5be36e8-b388-412f-aa6a-b5cc97f2edb3,Namespace:calico-system,Attempt:0,}" Jul 11 07:57:44.539506 containerd[1555]: time="2025-07-11T07:57:44.539401458Z" level=info msg="StartContainer for \"ae0ec7c6bd702a9ead50271e9942b538624893719070d2a7b8d91531efb0a17d\" returns successfully" Jul 11 07:57:44.764183 systemd-networkd[1448]: cali45b3b8e7784: Link UP Jul 11 07:57:44.765911 systemd-networkd[1448]: cali45b3b8e7784: Gained carrier Jul 11 07:57:44.798608 containerd[1555]: 2025-07-11 07:57:44.474 [INFO][4140] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 07:57:44.798608 containerd[1555]: 2025-07-11 07:57:44.517 [INFO][4140] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--kube--controllers--6f595d99cc--dlqmb-eth0 calico-kube-controllers-6f595d99cc- calico-system f5b380d0-0a39-4f70-a5a6-a5522aba6ee1 825 0 2025-07-11 07:57:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6f595d99cc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4392-0-0-n-91c7dbf1fc.novalocal calico-kube-controllers-6f595d99cc-dlqmb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali45b3b8e7784 [] [] }} ContainerID="f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a" Namespace="calico-system" Pod="calico-kube-controllers-6f595d99cc-dlqmb" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--kube--controllers--6f595d99cc--dlqmb-" Jul 11 07:57:44.798608 containerd[1555]: 2025-07-11 07:57:44.522 [INFO][4140] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a" Namespace="calico-system" Pod="calico-kube-controllers-6f595d99cc-dlqmb" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--kube--controllers--6f595d99cc--dlqmb-eth0" Jul 11 07:57:44.798608 containerd[1555]: 2025-07-11 07:57:44.611 [INFO][4191] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a" HandleID="k8s-pod-network.f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a" Workload="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--kube--controllers--6f595d99cc--dlqmb-eth0" Jul 11 07:57:44.798608 containerd[1555]: 2025-07-11 07:57:44.613 [INFO][4191] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a" HandleID="k8s-pod-network.f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a" Workload="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--kube--controllers--6f595d99cc--dlqmb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5540), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4392-0-0-n-91c7dbf1fc.novalocal", "pod":"calico-kube-controllers-6f595d99cc-dlqmb", "timestamp":"2025-07-11 07:57:44.611728054 +0000 UTC"}, Hostname:"ci-4392-0-0-n-91c7dbf1fc.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 07:57:44.798608 containerd[1555]: 2025-07-11 07:57:44.613 [INFO][4191] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 07:57:44.798608 containerd[1555]: 2025-07-11 07:57:44.613 [INFO][4191] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 07:57:44.798608 containerd[1555]: 2025-07-11 07:57:44.615 [INFO][4191] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4392-0-0-n-91c7dbf1fc.novalocal' Jul 11 07:57:44.798608 containerd[1555]: 2025-07-11 07:57:44.640 [INFO][4191] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:44.798608 containerd[1555]: 2025-07-11 07:57:44.666 [INFO][4191] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:44.798608 containerd[1555]: 2025-07-11 07:57:44.691 [INFO][4191] ipam/ipam.go 511: Trying affinity for 192.168.85.192/26 host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:44.798608 containerd[1555]: 2025-07-11 07:57:44.705 [INFO][4191] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.192/26 host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:44.798608 containerd[1555]: 2025-07-11 07:57:44.718 [INFO][4191] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.192/26 host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:44.798608 containerd[1555]: 2025-07-11 07:57:44.719 [INFO][4191] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.85.192/26 handle="k8s-pod-network.f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:44.798608 containerd[1555]: 2025-07-11 07:57:44.723 [INFO][4191] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a Jul 11 07:57:44.798608 containerd[1555]: 2025-07-11 07:57:44.729 [INFO][4191] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.85.192/26 handle="k8s-pod-network.f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:44.798608 
containerd[1555]: 2025-07-11 07:57:44.747 [INFO][4191] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.85.195/26] block=192.168.85.192/26 handle="k8s-pod-network.f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:44.798608 containerd[1555]: 2025-07-11 07:57:44.747 [INFO][4191] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.195/26] handle="k8s-pod-network.f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:44.798608 containerd[1555]: 2025-07-11 07:57:44.747 [INFO][4191] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 07:57:44.798608 containerd[1555]: 2025-07-11 07:57:44.747 [INFO][4191] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.195/26] IPv6=[] ContainerID="f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a" HandleID="k8s-pod-network.f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a" Workload="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--kube--controllers--6f595d99cc--dlqmb-eth0" Jul 11 07:57:44.800792 containerd[1555]: 2025-07-11 07:57:44.752 [INFO][4140] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a" Namespace="calico-system" Pod="calico-kube-controllers-6f595d99cc-dlqmb" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--kube--controllers--6f595d99cc--dlqmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--kube--controllers--6f595d99cc--dlqmb-eth0", GenerateName:"calico-kube-controllers-6f595d99cc-", Namespace:"calico-system", SelfLink:"", UID:"f5b380d0-0a39-4f70-a5a6-a5522aba6ee1", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 57, 10, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f595d99cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-91c7dbf1fc.novalocal", ContainerID:"", Pod:"calico-kube-controllers-6f595d99cc-dlqmb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.85.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali45b3b8e7784", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:57:44.800792 containerd[1555]: 2025-07-11 07:57:44.753 [INFO][4140] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.195/32] ContainerID="f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a" Namespace="calico-system" Pod="calico-kube-controllers-6f595d99cc-dlqmb" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--kube--controllers--6f595d99cc--dlqmb-eth0" Jul 11 07:57:44.800792 containerd[1555]: 2025-07-11 07:57:44.753 [INFO][4140] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali45b3b8e7784 ContainerID="f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a" Namespace="calico-system" Pod="calico-kube-controllers-6f595d99cc-dlqmb" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--kube--controllers--6f595d99cc--dlqmb-eth0" Jul 11 07:57:44.800792 containerd[1555]: 2025-07-11 07:57:44.764 
[INFO][4140] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a" Namespace="calico-system" Pod="calico-kube-controllers-6f595d99cc-dlqmb" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--kube--controllers--6f595d99cc--dlqmb-eth0" Jul 11 07:57:44.800792 containerd[1555]: 2025-07-11 07:57:44.766 [INFO][4140] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a" Namespace="calico-system" Pod="calico-kube-controllers-6f595d99cc-dlqmb" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--kube--controllers--6f595d99cc--dlqmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--kube--controllers--6f595d99cc--dlqmb-eth0", GenerateName:"calico-kube-controllers-6f595d99cc-", Namespace:"calico-system", SelfLink:"", UID:"f5b380d0-0a39-4f70-a5a6-a5522aba6ee1", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 57, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f595d99cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-91c7dbf1fc.novalocal", ContainerID:"f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a", Pod:"calico-kube-controllers-6f595d99cc-dlqmb", 
Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.85.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali45b3b8e7784", MAC:"c6:06:64:e5:0f:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:57:44.800792 containerd[1555]: 2025-07-11 07:57:44.794 [INFO][4140] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a" Namespace="calico-system" Pod="calico-kube-controllers-6f595d99cc-dlqmb" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--kube--controllers--6f595d99cc--dlqmb-eth0" Jul 11 07:57:44.861037 containerd[1555]: time="2025-07-11T07:57:44.860961189Z" level=info msg="connecting to shim f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a" address="unix:///run/containerd/s/709b6f14eed6b30ad49be06861a9a75b1be4ad58a799e213736cce6c61000529" namespace=k8s.io protocol=ttrpc version=3 Jul 11 07:57:44.960288 systemd[1]: Started cri-containerd-f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a.scope - libcontainer container f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a. 
Jul 11 07:57:44.978303 systemd-networkd[1448]: cali0fd8e29d8d6: Link UP Jul 11 07:57:44.979650 systemd-networkd[1448]: cali0fd8e29d8d6: Gained carrier Jul 11 07:57:45.041995 kubelet[2815]: I0711 07:57:45.041290 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kcdps" podStartSLOduration=51.041213503 podStartE2EDuration="51.041213503s" podCreationTimestamp="2025-07-11 07:56:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 07:57:45.039519898 +0000 UTC m=+55.879930763" watchObservedRunningTime="2025-07-11 07:57:45.041213503 +0000 UTC m=+55.881624368" Jul 11 07:57:45.070163 containerd[1555]: 2025-07-11 07:57:44.475 [INFO][4128] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 07:57:45.070163 containerd[1555]: 2025-07-11 07:57:44.525 [INFO][4128] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--2gpzk-eth0 coredns-7c65d6cfc9- kube-system 71041969-ec0e-4843-8da4-8e08c776b239 828 0 2025-07-11 07:56:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4392-0-0-n-91c7dbf1fc.novalocal coredns-7c65d6cfc9-2gpzk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0fd8e29d8d6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2gpzk" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--2gpzk-" Jul 11 07:57:45.070163 containerd[1555]: 2025-07-11 07:57:44.525 [INFO][4128] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2gpzk" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--2gpzk-eth0" Jul 11 07:57:45.070163 containerd[1555]: 2025-07-11 07:57:44.664 [INFO][4197] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5" HandleID="k8s-pod-network.83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5" Workload="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--2gpzk-eth0" Jul 11 07:57:45.070163 containerd[1555]: 2025-07-11 07:57:44.664 [INFO][4197] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5" HandleID="k8s-pod-network.83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5" Workload="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--2gpzk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000343710), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4392-0-0-n-91c7dbf1fc.novalocal", "pod":"coredns-7c65d6cfc9-2gpzk", "timestamp":"2025-07-11 07:57:44.66360592 +0000 UTC"}, Hostname:"ci-4392-0-0-n-91c7dbf1fc.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 07:57:45.070163 containerd[1555]: 2025-07-11 07:57:44.664 [INFO][4197] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 07:57:45.070163 containerd[1555]: 2025-07-11 07:57:44.747 [INFO][4197] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 07:57:45.070163 containerd[1555]: 2025-07-11 07:57:44.747 [INFO][4197] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4392-0-0-n-91c7dbf1fc.novalocal' Jul 11 07:57:45.070163 containerd[1555]: 2025-07-11 07:57:44.779 [INFO][4197] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:45.070163 containerd[1555]: 2025-07-11 07:57:44.803 [INFO][4197] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:45.070163 containerd[1555]: 2025-07-11 07:57:44.827 [INFO][4197] ipam/ipam.go 511: Trying affinity for 192.168.85.192/26 host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:45.070163 containerd[1555]: 2025-07-11 07:57:44.833 [INFO][4197] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.192/26 host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:45.070163 containerd[1555]: 2025-07-11 07:57:44.842 [INFO][4197] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.192/26 host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:45.070163 containerd[1555]: 2025-07-11 07:57:44.842 [INFO][4197] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.85.192/26 handle="k8s-pod-network.83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:45.070163 containerd[1555]: 2025-07-11 07:57:44.850 [INFO][4197] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5 Jul 11 07:57:45.070163 containerd[1555]: 2025-07-11 07:57:44.862 [INFO][4197] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.85.192/26 handle="k8s-pod-network.83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:45.070163 
containerd[1555]: 2025-07-11 07:57:44.943 [INFO][4197] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.85.196/26] block=192.168.85.192/26 handle="k8s-pod-network.83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:45.070163 containerd[1555]: 2025-07-11 07:57:44.944 [INFO][4197] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.196/26] handle="k8s-pod-network.83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:45.070163 containerd[1555]: 2025-07-11 07:57:44.945 [INFO][4197] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 07:57:45.070163 containerd[1555]: 2025-07-11 07:57:44.947 [INFO][4197] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.196/26] IPv6=[] ContainerID="83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5" HandleID="k8s-pod-network.83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5" Workload="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--2gpzk-eth0" Jul 11 07:57:45.071999 containerd[1555]: 2025-07-11 07:57:44.966 [INFO][4128] cni-plugin/k8s.go 418: Populated endpoint ContainerID="83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2gpzk" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--2gpzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--2gpzk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"71041969-ec0e-4843-8da4-8e08c776b239", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-91c7dbf1fc.novalocal", ContainerID:"", Pod:"coredns-7c65d6cfc9-2gpzk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0fd8e29d8d6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:57:45.071999 containerd[1555]: 2025-07-11 07:57:44.966 [INFO][4128] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.196/32] ContainerID="83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2gpzk" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--2gpzk-eth0" Jul 11 07:57:45.071999 containerd[1555]: 2025-07-11 07:57:44.966 [INFO][4128] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0fd8e29d8d6 ContainerID="83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2gpzk" 
WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--2gpzk-eth0" Jul 11 07:57:45.071999 containerd[1555]: 2025-07-11 07:57:44.981 [INFO][4128] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2gpzk" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--2gpzk-eth0" Jul 11 07:57:45.071999 containerd[1555]: 2025-07-11 07:57:44.982 [INFO][4128] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2gpzk" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--2gpzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--2gpzk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"71041969-ec0e-4843-8da4-8e08c776b239", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-91c7dbf1fc.novalocal", ContainerID:"83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5", Pod:"coredns-7c65d6cfc9-2gpzk", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.85.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0fd8e29d8d6", MAC:"7e:cb:e6:b9:80:8f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:57:45.071999 containerd[1555]: 2025-07-11 07:57:45.046 [INFO][4128] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2gpzk" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-coredns--7c65d6cfc9--2gpzk-eth0" Jul 11 07:57:45.182694 containerd[1555]: time="2025-07-11T07:57:45.182353144Z" level=info msg="connecting to shim 83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5" address="unix:///run/containerd/s/036829d0ee921ce07beefc53cec2843e69045a81e07f57777e2754e102265647" namespace=k8s.io protocol=ttrpc version=3 Jul 11 07:57:45.204716 systemd-networkd[1448]: cali9dbe0d68c52: Link UP Jul 11 07:57:45.207135 systemd-networkd[1448]: cali9dbe0d68c52: Gained carrier Jul 11 07:57:45.243217 containerd[1555]: 2025-07-11 07:57:44.562 [INFO][4158] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 07:57:45.243217 containerd[1555]: 2025-07-11 07:57:44.614 [INFO][4158] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-whisker--78b896cb4b--hc965-eth0 
whisker-78b896cb4b- calico-system d5be36e8-b388-412f-aa6a-b5cc97f2edb3 916 0 2025-07-11 07:57:44 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:78b896cb4b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4392-0-0-n-91c7dbf1fc.novalocal whisker-78b896cb4b-hc965 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9dbe0d68c52 [] [] }} ContainerID="60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8" Namespace="calico-system" Pod="whisker-78b896cb4b-hc965" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-whisker--78b896cb4b--hc965-" Jul 11 07:57:45.243217 containerd[1555]: 2025-07-11 07:57:44.614 [INFO][4158] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8" Namespace="calico-system" Pod="whisker-78b896cb4b-hc965" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-whisker--78b896cb4b--hc965-eth0" Jul 11 07:57:45.243217 containerd[1555]: 2025-07-11 07:57:44.725 [INFO][4221] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8" HandleID="k8s-pod-network.60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8" Workload="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-whisker--78b896cb4b--hc965-eth0" Jul 11 07:57:45.243217 containerd[1555]: 2025-07-11 07:57:44.726 [INFO][4221] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8" HandleID="k8s-pod-network.60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8" Workload="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-whisker--78b896cb4b--hc965-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6ff0), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4392-0-0-n-91c7dbf1fc.novalocal", "pod":"whisker-78b896cb4b-hc965", "timestamp":"2025-07-11 07:57:44.725767764 +0000 UTC"}, Hostname:"ci-4392-0-0-n-91c7dbf1fc.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 07:57:45.243217 containerd[1555]: 2025-07-11 07:57:44.726 [INFO][4221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 07:57:45.243217 containerd[1555]: 2025-07-11 07:57:44.945 [INFO][4221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 07:57:45.243217 containerd[1555]: 2025-07-11 07:57:44.945 [INFO][4221] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4392-0-0-n-91c7dbf1fc.novalocal' Jul 11 07:57:45.243217 containerd[1555]: 2025-07-11 07:57:45.046 [INFO][4221] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:45.243217 containerd[1555]: 2025-07-11 07:57:45.081 [INFO][4221] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:45.243217 containerd[1555]: 2025-07-11 07:57:45.110 [INFO][4221] ipam/ipam.go 511: Trying affinity for 192.168.85.192/26 host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:45.243217 containerd[1555]: 2025-07-11 07:57:45.118 [INFO][4221] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.192/26 host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:45.243217 containerd[1555]: 2025-07-11 07:57:45.141 [INFO][4221] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.192/26 host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:45.243217 containerd[1555]: 2025-07-11 07:57:45.141 [INFO][4221] ipam/ipam.go 1220: Attempting to assign 1 addresses from block 
block=192.168.85.192/26 handle="k8s-pod-network.60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:45.243217 containerd[1555]: 2025-07-11 07:57:45.148 [INFO][4221] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8 Jul 11 07:57:45.243217 containerd[1555]: 2025-07-11 07:57:45.169 [INFO][4221] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.85.192/26 handle="k8s-pod-network.60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:45.243217 containerd[1555]: 2025-07-11 07:57:45.186 [INFO][4221] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.85.197/26] block=192.168.85.192/26 handle="k8s-pod-network.60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:45.243217 containerd[1555]: 2025-07-11 07:57:45.188 [INFO][4221] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.197/26] handle="k8s-pod-network.60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:45.243217 containerd[1555]: 2025-07-11 07:57:45.189 [INFO][4221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 07:57:45.243217 containerd[1555]: 2025-07-11 07:57:45.189 [INFO][4221] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.197/26] IPv6=[] ContainerID="60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8" HandleID="k8s-pod-network.60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8" Workload="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-whisker--78b896cb4b--hc965-eth0" Jul 11 07:57:45.244058 containerd[1555]: 2025-07-11 07:57:45.195 [INFO][4158] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8" Namespace="calico-system" Pod="whisker-78b896cb4b-hc965" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-whisker--78b896cb4b--hc965-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-whisker--78b896cb4b--hc965-eth0", GenerateName:"whisker-78b896cb4b-", Namespace:"calico-system", SelfLink:"", UID:"d5be36e8-b388-412f-aa6a-b5cc97f2edb3", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 57, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78b896cb4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-91c7dbf1fc.novalocal", ContainerID:"", Pod:"whisker-78b896cb4b-hc965", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.85.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9dbe0d68c52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:57:45.244058 containerd[1555]: 2025-07-11 07:57:45.195 [INFO][4158] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.197/32] ContainerID="60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8" Namespace="calico-system" Pod="whisker-78b896cb4b-hc965" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-whisker--78b896cb4b--hc965-eth0" Jul 11 07:57:45.244058 containerd[1555]: 2025-07-11 07:57:45.195 [INFO][4158] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9dbe0d68c52 ContainerID="60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8" Namespace="calico-system" Pod="whisker-78b896cb4b-hc965" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-whisker--78b896cb4b--hc965-eth0" Jul 11 07:57:45.244058 containerd[1555]: 2025-07-11 07:57:45.206 [INFO][4158] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8" Namespace="calico-system" Pod="whisker-78b896cb4b-hc965" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-whisker--78b896cb4b--hc965-eth0" Jul 11 07:57:45.244058 containerd[1555]: 2025-07-11 07:57:45.207 [INFO][4158] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8" Namespace="calico-system" Pod="whisker-78b896cb4b-hc965" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-whisker--78b896cb4b--hc965-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-whisker--78b896cb4b--hc965-eth0", 
GenerateName:"whisker-78b896cb4b-", Namespace:"calico-system", SelfLink:"", UID:"d5be36e8-b388-412f-aa6a-b5cc97f2edb3", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 57, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78b896cb4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-91c7dbf1fc.novalocal", ContainerID:"60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8", Pod:"whisker-78b896cb4b-hc965", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.85.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9dbe0d68c52", MAC:"3a:37:03:c8:73:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:57:45.244058 containerd[1555]: 2025-07-11 07:57:45.232 [INFO][4158] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8" Namespace="calico-system" Pod="whisker-78b896cb4b-hc965" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-whisker--78b896cb4b--hc965-eth0" Jul 11 07:57:45.264255 systemd-networkd[1448]: cali4b4b13026d1: Gained IPv6LL Jul 11 07:57:45.271710 systemd[1]: Started cri-containerd-83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5.scope - libcontainer container 83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5. 
Jul 11 07:57:45.328237 containerd[1555]: time="2025-07-11T07:57:45.328016608Z" level=info msg="connecting to shim 60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8" address="unix:///run/containerd/s/3575e69fb55adcf970fa02847d178d300eedaecc36c084576739604e690cf994" namespace=k8s.io protocol=ttrpc version=3 Jul 11 07:57:45.395048 containerd[1555]: time="2025-07-11T07:57:45.394396160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f8586745b-56tns,Uid:5b2c28a6-7bbc-442a-bf7b-b9d5fc3dad7b,Namespace:calico-apiserver,Attempt:0,}" Jul 11 07:57:45.397349 containerd[1555]: time="2025-07-11T07:57:45.397305933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-bfvnk,Uid:e1ab0fc3-e967-489e-9eee-4deff0ad5164,Namespace:calico-system,Attempt:0,}" Jul 11 07:57:45.399317 systemd[1]: Started cri-containerd-60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8.scope - libcontainer container 60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8. 
Jul 11 07:57:45.408383 kubelet[2815]: I0711 07:57:45.408330 2815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="970c5f99-d983-4086-a6e8-ffa128e9fa8a" path="/var/lib/kubelet/pods/970c5f99-d983-4086-a6e8-ffa128e9fa8a/volumes" Jul 11 07:57:45.684888 containerd[1555]: time="2025-07-11T07:57:45.684685128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f595d99cc-dlqmb,Uid:f5b380d0-0a39-4f70-a5a6-a5522aba6ee1,Namespace:calico-system,Attempt:0,} returns sandbox id \"f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a\"" Jul 11 07:57:45.812952 containerd[1555]: time="2025-07-11T07:57:45.812844130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2gpzk,Uid:71041969-ec0e-4843-8da4-8e08c776b239,Namespace:kube-system,Attempt:0,} returns sandbox id \"83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5\"" Jul 11 07:57:45.817507 containerd[1555]: time="2025-07-11T07:57:45.817430456Z" level=info msg="CreateContainer within sandbox \"83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 07:57:45.838284 systemd-networkd[1448]: cali21fdedc59f2: Gained IPv6LL Jul 11 07:57:45.951541 containerd[1555]: time="2025-07-11T07:57:45.951432401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78b896cb4b-hc965,Uid:d5be36e8-b388-412f-aa6a-b5cc97f2edb3,Namespace:calico-system,Attempt:0,} returns sandbox id \"60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8\"" Jul 11 07:57:46.030120 containerd[1555]: time="2025-07-11T07:57:46.029700844Z" level=info msg="Container 9719b87928b1bdac5a0c912fe46d23e8c73cff143ccafe80f268c11884472d83: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:57:46.050444 containerd[1555]: time="2025-07-11T07:57:46.050335055Z" level=info msg="CreateContainer within sandbox \"83397aaa3b8068d4bdb941a7ab4f17ee0f5c0c5f9f7e65de4e8efe894a01e1c5\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9719b87928b1bdac5a0c912fe46d23e8c73cff143ccafe80f268c11884472d83\"" Jul 11 07:57:46.063192 containerd[1555]: time="2025-07-11T07:57:46.062962392Z" level=info msg="StartContainer for \"9719b87928b1bdac5a0c912fe46d23e8c73cff143ccafe80f268c11884472d83\"" Jul 11 07:57:46.066062 containerd[1555]: time="2025-07-11T07:57:46.065930013Z" level=info msg="connecting to shim 9719b87928b1bdac5a0c912fe46d23e8c73cff143ccafe80f268c11884472d83" address="unix:///run/containerd/s/036829d0ee921ce07beefc53cec2843e69045a81e07f57777e2754e102265647" protocol=ttrpc version=3 Jul 11 07:57:46.130387 systemd[1]: Started cri-containerd-9719b87928b1bdac5a0c912fe46d23e8c73cff143ccafe80f268c11884472d83.scope - libcontainer container 9719b87928b1bdac5a0c912fe46d23e8c73cff143ccafe80f268c11884472d83. Jul 11 07:57:46.269356 containerd[1555]: time="2025-07-11T07:57:46.269172045Z" level=info msg="StartContainer for \"9719b87928b1bdac5a0c912fe46d23e8c73cff143ccafe80f268c11884472d83\" returns successfully" Jul 11 07:57:46.352212 systemd-networkd[1448]: cali9dbe0d68c52: Gained IPv6LL Jul 11 07:57:46.372956 systemd-networkd[1448]: cali6e9678702d6: Link UP Jul 11 07:57:46.375523 systemd-networkd[1448]: cali6e9678702d6: Gained carrier Jul 11 07:57:46.414339 systemd-networkd[1448]: cali0fd8e29d8d6: Gained IPv6LL Jul 11 07:57:46.426553 containerd[1555]: 2025-07-11 07:57:46.045 [INFO][4485] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 07:57:46.426553 containerd[1555]: 2025-07-11 07:57:46.118 [INFO][4485] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--56tns-eth0 calico-apiserver-5f8586745b- calico-apiserver 5b2c28a6-7bbc-442a-bf7b-b9d5fc3dad7b 829 0 2025-07-11 07:57:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f8586745b 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4392-0-0-n-91c7dbf1fc.novalocal calico-apiserver-5f8586745b-56tns eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6e9678702d6 [] [] }} ContainerID="1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78" Namespace="calico-apiserver" Pod="calico-apiserver-5f8586745b-56tns" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--56tns-" Jul 11 07:57:46.426553 containerd[1555]: 2025-07-11 07:57:46.119 [INFO][4485] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78" Namespace="calico-apiserver" Pod="calico-apiserver-5f8586745b-56tns" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--56tns-eth0" Jul 11 07:57:46.426553 containerd[1555]: 2025-07-11 07:57:46.245 [INFO][4519] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78" HandleID="k8s-pod-network.1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78" Workload="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--56tns-eth0" Jul 11 07:57:46.426553 containerd[1555]: 2025-07-11 07:57:46.247 [INFO][4519] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78" HandleID="k8s-pod-network.1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78" Workload="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--56tns-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003924e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4392-0-0-n-91c7dbf1fc.novalocal", 
"pod":"calico-apiserver-5f8586745b-56tns", "timestamp":"2025-07-11 07:57:46.245014734 +0000 UTC"}, Hostname:"ci-4392-0-0-n-91c7dbf1fc.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 07:57:46.426553 containerd[1555]: 2025-07-11 07:57:46.251 [INFO][4519] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 07:57:46.426553 containerd[1555]: 2025-07-11 07:57:46.252 [INFO][4519] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 07:57:46.426553 containerd[1555]: 2025-07-11 07:57:46.252 [INFO][4519] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4392-0-0-n-91c7dbf1fc.novalocal' Jul 11 07:57:46.426553 containerd[1555]: 2025-07-11 07:57:46.278 [INFO][4519] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:46.426553 containerd[1555]: 2025-07-11 07:57:46.287 [INFO][4519] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:46.426553 containerd[1555]: 2025-07-11 07:57:46.296 [INFO][4519] ipam/ipam.go 511: Trying affinity for 192.168.85.192/26 host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:46.426553 containerd[1555]: 2025-07-11 07:57:46.300 [INFO][4519] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.192/26 host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:46.426553 containerd[1555]: 2025-07-11 07:57:46.314 [INFO][4519] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.192/26 host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:46.426553 containerd[1555]: 2025-07-11 07:57:46.314 [INFO][4519] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.85.192/26 
handle="k8s-pod-network.1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:46.426553 containerd[1555]: 2025-07-11 07:57:46.319 [INFO][4519] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78 Jul 11 07:57:46.426553 containerd[1555]: 2025-07-11 07:57:46.326 [INFO][4519] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.85.192/26 handle="k8s-pod-network.1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:46.426553 containerd[1555]: 2025-07-11 07:57:46.343 [INFO][4519] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.85.198/26] block=192.168.85.192/26 handle="k8s-pod-network.1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:46.426553 containerd[1555]: 2025-07-11 07:57:46.343 [INFO][4519] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.198/26] handle="k8s-pod-network.1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:46.426553 containerd[1555]: 2025-07-11 07:57:46.343 [INFO][4519] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 07:57:46.426553 containerd[1555]: 2025-07-11 07:57:46.344 [INFO][4519] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.198/26] IPv6=[] ContainerID="1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78" HandleID="k8s-pod-network.1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78" Workload="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--56tns-eth0" Jul 11 07:57:46.431211 containerd[1555]: 2025-07-11 07:57:46.348 [INFO][4485] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78" Namespace="calico-apiserver" Pod="calico-apiserver-5f8586745b-56tns" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--56tns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--56tns-eth0", GenerateName:"calico-apiserver-5f8586745b-", Namespace:"calico-apiserver", SelfLink:"", UID:"5b2c28a6-7bbc-442a-bf7b-b9d5fc3dad7b", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f8586745b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-91c7dbf1fc.novalocal", ContainerID:"", Pod:"calico-apiserver-5f8586745b-56tns", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6e9678702d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:57:46.431211 containerd[1555]: 2025-07-11 07:57:46.353 [INFO][4485] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.198/32] ContainerID="1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78" Namespace="calico-apiserver" Pod="calico-apiserver-5f8586745b-56tns" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--56tns-eth0" Jul 11 07:57:46.431211 containerd[1555]: 2025-07-11 07:57:46.354 [INFO][4485] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e9678702d6 ContainerID="1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78" Namespace="calico-apiserver" Pod="calico-apiserver-5f8586745b-56tns" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--56tns-eth0" Jul 11 07:57:46.431211 containerd[1555]: 2025-07-11 07:57:46.377 [INFO][4485] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78" Namespace="calico-apiserver" Pod="calico-apiserver-5f8586745b-56tns" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--56tns-eth0" Jul 11 07:57:46.431211 containerd[1555]: 2025-07-11 07:57:46.380 [INFO][4485] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78" Namespace="calico-apiserver" Pod="calico-apiserver-5f8586745b-56tns" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--56tns-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--56tns-eth0", GenerateName:"calico-apiserver-5f8586745b-", Namespace:"calico-apiserver", SelfLink:"", UID:"5b2c28a6-7bbc-442a-bf7b-b9d5fc3dad7b", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f8586745b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-91c7dbf1fc.novalocal", ContainerID:"1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78", Pod:"calico-apiserver-5f8586745b-56tns", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6e9678702d6", MAC:"12:80:6b:36:5f:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:57:46.431211 containerd[1555]: 2025-07-11 07:57:46.422 [INFO][4485] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78" Namespace="calico-apiserver" Pod="calico-apiserver-5f8586745b-56tns" 
WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--56tns-eth0" Jul 11 07:57:46.525514 containerd[1555]: time="2025-07-11T07:57:46.525288001Z" level=info msg="connecting to shim 1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78" address="unix:///run/containerd/s/75941547e77e01f5818c3b875c22b097ecc6899c9b89c9ee7a38e76b3f5243d3" namespace=k8s.io protocol=ttrpc version=3 Jul 11 07:57:46.592730 systemd[1]: Started cri-containerd-1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78.scope - libcontainer container 1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78. Jul 11 07:57:46.607266 systemd-networkd[1448]: cali45b3b8e7784: Gained IPv6LL Jul 11 07:57:46.772891 containerd[1555]: time="2025-07-11T07:57:46.772764894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f8586745b-56tns,Uid:5b2c28a6-7bbc-442a-bf7b-b9d5fc3dad7b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78\"" Jul 11 07:57:46.790467 systemd-networkd[1448]: caliaf19ae5fe48: Link UP Jul 11 07:57:46.794040 systemd-networkd[1448]: caliaf19ae5fe48: Gained carrier Jul 11 07:57:46.825629 containerd[1555]: 2025-07-11 07:57:46.573 [INFO][4556] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-goldmane--58fd7646b9--bfvnk-eth0 goldmane-58fd7646b9- calico-system e1ab0fc3-e967-489e-9eee-4deff0ad5164 830 0 2025-07-11 07:57:09 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4392-0-0-n-91c7dbf1fc.novalocal goldmane-58fd7646b9-bfvnk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliaf19ae5fe48 [] [] }} 
ContainerID="4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51" Namespace="calico-system" Pod="goldmane-58fd7646b9-bfvnk" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-goldmane--58fd7646b9--bfvnk-" Jul 11 07:57:46.825629 containerd[1555]: 2025-07-11 07:57:46.574 [INFO][4556] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51" Namespace="calico-system" Pod="goldmane-58fd7646b9-bfvnk" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-goldmane--58fd7646b9--bfvnk-eth0" Jul 11 07:57:46.825629 containerd[1555]: 2025-07-11 07:57:46.660 [INFO][4599] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51" HandleID="k8s-pod-network.4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51" Workload="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-goldmane--58fd7646b9--bfvnk-eth0" Jul 11 07:57:46.825629 containerd[1555]: 2025-07-11 07:57:46.660 [INFO][4599] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51" HandleID="k8s-pod-network.4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51" Workload="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-goldmane--58fd7646b9--bfvnk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f520), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4392-0-0-n-91c7dbf1fc.novalocal", "pod":"goldmane-58fd7646b9-bfvnk", "timestamp":"2025-07-11 07:57:46.660737778 +0000 UTC"}, Hostname:"ci-4392-0-0-n-91c7dbf1fc.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 07:57:46.825629 containerd[1555]: 2025-07-11 07:57:46.661 [INFO][4599] ipam/ipam_plugin.go 
353: About to acquire host-wide IPAM lock. Jul 11 07:57:46.825629 containerd[1555]: 2025-07-11 07:57:46.661 [INFO][4599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 07:57:46.825629 containerd[1555]: 2025-07-11 07:57:46.665 [INFO][4599] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4392-0-0-n-91c7dbf1fc.novalocal' Jul 11 07:57:46.825629 containerd[1555]: 2025-07-11 07:57:46.681 [INFO][4599] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:46.825629 containerd[1555]: 2025-07-11 07:57:46.693 [INFO][4599] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:46.825629 containerd[1555]: 2025-07-11 07:57:46.716 [INFO][4599] ipam/ipam.go 511: Trying affinity for 192.168.85.192/26 host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:46.825629 containerd[1555]: 2025-07-11 07:57:46.724 [INFO][4599] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.192/26 host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:46.825629 containerd[1555]: 2025-07-11 07:57:46.731 [INFO][4599] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.192/26 host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:46.825629 containerd[1555]: 2025-07-11 07:57:46.731 [INFO][4599] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.85.192/26 handle="k8s-pod-network.4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:46.825629 containerd[1555]: 2025-07-11 07:57:46.734 [INFO][4599] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51 Jul 11 07:57:46.825629 containerd[1555]: 2025-07-11 07:57:46.751 [INFO][4599] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.85.192/26 handle="k8s-pod-network.4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:46.825629 containerd[1555]: 2025-07-11 07:57:46.765 [INFO][4599] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.85.199/26] block=192.168.85.192/26 handle="k8s-pod-network.4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:46.825629 containerd[1555]: 2025-07-11 07:57:46.767 [INFO][4599] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.199/26] handle="k8s-pod-network.4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:46.825629 containerd[1555]: 2025-07-11 07:57:46.767 [INFO][4599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 07:57:46.825629 containerd[1555]: 2025-07-11 07:57:46.767 [INFO][4599] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.199/26] IPv6=[] ContainerID="4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51" HandleID="k8s-pod-network.4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51" Workload="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-goldmane--58fd7646b9--bfvnk-eth0" Jul 11 07:57:46.827144 containerd[1555]: 2025-07-11 07:57:46.774 [INFO][4556] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51" Namespace="calico-system" Pod="goldmane-58fd7646b9-bfvnk" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-goldmane--58fd7646b9--bfvnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-goldmane--58fd7646b9--bfvnk-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"e1ab0fc3-e967-489e-9eee-4deff0ad5164", 
ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 57, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-91c7dbf1fc.novalocal", ContainerID:"", Pod:"goldmane-58fd7646b9-bfvnk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.85.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliaf19ae5fe48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:57:46.827144 containerd[1555]: 2025-07-11 07:57:46.774 [INFO][4556] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.199/32] ContainerID="4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51" Namespace="calico-system" Pod="goldmane-58fd7646b9-bfvnk" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-goldmane--58fd7646b9--bfvnk-eth0" Jul 11 07:57:46.827144 containerd[1555]: 2025-07-11 07:57:46.774 [INFO][4556] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaf19ae5fe48 ContainerID="4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51" Namespace="calico-system" Pod="goldmane-58fd7646b9-bfvnk" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-goldmane--58fd7646b9--bfvnk-eth0" Jul 11 07:57:46.827144 containerd[1555]: 2025-07-11 07:57:46.797 [INFO][4556] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51" Namespace="calico-system" Pod="goldmane-58fd7646b9-bfvnk" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-goldmane--58fd7646b9--bfvnk-eth0" Jul 11 07:57:46.827144 containerd[1555]: 2025-07-11 07:57:46.799 [INFO][4556] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51" Namespace="calico-system" Pod="goldmane-58fd7646b9-bfvnk" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-goldmane--58fd7646b9--bfvnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-goldmane--58fd7646b9--bfvnk-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"e1ab0fc3-e967-489e-9eee-4deff0ad5164", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 57, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-91c7dbf1fc.novalocal", ContainerID:"4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51", Pod:"goldmane-58fd7646b9-bfvnk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.85.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"caliaf19ae5fe48", MAC:"6a:9f:a5:55:85:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:57:46.827144 containerd[1555]: 2025-07-11 07:57:46.813 [INFO][4556] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51" Namespace="calico-system" Pod="goldmane-58fd7646b9-bfvnk" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-goldmane--58fd7646b9--bfvnk-eth0" Jul 11 07:57:46.892848 containerd[1555]: time="2025-07-11T07:57:46.892710380Z" level=info msg="connecting to shim 4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51" address="unix:///run/containerd/s/6b4f12b125b92d2d026e8bfade6968e2f2fae59786fdab7a5728621ead37edc3" namespace=k8s.io protocol=ttrpc version=3 Jul 11 07:57:46.938246 systemd-networkd[1448]: vxlan.calico: Link UP Jul 11 07:57:46.938473 systemd-networkd[1448]: vxlan.calico: Gained carrier Jul 11 07:57:46.990366 systemd[1]: Started cri-containerd-4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51.scope - libcontainer container 4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51. 
Jul 11 07:57:47.052508 kubelet[2815]: I0711 07:57:47.051495 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2gpzk" podStartSLOduration=53.051473524 podStartE2EDuration="53.051473524s" podCreationTimestamp="2025-07-11 07:56:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 07:57:47.017058318 +0000 UTC m=+57.857469193" watchObservedRunningTime="2025-07-11 07:57:47.051473524 +0000 UTC m=+57.891884359" Jul 11 07:57:47.284767 containerd[1555]: time="2025-07-11T07:57:47.284618272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-bfvnk,Uid:e1ab0fc3-e967-489e-9eee-4deff0ad5164,Namespace:calico-system,Attempt:0,} returns sandbox id \"4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51\"" Jul 11 07:57:47.630453 systemd-networkd[1448]: cali6e9678702d6: Gained IPv6LL Jul 11 07:57:47.765667 containerd[1555]: time="2025-07-11T07:57:47.765609434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:57:47.771437 containerd[1555]: time="2025-07-11T07:57:47.771321314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 11 07:57:47.776125 containerd[1555]: time="2025-07-11T07:57:47.775043817Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:57:47.779294 containerd[1555]: time="2025-07-11T07:57:47.779265428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:57:47.781282 containerd[1555]: time="2025-07-11T07:57:47.781250929Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 3.526644075s" Jul 11 07:57:47.781474 containerd[1555]: time="2025-07-11T07:57:47.781438166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 11 07:57:47.784300 containerd[1555]: time="2025-07-11T07:57:47.784275845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 11 07:57:47.787638 containerd[1555]: time="2025-07-11T07:57:47.787497647Z" level=info msg="CreateContainer within sandbox \"ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 11 07:57:47.807383 containerd[1555]: time="2025-07-11T07:57:47.807332978Z" level=info msg="Container b32bfb68a3dd84e01045c5bc0bc52440c6c402d7978c9d0dc122761ca8e89c47: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:57:47.823829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2062208621.mount: Deactivated successfully. 
Jul 11 07:57:47.838418 containerd[1555]: time="2025-07-11T07:57:47.838283307Z" level=info msg="CreateContainer within sandbox \"ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b32bfb68a3dd84e01045c5bc0bc52440c6c402d7978c9d0dc122761ca8e89c47\"" Jul 11 07:57:47.839309 containerd[1555]: time="2025-07-11T07:57:47.839283867Z" level=info msg="StartContainer for \"b32bfb68a3dd84e01045c5bc0bc52440c6c402d7978c9d0dc122761ca8e89c47\"" Jul 11 07:57:47.842236 containerd[1555]: time="2025-07-11T07:57:47.842158209Z" level=info msg="connecting to shim b32bfb68a3dd84e01045c5bc0bc52440c6c402d7978c9d0dc122761ca8e89c47" address="unix:///run/containerd/s/2395c68d1b80f39d3a6d501ce3c03a3337afa5aceb67a94b401f5c803939ca4f" protocol=ttrpc version=3 Jul 11 07:57:47.910503 systemd[1]: Started cri-containerd-b32bfb68a3dd84e01045c5bc0bc52440c6c402d7978c9d0dc122761ca8e89c47.scope - libcontainer container b32bfb68a3dd84e01045c5bc0bc52440c6c402d7978c9d0dc122761ca8e89c47. 
Jul 11 07:57:47.988379 containerd[1555]: time="2025-07-11T07:57:47.988241295Z" level=info msg="StartContainer for \"b32bfb68a3dd84e01045c5bc0bc52440c6c402d7978c9d0dc122761ca8e89c47\" returns successfully" Jul 11 07:57:48.142659 systemd-networkd[1448]: vxlan.calico: Gained IPv6LL Jul 11 07:57:48.590561 systemd-networkd[1448]: caliaf19ae5fe48: Gained IPv6LL Jul 11 07:57:51.445112 containerd[1555]: time="2025-07-11T07:57:51.444957804Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b64067bf9aa389ef32a0a42fccbd24435765f70308a9933d508c58040e2d23bc\" id:\"c0f39f33ccb4e7b35c0d18865add4bf65744f7faaee2a28264e37c6b58ecfec5\" pid:4807 exited_at:{seconds:1752220671 nanos:441822306}" Jul 11 07:57:52.810677 containerd[1555]: time="2025-07-11T07:57:52.810593723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:57:52.812881 containerd[1555]: time="2025-07-11T07:57:52.812836294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 11 07:57:52.814107 containerd[1555]: time="2025-07-11T07:57:52.814051764Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:57:52.817930 containerd[1555]: time="2025-07-11T07:57:52.817793368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:57:52.818774 containerd[1555]: time="2025-07-11T07:57:52.818416583Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo 
digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 5.033782136s" Jul 11 07:57:52.818774 containerd[1555]: time="2025-07-11T07:57:52.818465678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 11 07:57:52.822125 containerd[1555]: time="2025-07-11T07:57:52.821542246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 11 07:57:52.844248 containerd[1555]: time="2025-07-11T07:57:52.844145098Z" level=info msg="CreateContainer within sandbox \"f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 11 07:57:52.870565 containerd[1555]: time="2025-07-11T07:57:52.870512648Z" level=info msg="Container 9169a96d7deecc0f6cdd4eeb11d1b98db6bf588f46a4fa798f4bf2e0d98fcdda: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:57:52.879301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2858160458.mount: Deactivated successfully. 
Jul 11 07:57:52.905104 containerd[1555]: time="2025-07-11T07:57:52.904909531Z" level=info msg="CreateContainer within sandbox \"f54732f47d3a97c6e152a25e4ec0ae0c98b6ae1f477414cd1ddb95bdaf2a8e7a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9169a96d7deecc0f6cdd4eeb11d1b98db6bf588f46a4fa798f4bf2e0d98fcdda\"" Jul 11 07:57:52.907560 containerd[1555]: time="2025-07-11T07:57:52.906158987Z" level=info msg="StartContainer for \"9169a96d7deecc0f6cdd4eeb11d1b98db6bf588f46a4fa798f4bf2e0d98fcdda\"" Jul 11 07:57:52.909574 containerd[1555]: time="2025-07-11T07:57:52.909360037Z" level=info msg="connecting to shim 9169a96d7deecc0f6cdd4eeb11d1b98db6bf588f46a4fa798f4bf2e0d98fcdda" address="unix:///run/containerd/s/709b6f14eed6b30ad49be06861a9a75b1be4ad58a799e213736cce6c61000529" protocol=ttrpc version=3 Jul 11 07:57:52.952334 systemd[1]: Started cri-containerd-9169a96d7deecc0f6cdd4eeb11d1b98db6bf588f46a4fa798f4bf2e0d98fcdda.scope - libcontainer container 9169a96d7deecc0f6cdd4eeb11d1b98db6bf588f46a4fa798f4bf2e0d98fcdda. 
Jul 11 07:57:53.037901 containerd[1555]: time="2025-07-11T07:57:53.037846830Z" level=info msg="StartContainer for \"9169a96d7deecc0f6cdd4eeb11d1b98db6bf588f46a4fa798f4bf2e0d98fcdda\" returns successfully" Jul 11 07:57:54.083725 kubelet[2815]: I0711 07:57:54.082983 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6f595d99cc-dlqmb" podStartSLOduration=36.951248951 podStartE2EDuration="44.082923485s" podCreationTimestamp="2025-07-11 07:57:10 +0000 UTC" firstStartedPulling="2025-07-11 07:57:45.688967938 +0000 UTC m=+56.529378773" lastFinishedPulling="2025-07-11 07:57:52.820642462 +0000 UTC m=+63.661053307" observedRunningTime="2025-07-11 07:57:54.078775353 +0000 UTC m=+64.919186248" watchObservedRunningTime="2025-07-11 07:57:54.082923485 +0000 UTC m=+64.923334370" Jul 11 07:57:55.155588 containerd[1555]: time="2025-07-11T07:57:55.155518875Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9169a96d7deecc0f6cdd4eeb11d1b98db6bf588f46a4fa798f4bf2e0d98fcdda\" id:\"73a4fd97ddd02930b0a8856c9a3504bbdba743ac0a17e0e416c660afc77e6b76\" pid:4887 exited_at:{seconds:1752220675 nanos:154644225}" Jul 11 07:57:55.210111 containerd[1555]: time="2025-07-11T07:57:55.209802634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:57:55.213130 containerd[1555]: time="2025-07-11T07:57:55.213101037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 11 07:57:55.219159 containerd[1555]: time="2025-07-11T07:57:55.219113026Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:57:55.224287 containerd[1555]: time="2025-07-11T07:57:55.224175207Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:57:55.227945 containerd[1555]: time="2025-07-11T07:57:55.227609354Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 2.405328007s" Jul 11 07:57:55.227945 containerd[1555]: time="2025-07-11T07:57:55.227652167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 11 07:57:55.231899 containerd[1555]: time="2025-07-11T07:57:55.229755847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 07:57:55.234194 containerd[1555]: time="2025-07-11T07:57:55.234037271Z" level=info msg="CreateContainer within sandbox \"60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 11 07:57:55.258346 containerd[1555]: time="2025-07-11T07:57:55.258293090Z" level=info msg="Container 8bd79132f5bdaf91a0e06bfac291d1af23b5dd4c07ce9fcbe55780ca9a2552ae: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:57:55.289567 containerd[1555]: time="2025-07-11T07:57:55.289517186Z" level=info msg="CreateContainer within sandbox \"60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8bd79132f5bdaf91a0e06bfac291d1af23b5dd4c07ce9fcbe55780ca9a2552ae\"" Jul 11 07:57:55.290991 containerd[1555]: time="2025-07-11T07:57:55.290966093Z" level=info msg="StartContainer for 
\"8bd79132f5bdaf91a0e06bfac291d1af23b5dd4c07ce9fcbe55780ca9a2552ae\"" Jul 11 07:57:55.292604 containerd[1555]: time="2025-07-11T07:57:55.292580021Z" level=info msg="connecting to shim 8bd79132f5bdaf91a0e06bfac291d1af23b5dd4c07ce9fcbe55780ca9a2552ae" address="unix:///run/containerd/s/3575e69fb55adcf970fa02847d178d300eedaecc36c084576739604e690cf994" protocol=ttrpc version=3 Jul 11 07:57:55.348202 systemd[1]: Started cri-containerd-8bd79132f5bdaf91a0e06bfac291d1af23b5dd4c07ce9fcbe55780ca9a2552ae.scope - libcontainer container 8bd79132f5bdaf91a0e06bfac291d1af23b5dd4c07ce9fcbe55780ca9a2552ae. Jul 11 07:57:55.588765 containerd[1555]: time="2025-07-11T07:57:55.588485932Z" level=info msg="StartContainer for \"8bd79132f5bdaf91a0e06bfac291d1af23b5dd4c07ce9fcbe55780ca9a2552ae\" returns successfully" Jul 11 07:57:57.394540 containerd[1555]: time="2025-07-11T07:57:57.394139121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f8586745b-b7nf2,Uid:5a8ad1f1-2ef2-47cc-bb23-9b286c4c9212,Namespace:calico-apiserver,Attempt:0,}" Jul 11 07:57:58.088734 systemd-networkd[1448]: cali898a47a52d0: Link UP Jul 11 07:57:58.090373 systemd-networkd[1448]: cali898a47a52d0: Gained carrier Jul 11 07:57:58.126613 containerd[1555]: 2025-07-11 07:57:57.904 [INFO][4934] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--b7nf2-eth0 calico-apiserver-5f8586745b- calico-apiserver 5a8ad1f1-2ef2-47cc-bb23-9b286c4c9212 827 0 2025-07-11 07:57:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f8586745b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4392-0-0-n-91c7dbf1fc.novalocal calico-apiserver-5f8586745b-b7nf2 eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali898a47a52d0 [] [] }} ContainerID="1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818" Namespace="calico-apiserver" Pod="calico-apiserver-5f8586745b-b7nf2" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--b7nf2-" Jul 11 07:57:58.126613 containerd[1555]: 2025-07-11 07:57:57.904 [INFO][4934] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818" Namespace="calico-apiserver" Pod="calico-apiserver-5f8586745b-b7nf2" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--b7nf2-eth0" Jul 11 07:57:58.126613 containerd[1555]: 2025-07-11 07:57:58.000 [INFO][4949] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818" HandleID="k8s-pod-network.1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818" Workload="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--b7nf2-eth0" Jul 11 07:57:58.126613 containerd[1555]: 2025-07-11 07:57:58.000 [INFO][4949] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818" HandleID="k8s-pod-network.1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818" Workload="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--b7nf2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f900), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4392-0-0-n-91c7dbf1fc.novalocal", "pod":"calico-apiserver-5f8586745b-b7nf2", "timestamp":"2025-07-11 07:57:58.000630608 +0000 UTC"}, Hostname:"ci-4392-0-0-n-91c7dbf1fc.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 07:57:58.126613 containerd[1555]: 2025-07-11 07:57:58.000 [INFO][4949] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 07:57:58.126613 containerd[1555]: 2025-07-11 07:57:58.001 [INFO][4949] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 07:57:58.126613 containerd[1555]: 2025-07-11 07:57:58.001 [INFO][4949] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4392-0-0-n-91c7dbf1fc.novalocal' Jul 11 07:57:58.126613 containerd[1555]: 2025-07-11 07:57:58.020 [INFO][4949] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:58.126613 containerd[1555]: 2025-07-11 07:57:58.027 [INFO][4949] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:58.126613 containerd[1555]: 2025-07-11 07:57:58.035 [INFO][4949] ipam/ipam.go 511: Trying affinity for 192.168.85.192/26 host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:58.126613 containerd[1555]: 2025-07-11 07:57:58.038 [INFO][4949] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.192/26 host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:58.126613 containerd[1555]: 2025-07-11 07:57:58.042 [INFO][4949] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.192/26 host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:58.126613 containerd[1555]: 2025-07-11 07:57:58.042 [INFO][4949] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.85.192/26 handle="k8s-pod-network.1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:58.126613 containerd[1555]: 2025-07-11 07:57:58.045 [INFO][4949] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818 Jul 11 07:57:58.126613 containerd[1555]: 2025-07-11 07:57:58.058 [INFO][4949] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.85.192/26 handle="k8s-pod-network.1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:58.126613 containerd[1555]: 2025-07-11 07:57:58.077 [INFO][4949] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.85.200/26] block=192.168.85.192/26 handle="k8s-pod-network.1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:58.126613 containerd[1555]: 2025-07-11 07:57:58.077 [INFO][4949] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.200/26] handle="k8s-pod-network.1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818" host="ci-4392-0-0-n-91c7dbf1fc.novalocal" Jul 11 07:57:58.126613 containerd[1555]: 2025-07-11 07:57:58.077 [INFO][4949] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 07:57:58.126613 containerd[1555]: 2025-07-11 07:57:58.077 [INFO][4949] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.200/26] IPv6=[] ContainerID="1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818" HandleID="k8s-pod-network.1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818" Workload="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--b7nf2-eth0" Jul 11 07:57:58.127708 containerd[1555]: 2025-07-11 07:57:58.081 [INFO][4934] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818" Namespace="calico-apiserver" Pod="calico-apiserver-5f8586745b-b7nf2" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--b7nf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--b7nf2-eth0", GenerateName:"calico-apiserver-5f8586745b-", Namespace:"calico-apiserver", SelfLink:"", UID:"5a8ad1f1-2ef2-47cc-bb23-9b286c4c9212", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f8586745b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-91c7dbf1fc.novalocal", ContainerID:"", Pod:"calico-apiserver-5f8586745b-b7nf2", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali898a47a52d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:57:58.127708 containerd[1555]: 2025-07-11 07:57:58.082 [INFO][4934] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.200/32] ContainerID="1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818" Namespace="calico-apiserver" Pod="calico-apiserver-5f8586745b-b7nf2" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--b7nf2-eth0" Jul 11 07:57:58.127708 containerd[1555]: 2025-07-11 07:57:58.082 [INFO][4934] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali898a47a52d0 ContainerID="1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818" Namespace="calico-apiserver" Pod="calico-apiserver-5f8586745b-b7nf2" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--b7nf2-eth0" Jul 11 07:57:58.127708 containerd[1555]: 2025-07-11 07:57:58.091 [INFO][4934] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818" Namespace="calico-apiserver" Pod="calico-apiserver-5f8586745b-b7nf2" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--b7nf2-eth0" Jul 11 07:57:58.127708 containerd[1555]: 2025-07-11 07:57:58.094 [INFO][4934] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818" Namespace="calico-apiserver" Pod="calico-apiserver-5f8586745b-b7nf2" WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--b7nf2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--b7nf2-eth0", GenerateName:"calico-apiserver-5f8586745b-", Namespace:"calico-apiserver", SelfLink:"", UID:"5a8ad1f1-2ef2-47cc-bb23-9b286c4c9212", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f8586745b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-91c7dbf1fc.novalocal", ContainerID:"1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818", Pod:"calico-apiserver-5f8586745b-b7nf2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali898a47a52d0", MAC:"ee:fa:3f:4d:ff:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:57:58.127708 containerd[1555]: 2025-07-11 07:57:58.119 [INFO][4934] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818" Namespace="calico-apiserver" Pod="calico-apiserver-5f8586745b-b7nf2" 
WorkloadEndpoint="ci--4392--0--0--n--91c7dbf1fc.novalocal-k8s-calico--apiserver--5f8586745b--b7nf2-eth0" Jul 11 07:57:58.205705 containerd[1555]: time="2025-07-11T07:57:58.205329951Z" level=info msg="connecting to shim 1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818" address="unix:///run/containerd/s/58f648509e74d389c782a9acc19bfb12882e910d46ca3591e928412734f89b80" namespace=k8s.io protocol=ttrpc version=3 Jul 11 07:57:58.260566 systemd[1]: Started cri-containerd-1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818.scope - libcontainer container 1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818. Jul 11 07:57:58.398518 containerd[1555]: time="2025-07-11T07:57:58.398166937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f8586745b-b7nf2,Uid:5a8ad1f1-2ef2-47cc-bb23-9b286c4c9212,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818\"" Jul 11 07:57:59.664212 systemd-networkd[1448]: cali898a47a52d0: Gained IPv6LL Jul 11 07:57:59.953939 containerd[1555]: time="2025-07-11T07:57:59.953870612Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9169a96d7deecc0f6cdd4eeb11d1b98db6bf588f46a4fa798f4bf2e0d98fcdda\" id:\"4db9cb2776cf813717534cd44e402bcda3ace7ee4d08f6f5083386887ebd3b8d\" pid:5026 exited_at:{seconds:1752220679 nanos:951629078}" Jul 11 07:58:00.892119 containerd[1555]: time="2025-07-11T07:58:00.891689658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:58:00.895221 containerd[1555]: time="2025-07-11T07:58:00.895187191Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 11 07:58:00.896264 containerd[1555]: time="2025-07-11T07:58:00.896219510Z" level=info msg="ImageCreate event 
name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:58:00.900106 containerd[1555]: time="2025-07-11T07:58:00.900049077Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:58:00.901374 containerd[1555]: time="2025-07-11T07:58:00.900763229Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 5.669319852s" Jul 11 07:58:00.901374 containerd[1555]: time="2025-07-11T07:58:00.901292805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 11 07:58:00.904670 containerd[1555]: time="2025-07-11T07:58:00.903844847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 11 07:58:00.907173 containerd[1555]: time="2025-07-11T07:58:00.907126733Z" level=info msg="CreateContainer within sandbox \"1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 07:58:00.932105 containerd[1555]: time="2025-07-11T07:58:00.929425337Z" level=info msg="Container 023713d820647391ea4f58419ae07e0cff00b1d680c5fffe6a5b765d31d7c8a6: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:58:00.951913 containerd[1555]: time="2025-07-11T07:58:00.951847990Z" level=info msg="CreateContainer within sandbox \"1e4deee4d9b3486372bd1fbb3c3d310f8ad724bf24ba726cdfed9a38392d3a78\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"023713d820647391ea4f58419ae07e0cff00b1d680c5fffe6a5b765d31d7c8a6\"" Jul 11 07:58:00.953257 containerd[1555]: time="2025-07-11T07:58:00.953225016Z" level=info msg="StartContainer for \"023713d820647391ea4f58419ae07e0cff00b1d680c5fffe6a5b765d31d7c8a6\"" Jul 11 07:58:00.955545 containerd[1555]: time="2025-07-11T07:58:00.955451047Z" level=info msg="connecting to shim 023713d820647391ea4f58419ae07e0cff00b1d680c5fffe6a5b765d31d7c8a6" address="unix:///run/containerd/s/75941547e77e01f5818c3b875c22b097ecc6899c9b89c9ee7a38e76b3f5243d3" protocol=ttrpc version=3 Jul 11 07:58:01.003297 systemd[1]: Started cri-containerd-023713d820647391ea4f58419ae07e0cff00b1d680c5fffe6a5b765d31d7c8a6.scope - libcontainer container 023713d820647391ea4f58419ae07e0cff00b1d680c5fffe6a5b765d31d7c8a6. Jul 11 07:58:01.133173 containerd[1555]: time="2025-07-11T07:58:01.133047326Z" level=info msg="StartContainer for \"023713d820647391ea4f58419ae07e0cff00b1d680c5fffe6a5b765d31d7c8a6\" returns successfully" Jul 11 07:58:04.189339 kubelet[2815]: I0711 07:58:04.188888 2815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 07:58:05.279714 kubelet[2815]: I0711 07:58:05.279422 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f8586745b-56tns" podStartSLOduration=46.157106935 podStartE2EDuration="1m0.278049415s" podCreationTimestamp="2025-07-11 07:57:05 +0000 UTC" firstStartedPulling="2025-07-11 07:57:46.78250134 +0000 UTC m=+57.622912175" lastFinishedPulling="2025-07-11 07:58:00.90344379 +0000 UTC m=+71.743854655" observedRunningTime="2025-07-11 07:58:02.182460661 +0000 UTC m=+73.022871506" watchObservedRunningTime="2025-07-11 07:58:05.278049415 +0000 UTC m=+76.118460261" Jul 11 07:58:08.232597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3717332057.mount: Deactivated successfully. 
Jul 11 07:58:10.123790 containerd[1555]: time="2025-07-11T07:58:10.123685851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:58:10.126749 containerd[1555]: time="2025-07-11T07:58:10.126705753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 11 07:58:10.127527 containerd[1555]: time="2025-07-11T07:58:10.127474432Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:58:10.131427 containerd[1555]: time="2025-07-11T07:58:10.131349500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:58:10.132428 containerd[1555]: time="2025-07-11T07:58:10.132355917Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 9.227488269s" Jul 11 07:58:10.132509 containerd[1555]: time="2025-07-11T07:58:10.132449997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 11 07:58:10.137547 containerd[1555]: time="2025-07-11T07:58:10.136905031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 11 07:58:10.168654 containerd[1555]: time="2025-07-11T07:58:10.168593838Z" level=info msg="CreateContainer within sandbox 
\"4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 11 07:58:10.193150 containerd[1555]: time="2025-07-11T07:58:10.192896078Z" level=info msg="Container 2299f2614d5b62ac59c6f083c83dad181e4f4572f0b84fba41aa10768c05b884: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:58:10.219873 containerd[1555]: time="2025-07-11T07:58:10.219801780Z" level=info msg="CreateContainer within sandbox \"4cfadfe0fd9c15b6b450e5f87767ba156b7d0ec08ab4042468db15dcb77d1b51\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2299f2614d5b62ac59c6f083c83dad181e4f4572f0b84fba41aa10768c05b884\"" Jul 11 07:58:10.221687 containerd[1555]: time="2025-07-11T07:58:10.221632874Z" level=info msg="StartContainer for \"2299f2614d5b62ac59c6f083c83dad181e4f4572f0b84fba41aa10768c05b884\"" Jul 11 07:58:10.238339 containerd[1555]: time="2025-07-11T07:58:10.238242269Z" level=info msg="connecting to shim 2299f2614d5b62ac59c6f083c83dad181e4f4572f0b84fba41aa10768c05b884" address="unix:///run/containerd/s/6b4f12b125b92d2d026e8bfade6968e2f2fae59786fdab7a5728621ead37edc3" protocol=ttrpc version=3 Jul 11 07:58:10.331297 systemd[1]: Started cri-containerd-2299f2614d5b62ac59c6f083c83dad181e4f4572f0b84fba41aa10768c05b884.scope - libcontainer container 2299f2614d5b62ac59c6f083c83dad181e4f4572f0b84fba41aa10768c05b884. 
Jul 11 07:58:10.625801 containerd[1555]: time="2025-07-11T07:58:10.625734595Z" level=info msg="StartContainer for \"2299f2614d5b62ac59c6f083c83dad181e4f4572f0b84fba41aa10768c05b884\" returns successfully" Jul 11 07:58:11.280579 kubelet[2815]: I0711 07:58:11.279662 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-bfvnk" podStartSLOduration=39.431118714 podStartE2EDuration="1m2.279643421s" podCreationTimestamp="2025-07-11 07:57:09 +0000 UTC" firstStartedPulling="2025-07-11 07:57:47.287835195 +0000 UTC m=+58.128246030" lastFinishedPulling="2025-07-11 07:58:10.136359892 +0000 UTC m=+80.976770737" observedRunningTime="2025-07-11 07:58:11.277288792 +0000 UTC m=+82.117699637" watchObservedRunningTime="2025-07-11 07:58:11.279643421 +0000 UTC m=+82.120054256" Jul 11 07:58:11.421446 containerd[1555]: time="2025-07-11T07:58:11.421387892Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2299f2614d5b62ac59c6f083c83dad181e4f4572f0b84fba41aa10768c05b884\" id:\"5fa615798a263f7889caebdb514569e44bcee17554bd891838f831242252420e\" pid:5150 exit_status:1 exited_at:{seconds:1752220691 nanos:419666290}" Jul 11 07:58:12.384117 containerd[1555]: time="2025-07-11T07:58:12.384026049Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2299f2614d5b62ac59c6f083c83dad181e4f4572f0b84fba41aa10768c05b884\" id:\"859aa2bc1e3265292770c6cca5d75d9b73733975c82c3593efd6804df617763c\" pid:5172 exit_status:1 exited_at:{seconds:1752220692 nanos:382821843}" Jul 11 07:58:12.899649 containerd[1555]: time="2025-07-11T07:58:12.899565454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:58:12.902813 containerd[1555]: time="2025-07-11T07:58:12.902123472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 11 
07:58:12.903959 containerd[1555]: time="2025-07-11T07:58:12.903922952Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:58:12.910374 containerd[1555]: time="2025-07-11T07:58:12.910296034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:58:12.912876 containerd[1555]: time="2025-07-11T07:58:12.912438313Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.775484308s" Jul 11 07:58:12.912876 containerd[1555]: time="2025-07-11T07:58:12.912495744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 11 07:58:12.915406 containerd[1555]: time="2025-07-11T07:58:12.915165837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 11 07:58:12.915683 containerd[1555]: time="2025-07-11T07:58:12.915658634Z" level=info msg="CreateContainer within sandbox \"ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 11 07:58:12.937870 containerd[1555]: time="2025-07-11T07:58:12.936270332Z" level=info msg="Container e9b1d0e8ed3e89a890e2170866c88d9807e75e545f47b9ff774da6b5923af163: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:58:12.957099 containerd[1555]: 
time="2025-07-11T07:58:12.955903067Z" level=info msg="CreateContainer within sandbox \"ffa9f8cbc50db4e71dcf9a48c1a5e2748623ca85766f247f11fb44e6d3705050\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e9b1d0e8ed3e89a890e2170866c88d9807e75e545f47b9ff774da6b5923af163\"" Jul 11 07:58:12.958367 containerd[1555]: time="2025-07-11T07:58:12.958322819Z" level=info msg="StartContainer for \"e9b1d0e8ed3e89a890e2170866c88d9807e75e545f47b9ff774da6b5923af163\"" Jul 11 07:58:12.961973 containerd[1555]: time="2025-07-11T07:58:12.961936126Z" level=info msg="connecting to shim e9b1d0e8ed3e89a890e2170866c88d9807e75e545f47b9ff774da6b5923af163" address="unix:///run/containerd/s/2395c68d1b80f39d3a6d501ce3c03a3337afa5aceb67a94b401f5c803939ca4f" protocol=ttrpc version=3 Jul 11 07:58:13.012378 systemd[1]: Started cri-containerd-e9b1d0e8ed3e89a890e2170866c88d9807e75e545f47b9ff774da6b5923af163.scope - libcontainer container e9b1d0e8ed3e89a890e2170866c88d9807e75e545f47b9ff774da6b5923af163. 
Jul 11 07:58:13.216313 containerd[1555]: time="2025-07-11T07:58:13.216222844Z" level=info msg="StartContainer for \"e9b1d0e8ed3e89a890e2170866c88d9807e75e545f47b9ff774da6b5923af163\" returns successfully" Jul 11 07:58:13.289450 kubelet[2815]: I0711 07:58:13.289357 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-v6t7t" podStartSLOduration=34.630046329 podStartE2EDuration="1m3.289328915s" podCreationTimestamp="2025-07-11 07:57:10 +0000 UTC" firstStartedPulling="2025-07-11 07:57:44.25426465 +0000 UTC m=+55.094675485" lastFinishedPulling="2025-07-11 07:58:12.913547226 +0000 UTC m=+83.753958071" observedRunningTime="2025-07-11 07:58:13.2844398 +0000 UTC m=+84.124850635" watchObservedRunningTime="2025-07-11 07:58:13.289328915 +0000 UTC m=+84.129739760" Jul 11 07:58:13.446685 containerd[1555]: time="2025-07-11T07:58:13.446579481Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2299f2614d5b62ac59c6f083c83dad181e4f4572f0b84fba41aa10768c05b884\" id:\"dfa04fec8beb436bd2bbcbb960fbdb6f7d7ab5301195feadef7fac6621d84d58\" pid:5229 exit_status:1 exited_at:{seconds:1752220693 nanos:445892431}" Jul 11 07:58:13.792177 kubelet[2815]: I0711 07:58:13.791689 2815 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 11 07:58:13.792177 kubelet[2815]: I0711 07:58:13.791841 2815 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 11 07:58:16.861980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount952958717.mount: Deactivated successfully. 
Jul 11 07:58:16.899101 containerd[1555]: time="2025-07-11T07:58:16.898939632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:58:16.901101 containerd[1555]: time="2025-07-11T07:58:16.900498013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 11 07:58:16.901864 containerd[1555]: time="2025-07-11T07:58:16.901822935Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:58:16.905577 containerd[1555]: time="2025-07-11T07:58:16.905520350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:58:16.907469 containerd[1555]: time="2025-07-11T07:58:16.907239819Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 3.992038906s" Jul 11 07:58:16.907469 containerd[1555]: time="2025-07-11T07:58:16.907281680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 11 07:58:16.910617 containerd[1555]: time="2025-07-11T07:58:16.910565992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 07:58:16.913090 containerd[1555]: time="2025-07-11T07:58:16.912512267Z" level=info msg="CreateContainer within sandbox 
\"60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 11 07:58:16.929637 containerd[1555]: time="2025-07-11T07:58:16.929574528Z" level=info msg="Container ac4ec46d5a2cf12d12019fa0d0a3d5a3aa15ac68749dc3c70797dec5ab9084ed: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:58:16.944271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2377966046.mount: Deactivated successfully. Jul 11 07:58:16.954415 containerd[1555]: time="2025-07-11T07:58:16.954346074Z" level=info msg="CreateContainer within sandbox \"60655d4abf337e53b9fe2dee91af3b21fb8a15d968e4bbf1a2d4b30aa5bea4e8\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ac4ec46d5a2cf12d12019fa0d0a3d5a3aa15ac68749dc3c70797dec5ab9084ed\"" Jul 11 07:58:16.957018 containerd[1555]: time="2025-07-11T07:58:16.956936556Z" level=info msg="StartContainer for \"ac4ec46d5a2cf12d12019fa0d0a3d5a3aa15ac68749dc3c70797dec5ab9084ed\"" Jul 11 07:58:16.959565 containerd[1555]: time="2025-07-11T07:58:16.959472041Z" level=info msg="connecting to shim ac4ec46d5a2cf12d12019fa0d0a3d5a3aa15ac68749dc3c70797dec5ab9084ed" address="unix:///run/containerd/s/3575e69fb55adcf970fa02847d178d300eedaecc36c084576739604e690cf994" protocol=ttrpc version=3 Jul 11 07:58:17.003273 systemd[1]: Started cri-containerd-ac4ec46d5a2cf12d12019fa0d0a3d5a3aa15ac68749dc3c70797dec5ab9084ed.scope - libcontainer container ac4ec46d5a2cf12d12019fa0d0a3d5a3aa15ac68749dc3c70797dec5ab9084ed. 
Jul 11 07:58:17.179124 containerd[1555]: time="2025-07-11T07:58:17.178273794Z" level=info msg="StartContainer for \"ac4ec46d5a2cf12d12019fa0d0a3d5a3aa15ac68749dc3c70797dec5ab9084ed\" returns successfully" Jul 11 07:58:17.288615 kubelet[2815]: I0711 07:58:17.288520 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-78b896cb4b-hc965" podStartSLOduration=2.343048467 podStartE2EDuration="33.288498779s" podCreationTimestamp="2025-07-11 07:57:44 +0000 UTC" firstStartedPulling="2025-07-11 07:57:45.963784305 +0000 UTC m=+56.804195191" lastFinishedPulling="2025-07-11 07:58:16.909234668 +0000 UTC m=+87.749645503" observedRunningTime="2025-07-11 07:58:17.286216811 +0000 UTC m=+88.126627666" watchObservedRunningTime="2025-07-11 07:58:17.288498779 +0000 UTC m=+88.128909614" Jul 11 07:58:17.400272 containerd[1555]: time="2025-07-11T07:58:17.400058815Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:58:17.402058 containerd[1555]: time="2025-07-11T07:58:17.402028523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 11 07:58:17.404759 containerd[1555]: time="2025-07-11T07:58:17.404712861Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 494.104428ms" Jul 11 07:58:17.404841 containerd[1555]: time="2025-07-11T07:58:17.404769490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 11 07:58:17.409465 containerd[1555]: 
time="2025-07-11T07:58:17.409405362Z" level=info msg="CreateContainer within sandbox \"1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 07:58:17.449331 containerd[1555]: time="2025-07-11T07:58:17.449291929Z" level=info msg="Container 84378e9419e22361aa49db0a85eca1d29acb180c3cbe08259cc9de75585295b6: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:58:17.465871 containerd[1555]: time="2025-07-11T07:58:17.465743585Z" level=info msg="CreateContainer within sandbox \"1082cd0ca5c068afbf68fe8fc4b4387eb53535665f8e6c7b19f3f7e2887f7818\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"84378e9419e22361aa49db0a85eca1d29acb180c3cbe08259cc9de75585295b6\"" Jul 11 07:58:17.469670 containerd[1555]: time="2025-07-11T07:58:17.469629689Z" level=info msg="StartContainer for \"84378e9419e22361aa49db0a85eca1d29acb180c3cbe08259cc9de75585295b6\"" Jul 11 07:58:17.471247 containerd[1555]: time="2025-07-11T07:58:17.471196895Z" level=info msg="connecting to shim 84378e9419e22361aa49db0a85eca1d29acb180c3cbe08259cc9de75585295b6" address="unix:///run/containerd/s/58f648509e74d389c782a9acc19bfb12882e910d46ca3591e928412734f89b80" protocol=ttrpc version=3 Jul 11 07:58:17.498307 systemd[1]: Started cri-containerd-84378e9419e22361aa49db0a85eca1d29acb180c3cbe08259cc9de75585295b6.scope - libcontainer container 84378e9419e22361aa49db0a85eca1d29acb180c3cbe08259cc9de75585295b6. 
Jul 11 07:58:17.678101 containerd[1555]: time="2025-07-11T07:58:17.677982435Z" level=info msg="StartContainer for \"84378e9419e22361aa49db0a85eca1d29acb180c3cbe08259cc9de75585295b6\" returns successfully" Jul 11 07:58:17.823152 containerd[1555]: time="2025-07-11T07:58:17.822995819Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9169a96d7deecc0f6cdd4eeb11d1b98db6bf588f46a4fa798f4bf2e0d98fcdda\" id:\"c04a797ee9abe9a57aea50b2a4aba786ef1f316a432d4cee7e21124878e86cf8\" pid:5327 exited_at:{seconds:1752220697 nanos:822013495}" Jul 11 07:58:18.303226 kubelet[2815]: I0711 07:58:18.303144 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f8586745b-b7nf2" podStartSLOduration=54.300406164 podStartE2EDuration="1m13.30301502s" podCreationTimestamp="2025-07-11 07:57:05 +0000 UTC" firstStartedPulling="2025-07-11 07:57:58.403583883 +0000 UTC m=+69.243994718" lastFinishedPulling="2025-07-11 07:58:17.406192739 +0000 UTC m=+88.246603574" observedRunningTime="2025-07-11 07:58:18.30036524 +0000 UTC m=+89.140776085" watchObservedRunningTime="2025-07-11 07:58:18.30301502 +0000 UTC m=+89.143425855" Jul 11 07:58:21.448561 containerd[1555]: time="2025-07-11T07:58:21.448468238Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b64067bf9aa389ef32a0a42fccbd24435765f70308a9933d508c58040e2d23bc\" id:\"26e488ecb9416770112b3836fb9145505fafbc478e473dacbe9086a52f1def62\" pid:5362 exited_at:{seconds:1752220701 nanos:447268862}" Jul 11 07:58:22.150645 update_engine[1531]: I20250711 07:58:22.148486 1531 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 11 07:58:22.150645 update_engine[1531]: I20250711 07:58:22.148619 1531 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 11 07:58:22.152246 update_engine[1531]: I20250711 07:58:22.151458 1531 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs 
Jul 11 07:58:22.153797 update_engine[1531]: I20250711 07:58:22.153728 1531 omaha_request_params.cc:62] Current group set to developer Jul 11 07:58:22.172756 update_engine[1531]: I20250711 07:58:22.172675 1531 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 11 07:58:22.174170 update_engine[1531]: I20250711 07:58:22.173028 1531 update_attempter.cc:643] Scheduling an action processor start. Jul 11 07:58:22.174170 update_engine[1531]: I20250711 07:58:22.173081 1531 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 11 07:58:22.189626 update_engine[1531]: I20250711 07:58:22.189570 1531 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 11 07:58:22.189921 update_engine[1531]: I20250711 07:58:22.189898 1531 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 11 07:58:22.191117 update_engine[1531]: I20250711 07:58:22.190004 1531 omaha_request_action.cc:272] Request: Jul 11 07:58:22.191117 update_engine[1531]: Jul 11 07:58:22.191117 update_engine[1531]: Jul 11 07:58:22.191117 update_engine[1531]: Jul 11 07:58:22.191117 update_engine[1531]: Jul 11 07:58:22.191117 update_engine[1531]: Jul 11 07:58:22.191117 update_engine[1531]: Jul 11 07:58:22.191117 update_engine[1531]: Jul 11 07:58:22.191117 update_engine[1531]: Jul 11 07:58:22.191117 update_engine[1531]: I20250711 07:58:22.190032 1531 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 11 07:58:22.191706 locksmithd[1566]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 11 07:58:22.199850 update_engine[1531]: I20250711 07:58:22.199791 1531 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 11 07:58:22.200657 update_engine[1531]: I20250711 07:58:22.200613 1531 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 11 07:58:22.207632 update_engine[1531]: E20250711 07:58:22.207475 1531 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 11 07:58:22.208095 update_engine[1531]: I20250711 07:58:22.208048 1531 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jul 11 07:58:24.168962 systemd[1]: Started sshd@9-172.24.4.10:22-172.24.4.1:43918.service - OpenSSH per-connection server daemon (172.24.4.1:43918).
Jul 11 07:58:25.604919 sshd[5378]: Accepted publickey for core from 172.24.4.1 port 43918 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8
Jul 11 07:58:25.609202 sshd-session[5378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:58:25.619068 systemd-logind[1529]: New session 12 of user core.
Jul 11 07:58:25.629439 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 11 07:58:26.549015 sshd[5383]: Connection closed by 172.24.4.1 port 43918
Jul 11 07:58:26.549418 sshd-session[5378]: pam_unix(sshd:session): session closed for user core
Jul 11 07:58:26.555893 systemd[1]: sshd@9-172.24.4.10:22-172.24.4.1:43918.service: Deactivated successfully.
Jul 11 07:58:26.556559 systemd-logind[1529]: Session 12 logged out. Waiting for processes to exit.
Jul 11 07:58:26.560158 systemd[1]: session-12.scope: Deactivated successfully.
Jul 11 07:58:26.563688 systemd-logind[1529]: Removed session 12.
Jul 11 07:58:29.844370 containerd[1555]: time="2025-07-11T07:58:29.843950893Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9169a96d7deecc0f6cdd4eeb11d1b98db6bf588f46a4fa798f4bf2e0d98fcdda\" id:\"24e5a86a9a30919289f6c3d96a8377b088942cf411c6baa9abfbc6eaf6d862d3\" pid:5433 exited_at:{seconds:1752220709 nanos:843590885}"
Jul 11 07:58:29.947534 containerd[1555]: time="2025-07-11T07:58:29.947480809Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2299f2614d5b62ac59c6f083c83dad181e4f4572f0b84fba41aa10768c05b884\" id:\"f544afda3abc10d07c33cd36313c2193fd9a42298010989627bd8845c490332c\" pid:5424 exited_at:{seconds:1752220709 nanos:945548749}"
Jul 11 07:58:31.583202 systemd[1]: Started sshd@10-172.24.4.10:22-172.24.4.1:43930.service - OpenSSH per-connection server daemon (172.24.4.1:43930).
Jul 11 07:58:32.150803 update_engine[1531]: I20250711 07:58:32.149217 1531 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 11 07:58:32.150803 update_engine[1531]: I20250711 07:58:32.150517 1531 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 11 07:58:32.151852 update_engine[1531]: I20250711 07:58:32.151587 1531 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 11 07:58:32.156885 update_engine[1531]: E20250711 07:58:32.156736 1531 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 11 07:58:32.156885 update_engine[1531]: I20250711 07:58:32.156843 1531 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jul 11 07:58:32.872314 sshd[5448]: Accepted publickey for core from 172.24.4.1 port 43930 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8
Jul 11 07:58:32.875005 sshd-session[5448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:58:32.887432 systemd-logind[1529]: New session 13 of user core.
Jul 11 07:58:32.895182 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 11 07:58:33.596465 sshd[5451]: Connection closed by 172.24.4.1 port 43930
Jul 11 07:58:33.598533 sshd-session[5448]: pam_unix(sshd:session): session closed for user core
Jul 11 07:58:33.620066 systemd[1]: sshd@10-172.24.4.10:22-172.24.4.1:43930.service: Deactivated successfully.
Jul 11 07:58:33.635764 systemd[1]: session-13.scope: Deactivated successfully.
Jul 11 07:58:33.642980 systemd-logind[1529]: Session 13 logged out. Waiting for processes to exit.
Jul 11 07:58:33.650955 systemd-logind[1529]: Removed session 13.
Jul 11 07:58:38.619680 systemd[1]: Started sshd@11-172.24.4.10:22-172.24.4.1:60740.service - OpenSSH per-connection server daemon (172.24.4.1:60740).
Jul 11 07:58:39.742391 sshd[5466]: Accepted publickey for core from 172.24.4.1 port 60740 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8
Jul 11 07:58:39.745937 sshd-session[5466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:58:39.755914 systemd-logind[1529]: New session 14 of user core.
Jul 11 07:58:39.763264 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 11 07:58:40.494221 sshd[5469]: Connection closed by 172.24.4.1 port 60740
Jul 11 07:58:40.494967 sshd-session[5466]: pam_unix(sshd:session): session closed for user core
Jul 11 07:58:40.508909 systemd[1]: sshd@11-172.24.4.10:22-172.24.4.1:60740.service: Deactivated successfully.
Jul 11 07:58:40.515031 systemd[1]: session-14.scope: Deactivated successfully.
Jul 11 07:58:40.517306 systemd-logind[1529]: Session 14 logged out. Waiting for processes to exit.
Jul 11 07:58:40.524291 systemd[1]: Started sshd@12-172.24.4.10:22-172.24.4.1:60752.service - OpenSSH per-connection server daemon (172.24.4.1:60752).
Jul 11 07:58:40.527194 systemd-logind[1529]: Removed session 14.
Jul 11 07:58:41.696529 sshd[5483]: Accepted publickey for core from 172.24.4.1 port 60752 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8
Jul 11 07:58:41.698898 sshd-session[5483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:58:41.707254 systemd-logind[1529]: New session 15 of user core.
Jul 11 07:58:41.714242 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 11 07:58:42.146184 update_engine[1531]: I20250711 07:58:42.143190 1531 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 11 07:58:42.146184 update_engine[1531]: I20250711 07:58:42.143754 1531 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 11 07:58:42.147436 update_engine[1531]: I20250711 07:58:42.147392 1531 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 11 07:58:42.152593 update_engine[1531]: E20250711 07:58:42.152525 1531 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 11 07:58:42.152917 update_engine[1531]: I20250711 07:58:42.152892 1531 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jul 11 07:58:42.556790 sshd[5486]: Connection closed by 172.24.4.1 port 60752
Jul 11 07:58:42.557295 sshd-session[5483]: pam_unix(sshd:session): session closed for user core
Jul 11 07:58:42.570986 systemd[1]: sshd@12-172.24.4.10:22-172.24.4.1:60752.service: Deactivated successfully.
Jul 11 07:58:42.576797 systemd[1]: session-15.scope: Deactivated successfully.
Jul 11 07:58:42.578529 systemd-logind[1529]: Session 15 logged out. Waiting for processes to exit.
Jul 11 07:58:42.586363 systemd[1]: Started sshd@13-172.24.4.10:22-172.24.4.1:60760.service - OpenSSH per-connection server daemon (172.24.4.1:60760).
Jul 11 07:58:42.590584 systemd-logind[1529]: Removed session 15.
Jul 11 07:58:43.852153 sshd[5496]: Accepted publickey for core from 172.24.4.1 port 60760 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8
Jul 11 07:58:43.855948 sshd-session[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:58:43.866534 systemd-logind[1529]: New session 16 of user core.
Jul 11 07:58:43.872249 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 11 07:58:44.607192 sshd[5499]: Connection closed by 172.24.4.1 port 60760
Jul 11 07:58:44.607575 sshd-session[5496]: pam_unix(sshd:session): session closed for user core
Jul 11 07:58:44.613518 systemd-logind[1529]: Session 16 logged out. Waiting for processes to exit.
Jul 11 07:58:44.615441 systemd[1]: sshd@13-172.24.4.10:22-172.24.4.1:60760.service: Deactivated successfully.
Jul 11 07:58:44.619871 systemd[1]: session-16.scope: Deactivated successfully.
Jul 11 07:58:44.623408 systemd-logind[1529]: Removed session 16.
Jul 11 07:58:49.628630 systemd[1]: Started sshd@14-172.24.4.10:22-172.24.4.1:42348.service - OpenSSH per-connection server daemon (172.24.4.1:42348).
Jul 11 07:58:50.797120 sshd[5515]: Accepted publickey for core from 172.24.4.1 port 42348 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8
Jul 11 07:58:50.800265 sshd-session[5515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:58:50.814002 systemd-logind[1529]: New session 17 of user core.
Jul 11 07:58:50.826449 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 11 07:58:51.309900 containerd[1555]: time="2025-07-11T07:58:51.309340678Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b64067bf9aa389ef32a0a42fccbd24435765f70308a9933d508c58040e2d23bc\" id:\"c27cffbbe18a78b439e153d1c91d45fd38de40a94c3aaa0545870b6f1d469594\" pid:5533 exited_at:{seconds:1752220731 nanos:306338350}"
Jul 11 07:58:51.656888 sshd[5518]: Connection closed by 172.24.4.1 port 42348
Jul 11 07:58:51.658929 sshd-session[5515]: pam_unix(sshd:session): session closed for user core
Jul 11 07:58:51.666306 systemd[1]: sshd@14-172.24.4.10:22-172.24.4.1:42348.service: Deactivated successfully.
Jul 11 07:58:51.669863 systemd[1]: session-17.scope: Deactivated successfully.
Jul 11 07:58:51.672151 systemd-logind[1529]: Session 17 logged out. Waiting for processes to exit.
Jul 11 07:58:51.674097 systemd-logind[1529]: Removed session 17.
Jul 11 07:58:52.143392 update_engine[1531]: I20250711 07:58:52.143242 1531 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 11 07:58:52.145312 update_engine[1531]: I20250711 07:58:52.143849 1531 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 11 07:58:52.146782 update_engine[1531]: I20250711 07:58:52.146543 1531 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 11 07:58:52.152249 update_engine[1531]: E20250711 07:58:52.151743 1531 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 11 07:58:52.154365 update_engine[1531]: I20250711 07:58:52.153990 1531 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 11 07:58:52.154365 update_engine[1531]: I20250711 07:58:52.154328 1531 omaha_request_action.cc:617] Omaha request response:
Jul 11 07:58:52.154743 update_engine[1531]: E20250711 07:58:52.154639 1531 omaha_request_action.cc:636] Omaha request network transfer failed.
Jul 11 07:58:52.157148 update_engine[1531]: I20250711 07:58:52.155421 1531 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jul 11 07:58:52.157148 update_engine[1531]: I20250711 07:58:52.157130 1531 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 11 07:58:52.157416 update_engine[1531]: I20250711 07:58:52.157166 1531 update_attempter.cc:306] Processing Done.
Jul 11 07:58:52.157416 update_engine[1531]: E20250711 07:58:52.157266 1531 update_attempter.cc:619] Update failed.
Jul 11 07:58:52.157416 update_engine[1531]: I20250711 07:58:52.157296 1531 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jul 11 07:58:52.157416 update_engine[1531]: I20250711 07:58:52.157310 1531 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jul 11 07:58:52.157416 update_engine[1531]: I20250711 07:58:52.157322 1531 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jul 11 07:58:52.158476 update_engine[1531]: I20250711 07:58:52.158385 1531 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 11 07:58:52.158641 update_engine[1531]: I20250711 07:58:52.158562 1531 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 11 07:58:52.158641 update_engine[1531]: I20250711 07:58:52.158581 1531 omaha_request_action.cc:272] Request:
Jul 11 07:58:52.158641 update_engine[1531]:
Jul 11 07:58:52.158641 update_engine[1531]:
Jul 11 07:58:52.158641 update_engine[1531]:
Jul 11 07:58:52.158641 update_engine[1531]:
Jul 11 07:58:52.158641 update_engine[1531]:
Jul 11 07:58:52.158641 update_engine[1531]:
Jul 11 07:58:52.158641 update_engine[1531]: I20250711 07:58:52.158596 1531 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 11 07:58:52.160385 update_engine[1531]: I20250711 07:58:52.159029 1531 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 11 07:58:52.163888 update_engine[1531]: I20250711 07:58:52.163581 1531 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 11 07:58:52.168170 locksmithd[1566]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jul 11 07:58:52.169710 update_engine[1531]: E20250711 07:58:52.168871 1531 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 11 07:58:52.169710 update_engine[1531]: I20250711 07:58:52.168975 1531 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 11 07:58:52.169710 update_engine[1531]: I20250711 07:58:52.168993 1531 omaha_request_action.cc:617] Omaha request response:
Jul 11 07:58:52.169710 update_engine[1531]: I20250711 07:58:52.169008 1531 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 11 07:58:52.169710 update_engine[1531]: I20250711 07:58:52.169019 1531 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 11 07:58:52.169710 update_engine[1531]: I20250711 07:58:52.169032 1531 update_attempter.cc:306] Processing Done.
Jul 11 07:58:52.169710 update_engine[1531]: I20250711 07:58:52.169045 1531 update_attempter.cc:310] Error event sent.
Jul 11 07:58:52.169710 update_engine[1531]: I20250711 07:58:52.169198 1531 update_check_scheduler.cc:74] Next update check in 49m56s
Jul 11 07:58:52.173754 locksmithd[1566]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jul 11 07:58:56.687447 systemd[1]: Started sshd@15-172.24.4.10:22-172.24.4.1:35488.service - OpenSSH per-connection server daemon (172.24.4.1:35488).
Jul 11 07:58:57.901900 sshd[5564]: Accepted publickey for core from 172.24.4.1 port 35488 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8
Jul 11 07:58:57.907342 sshd-session[5564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:58:57.920842 systemd-logind[1529]: New session 18 of user core.
Jul 11 07:58:57.932496 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 11 07:58:58.626170 sshd[5567]: Connection closed by 172.24.4.1 port 35488
Jul 11 07:58:58.626971 sshd-session[5564]: pam_unix(sshd:session): session closed for user core
Jul 11 07:58:58.632447 systemd[1]: sshd@15-172.24.4.10:22-172.24.4.1:35488.service: Deactivated successfully.
Jul 11 07:58:58.636007 systemd[1]: session-18.scope: Deactivated successfully.
Jul 11 07:58:58.639702 systemd-logind[1529]: Session 18 logged out. Waiting for processes to exit.
Jul 11 07:58:58.642349 systemd-logind[1529]: Removed session 18.
Jul 11 07:58:59.843413 containerd[1555]: time="2025-07-11T07:58:59.843334750Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9169a96d7deecc0f6cdd4eeb11d1b98db6bf588f46a4fa798f4bf2e0d98fcdda\" id:\"79e9bc82a9634d1d77c62cd4646599673993eff4381d9500cbc6a13276ecde73\" pid:5601 exited_at:{seconds:1752220739 nanos:842827747}"
Jul 11 07:58:59.902991 containerd[1555]: time="2025-07-11T07:58:59.902926241Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2299f2614d5b62ac59c6f083c83dad181e4f4572f0b84fba41aa10768c05b884\" id:\"561c4117e231e1048d2e8b2611168140842e9649914ab203a94b9976cbff718f\" pid:5610 exited_at:{seconds:1752220739 nanos:902062430}"
Jul 11 07:59:03.334756 containerd[1555]: time="2025-07-11T07:59:03.334677845Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2299f2614d5b62ac59c6f083c83dad181e4f4572f0b84fba41aa10768c05b884\" id:\"45ca58e056866305d1eedf0dad9c9a72476516b6edd22e66f75aea79961d6afd\" pid:5636 exited_at:{seconds:1752220743 nanos:333297164}"
Jul 11 07:59:03.641518 systemd[1]: Started sshd@16-172.24.4.10:22-172.24.4.1:38602.service - OpenSSH per-connection server daemon (172.24.4.1:38602).
Jul 11 07:59:04.743100 sshd[5648]: Accepted publickey for core from 172.24.4.1 port 38602 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8
Jul 11 07:59:04.746776 sshd-session[5648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:59:04.758220 systemd-logind[1529]: New session 19 of user core.
Jul 11 07:59:04.762421 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 11 07:59:05.635880 sshd[5651]: Connection closed by 172.24.4.1 port 38602
Jul 11 07:59:05.637704 sshd-session[5648]: pam_unix(sshd:session): session closed for user core
Jul 11 07:59:05.644497 systemd-logind[1529]: Session 19 logged out. Waiting for processes to exit.
Jul 11 07:59:05.644681 systemd[1]: sshd@16-172.24.4.10:22-172.24.4.1:38602.service: Deactivated successfully.
Jul 11 07:59:05.647907 systemd[1]: session-19.scope: Deactivated successfully.
Jul 11 07:59:05.651649 systemd-logind[1529]: Removed session 19.
Jul 11 07:59:10.657352 systemd[1]: Started sshd@17-172.24.4.10:22-172.24.4.1:38616.service - OpenSSH per-connection server daemon (172.24.4.1:38616).
Jul 11 07:59:11.827508 sshd[5669]: Accepted publickey for core from 172.24.4.1 port 38616 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8
Jul 11 07:59:11.831009 sshd-session[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:59:11.839461 systemd-logind[1529]: New session 20 of user core.
Jul 11 07:59:11.844323 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 11 07:59:12.596793 sshd[5672]: Connection closed by 172.24.4.1 port 38616
Jul 11 07:59:12.596213 sshd-session[5669]: pam_unix(sshd:session): session closed for user core
Jul 11 07:59:12.619533 systemd[1]: sshd@17-172.24.4.10:22-172.24.4.1:38616.service: Deactivated successfully.
Jul 11 07:59:12.631399 systemd[1]: session-20.scope: Deactivated successfully.
Jul 11 07:59:12.634847 systemd-logind[1529]: Session 20 logged out. Waiting for processes to exit.
Jul 11 07:59:12.643306 systemd-logind[1529]: Removed session 20.
Jul 11 07:59:12.645975 systemd[1]: Started sshd@18-172.24.4.10:22-172.24.4.1:38618.service - OpenSSH per-connection server daemon (172.24.4.1:38618).
Jul 11 07:59:13.800337 sshd[5684]: Accepted publickey for core from 172.24.4.1 port 38618 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8
Jul 11 07:59:13.804444 sshd-session[5684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:59:13.815169 systemd-logind[1529]: New session 21 of user core.
Jul 11 07:59:13.822413 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 11 07:59:14.891193 sshd[5687]: Connection closed by 172.24.4.1 port 38618
Jul 11 07:59:14.891409 sshd-session[5684]: pam_unix(sshd:session): session closed for user core
Jul 11 07:59:14.905185 systemd[1]: sshd@18-172.24.4.10:22-172.24.4.1:38618.service: Deactivated successfully.
Jul 11 07:59:14.908980 systemd[1]: session-21.scope: Deactivated successfully.
Jul 11 07:59:14.912871 systemd-logind[1529]: Session 21 logged out. Waiting for processes to exit.
Jul 11 07:59:14.919189 systemd[1]: Started sshd@19-172.24.4.10:22-172.24.4.1:46598.service - OpenSSH per-connection server daemon (172.24.4.1:46598).
Jul 11 07:59:14.922506 systemd-logind[1529]: Removed session 21.
Jul 11 07:59:16.042593 sshd[5697]: Accepted publickey for core from 172.24.4.1 port 46598 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8
Jul 11 07:59:16.045137 sshd-session[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:59:16.054730 systemd-logind[1529]: New session 22 of user core.
Jul 11 07:59:16.063243 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 11 07:59:17.895153 containerd[1555]: time="2025-07-11T07:59:17.894777768Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9169a96d7deecc0f6cdd4eeb11d1b98db6bf588f46a4fa798f4bf2e0d98fcdda\" id:\"da1c25bb208cfad812df5b8347521a48df320ad8189d00e4df6c6f891bd55cd3\" pid:5726 exited_at:{seconds:1752220757 nanos:893579045}"
Jul 11 07:59:20.562159 sshd[5700]: Connection closed by 172.24.4.1 port 46598
Jul 11 07:59:20.564238 sshd-session[5697]: pam_unix(sshd:session): session closed for user core
Jul 11 07:59:20.578920 systemd[1]: sshd@19-172.24.4.10:22-172.24.4.1:46598.service: Deactivated successfully.
Jul 11 07:59:20.584236 systemd[1]: session-22.scope: Deactivated successfully.
Jul 11 07:59:20.584780 systemd[1]: session-22.scope: Consumed 912ms CPU time, 77.4M memory peak.
Jul 11 07:59:20.588532 systemd-logind[1529]: Session 22 logged out. Waiting for processes to exit.
Jul 11 07:59:20.592359 systemd[1]: Started sshd@20-172.24.4.10:22-172.24.4.1:46600.service - OpenSSH per-connection server daemon (172.24.4.1:46600).
Jul 11 07:59:20.596785 systemd-logind[1529]: Removed session 22.
Jul 11 07:59:21.645156 containerd[1555]: time="2025-07-11T07:59:21.645052509Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b64067bf9aa389ef32a0a42fccbd24435765f70308a9933d508c58040e2d23bc\" id:\"0dd35e9a073d162279a49bcfe09eac76e70a94065621c1685597551f58d8403f\" pid:5779 exited_at:{seconds:1752220761 nanos:642556257}"
Jul 11 07:59:21.797584 sshd[5763]: Accepted publickey for core from 172.24.4.1 port 46600 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8
Jul 11 07:59:21.801974 sshd-session[5763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:59:21.812141 systemd-logind[1529]: New session 23 of user core.
Jul 11 07:59:21.822681 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 11 07:59:22.739267 sshd[5791]: Connection closed by 172.24.4.1 port 46600
Jul 11 07:59:22.739518 sshd-session[5763]: pam_unix(sshd:session): session closed for user core
Jul 11 07:59:22.751035 systemd[1]: sshd@20-172.24.4.10:22-172.24.4.1:46600.service: Deactivated successfully.
Jul 11 07:59:22.754765 systemd[1]: session-23.scope: Deactivated successfully.
Jul 11 07:59:22.756917 systemd-logind[1529]: Session 23 logged out. Waiting for processes to exit.
Jul 11 07:59:22.763694 systemd[1]: Started sshd@21-172.24.4.10:22-172.24.4.1:46612.service - OpenSSH per-connection server daemon (172.24.4.1:46612).
Jul 11 07:59:22.767429 systemd-logind[1529]: Removed session 23.
Jul 11 07:59:24.030985 sshd[5801]: Accepted publickey for core from 172.24.4.1 port 46612 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8
Jul 11 07:59:24.036579 sshd-session[5801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:59:24.050837 systemd-logind[1529]: New session 24 of user core.
Jul 11 07:59:24.058502 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 11 07:59:24.854155 sshd[5804]: Connection closed by 172.24.4.1 port 46612
Jul 11 07:59:24.854603 sshd-session[5801]: pam_unix(sshd:session): session closed for user core
Jul 11 07:59:24.860325 systemd-logind[1529]: Session 24 logged out. Waiting for processes to exit.
Jul 11 07:59:24.862471 systemd[1]: sshd@21-172.24.4.10:22-172.24.4.1:46612.service: Deactivated successfully.
Jul 11 07:59:24.865261 systemd[1]: session-24.scope: Deactivated successfully.
Jul 11 07:59:24.869291 systemd-logind[1529]: Removed session 24.
Jul 11 07:59:29.877825 systemd[1]: Started sshd@22-172.24.4.10:22-172.24.4.1:57676.service - OpenSSH per-connection server daemon (172.24.4.1:57676).
Jul 11 07:59:29.956829 containerd[1555]: time="2025-07-11T07:59:29.956769307Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9169a96d7deecc0f6cdd4eeb11d1b98db6bf588f46a4fa798f4bf2e0d98fcdda\" id:\"458bb26477469ac84421e00e3478d6bbc6c4544f97622c44f0639f423fe1a860\" pid:5849 exited_at:{seconds:1752220769 nanos:914309719}"
Jul 11 07:59:29.992196 containerd[1555]: time="2025-07-11T07:59:29.992058859Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2299f2614d5b62ac59c6f083c83dad181e4f4572f0b84fba41aa10768c05b884\" id:\"34aa93adc85698b84b87e586b41bd26e58772a070f4f36151ca5c6467cdb5232\" pid:5836 exited_at:{seconds:1752220769 nanos:989872416}"
Jul 11 07:59:30.864061 sshd[5860]: Accepted publickey for core from 172.24.4.1 port 57676 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8
Jul 11 07:59:30.866563 sshd-session[5860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:59:30.874237 systemd-logind[1529]: New session 25 of user core.
Jul 11 07:59:30.881364 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 11 07:59:31.715926 sshd[5870]: Connection closed by 172.24.4.1 port 57676
Jul 11 07:59:31.716986 sshd-session[5860]: pam_unix(sshd:session): session closed for user core
Jul 11 07:59:31.722561 systemd[1]: sshd@22-172.24.4.10:22-172.24.4.1:57676.service: Deactivated successfully.
Jul 11 07:59:31.728400 systemd[1]: session-25.scope: Deactivated successfully.
Jul 11 07:59:31.730267 systemd-logind[1529]: Session 25 logged out. Waiting for processes to exit.
Jul 11 07:59:31.732960 systemd-logind[1529]: Removed session 25.
Jul 11 07:59:36.730322 systemd[1]: Started sshd@23-172.24.4.10:22-172.24.4.1:41234.service - OpenSSH per-connection server daemon (172.24.4.1:41234).
Jul 11 07:59:37.862629 sshd[5881]: Accepted publickey for core from 172.24.4.1 port 41234 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8
Jul 11 07:59:37.865703 sshd-session[5881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:59:37.878038 systemd-logind[1529]: New session 26 of user core.
Jul 11 07:59:37.883636 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 11 07:59:38.607446 sshd[5884]: Connection closed by 172.24.4.1 port 41234
Jul 11 07:59:38.608306 sshd-session[5881]: pam_unix(sshd:session): session closed for user core
Jul 11 07:59:38.614528 systemd-logind[1529]: Session 26 logged out. Waiting for processes to exit.
Jul 11 07:59:38.614947 systemd[1]: sshd@23-172.24.4.10:22-172.24.4.1:41234.service: Deactivated successfully.
Jul 11 07:59:38.617712 systemd[1]: session-26.scope: Deactivated successfully.
Jul 11 07:59:38.621168 systemd-logind[1529]: Removed session 26.
Jul 11 07:59:43.624337 systemd[1]: Started sshd@24-172.24.4.10:22-172.24.4.1:51244.service - OpenSSH per-connection server daemon (172.24.4.1:51244).
Jul 11 07:59:44.795106 sshd[5897]: Accepted publickey for core from 172.24.4.1 port 51244 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8
Jul 11 07:59:44.796380 sshd-session[5897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:59:44.801942 systemd-logind[1529]: New session 27 of user core.
Jul 11 07:59:44.810448 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 11 07:59:45.518414 sshd[5900]: Connection closed by 172.24.4.1 port 51244
Jul 11 07:59:45.519185 sshd-session[5897]: pam_unix(sshd:session): session closed for user core
Jul 11 07:59:45.524243 systemd-logind[1529]: Session 27 logged out. Waiting for processes to exit.
Jul 11 07:59:45.525922 systemd[1]: sshd@24-172.24.4.10:22-172.24.4.1:51244.service: Deactivated successfully.
Jul 11 07:59:45.528474 systemd[1]: session-27.scope: Deactivated successfully.
Jul 11 07:59:45.532030 systemd-logind[1529]: Removed session 27.
Jul 11 07:59:50.536502 systemd[1]: Started sshd@25-172.24.4.10:22-172.24.4.1:51246.service - OpenSSH per-connection server daemon (172.24.4.1:51246).
Jul 11 07:59:51.317458 containerd[1555]: time="2025-07-11T07:59:51.317398563Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b64067bf9aa389ef32a0a42fccbd24435765f70308a9933d508c58040e2d23bc\" id:\"eec44bf07c8800e9f35b1cd47900b1af4447dea4fc57e7e843d529ed49bad357\" pid:5931 exited_at:{seconds:1752220791 nanos:315767605}"
Jul 11 07:59:51.642108 sshd[5915]: Accepted publickey for core from 172.24.4.1 port 51246 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8
Jul 11 07:59:51.643375 sshd-session[5915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:59:51.651030 systemd-logind[1529]: New session 28 of user core.
Jul 11 07:59:51.656470 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 11 07:59:52.323857 sshd[5942]: Connection closed by 172.24.4.1 port 51246
Jul 11 07:59:52.325326 sshd-session[5915]: pam_unix(sshd:session): session closed for user core
Jul 11 07:59:52.331696 systemd[1]: sshd@25-172.24.4.10:22-172.24.4.1:51246.service: Deactivated successfully.
Jul 11 07:59:52.336406 systemd[1]: session-28.scope: Deactivated successfully.
Jul 11 07:59:52.339815 systemd-logind[1529]: Session 28 logged out. Waiting for processes to exit.
Jul 11 07:59:52.341851 systemd-logind[1529]: Removed session 28.
Jul 11 07:59:57.369114 systemd[1]: Started sshd@26-172.24.4.10:22-172.24.4.1:47176.service - OpenSSH per-connection server daemon (172.24.4.1:47176).
Jul 11 07:59:58.516566 sshd[5957]: Accepted publickey for core from 172.24.4.1 port 47176 ssh2: RSA SHA256:TBN55DiYZqw18j7dYGDcmcvUUHrSu+zIHZu2SkqTFY8
Jul 11 07:59:58.521466 sshd-session[5957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:59:58.542625 systemd-logind[1529]: New session 29 of user core.
Jul 11 07:59:58.559450 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 11 07:59:59.378227 sshd[5960]: Connection closed by 172.24.4.1 port 47176
Jul 11 07:59:59.377524 sshd-session[5957]: pam_unix(sshd:session): session closed for user core
Jul 11 07:59:59.392532 systemd[1]: sshd@26-172.24.4.10:22-172.24.4.1:47176.service: Deactivated successfully.
Jul 11 07:59:59.409186 systemd[1]: session-29.scope: Deactivated successfully.
Jul 11 07:59:59.412249 systemd-logind[1529]: Session 29 logged out. Waiting for processes to exit.
Jul 11 07:59:59.418966 systemd-logind[1529]: Removed session 29.
Jul 11 07:59:59.844827 containerd[1555]: time="2025-07-11T07:59:59.844750706Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9169a96d7deecc0f6cdd4eeb11d1b98db6bf588f46a4fa798f4bf2e0d98fcdda\" id:\"3425dfe03e6150360eb8a1562dc6f4cb358b5161ae17171549cf3897b6463b98\" pid:5995 exited_at:{seconds:1752220799 nanos:844061682}"
Jul 11 07:59:59.927788 containerd[1555]: time="2025-07-11T07:59:59.927703985Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2299f2614d5b62ac59c6f083c83dad181e4f4572f0b84fba41aa10768c05b884\" id:\"b428872e87f834fb2b44786eb4d31a0b972fe7e36ff157f59a779bab2240d2bf\" pid:5993 exited_at:{seconds:1752220799 nanos:927104088}"
Jul 11 08:00:03.431257 containerd[1555]: time="2025-07-11T08:00:03.431165041Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2299f2614d5b62ac59c6f083c83dad181e4f4572f0b84fba41aa10768c05b884\" id:\"37ec4cc43b8c1c5e1d033b2ea028a8e90500d3789b4213c8347d8bff268d3f1f\" pid:6026 exited_at:{seconds:1752220803 nanos:430504310}"