Jul 10 07:52:09.193052 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jul 10 03:48:39 -00 2025
Jul 10 07:52:09.193157 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6f690b83334156407a81e8d4e91333490630194c4657a5a1ae6bc26eb28e6a0b
Jul 10 07:52:09.193170 kernel: BIOS-provided physical RAM map:
Jul 10 07:52:09.193185 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 10 07:52:09.193194 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 10 07:52:09.193203 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 10 07:52:09.193214 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jul 10 07:52:09.193242 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jul 10 07:52:09.193252 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 10 07:52:09.193261 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 10 07:52:09.193270 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jul 10 07:52:09.193280 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 10 07:52:09.193292 kernel: NX (Execute Disable) protection: active
Jul 10 07:52:09.193301 kernel: APIC: Static calls initialized
Jul 10 07:52:09.193312 kernel: SMBIOS 3.0.0 present.
Jul 10 07:52:09.193322 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jul 10 07:52:09.193337 kernel: DMI: Memory slots populated: 1/1
Jul 10 07:52:09.193349 kernel: Hypervisor detected: KVM
Jul 10 07:52:09.193359 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 10 07:52:09.193369 kernel: kvm-clock: using sched offset of 7086918289 cycles
Jul 10 07:52:09.193379 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 10 07:52:09.193390 kernel: tsc: Detected 1996.249 MHz processor
Jul 10 07:52:09.193400 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 10 07:52:09.193411 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 10 07:52:09.193421 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jul 10 07:52:09.193431 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 10 07:52:09.193444 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 10 07:52:09.193454 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jul 10 07:52:09.193464 kernel: ACPI: Early table checksum verification disabled
Jul 10 07:52:09.193474 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jul 10 07:52:09.193484 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 07:52:09.193494 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 07:52:09.193504 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 07:52:09.193514 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jul 10 07:52:09.193524 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 07:52:09.193537 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 07:52:09.193547 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jul 10 07:52:09.193557 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jul 10 07:52:09.193567 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jul 10 07:52:09.193577 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jul 10 07:52:09.193592 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jul 10 07:52:09.193602 kernel: No NUMA configuration found
Jul 10 07:52:09.193614 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jul 10 07:52:09.193625 kernel: NODE_DATA(0) allocated [mem 0x13fff5dc0-0x13fffcfff]
Jul 10 07:52:09.193635 kernel: Zone ranges:
Jul 10 07:52:09.193646 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 10 07:52:09.193656 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 10 07:52:09.193666 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jul 10 07:52:09.193676 kernel: Device empty
Jul 10 07:52:09.193687 kernel: Movable zone start for each node
Jul 10 07:52:09.193700 kernel: Early memory node ranges
Jul 10 07:52:09.193710 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 10 07:52:09.193720 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jul 10 07:52:09.193773 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jul 10 07:52:09.193784 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jul 10 07:52:09.193794 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 10 07:52:09.193805 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 10 07:52:09.193815 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jul 10 07:52:09.193826 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 10 07:52:09.193839 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 10 07:52:09.193850 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 10 07:52:09.193867 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 10 07:52:09.193878 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 10 07:52:09.193893 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 10 07:52:09.193903 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 10 07:52:09.193914 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 10 07:52:09.193943 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 10 07:52:09.193954 kernel: CPU topo: Max. logical packages: 2
Jul 10 07:52:09.193967 kernel: CPU topo: Max. logical dies: 2
Jul 10 07:52:09.193978 kernel: CPU topo: Max. dies per package: 1
Jul 10 07:52:09.193988 kernel: CPU topo: Max. threads per core: 1
Jul 10 07:52:09.193998 kernel: CPU topo: Num. cores per package: 1
Jul 10 07:52:09.194008 kernel: CPU topo: Num. threads per package: 1
Jul 10 07:52:09.194019 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 10 07:52:09.194029 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 10 07:52:09.194040 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jul 10 07:52:09.194050 kernel: Booting paravirtualized kernel on KVM
Jul 10 07:52:09.194063 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 10 07:52:09.194073 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 10 07:52:09.194084 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 10 07:52:09.194094 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 10 07:52:09.194105 kernel: pcpu-alloc: [0] 0 1
Jul 10 07:52:09.194115 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 10 07:52:09.194127 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6f690b83334156407a81e8d4e91333490630194c4657a5a1ae6bc26eb28e6a0b
Jul 10 07:52:09.194138 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 07:52:09.194150 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 10 07:52:09.194161 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 10 07:52:09.194171 kernel: Fallback order for Node 0: 0
Jul 10 07:52:09.194182 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Jul 10 07:52:09.194192 kernel: Policy zone: Normal
Jul 10 07:52:09.194202 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 07:52:09.194213 kernel: software IO TLB: area num 2.
Jul 10 07:52:09.194223 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 10 07:52:09.194234 kernel: ftrace: allocating 40097 entries in 157 pages
Jul 10 07:52:09.194246 kernel: ftrace: allocated 157 pages with 5 groups
Jul 10 07:52:09.194256 kernel: Dynamic Preempt: voluntary
Jul 10 07:52:09.194267 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 10 07:52:09.194279 kernel: rcu: RCU event tracing is enabled.
Jul 10 07:52:09.194289 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 10 07:52:09.194300 kernel: Trampoline variant of Tasks RCU enabled.
Jul 10 07:52:09.194310 kernel: Rude variant of Tasks RCU enabled.
Jul 10 07:52:09.194320 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 07:52:09.194330 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 07:52:09.194341 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 10 07:52:09.194354 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 07:52:09.194365 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 07:52:09.194376 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 07:52:09.194386 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 10 07:52:09.194397 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 10 07:52:09.194413 kernel: Console: colour VGA+ 80x25
Jul 10 07:52:09.194424 kernel: printk: legacy console [tty0] enabled
Jul 10 07:52:09.194434 kernel: printk: legacy console [ttyS0] enabled
Jul 10 07:52:09.194444 kernel: ACPI: Core revision 20240827
Jul 10 07:52:09.194457 kernel: APIC: Switch to symmetric I/O mode setup
Jul 10 07:52:09.194467 kernel: x2apic enabled
Jul 10 07:52:09.194478 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 10 07:52:09.194488 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 10 07:52:09.194499 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 10 07:52:09.194517 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jul 10 07:52:09.194530 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 10 07:52:09.194541 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 10 07:52:09.194552 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 10 07:52:09.194563 kernel: Spectre V2 : Mitigation: Retpolines
Jul 10 07:52:09.194591 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 10 07:52:09.194605 kernel: Speculative Store Bypass: Vulnerable
Jul 10 07:52:09.194617 kernel: x86/fpu: x87 FPU will use FXSAVE
Jul 10 07:52:09.194627 kernel: Freeing SMP alternatives memory: 32K
Jul 10 07:52:09.194638 kernel: pid_max: default: 32768 minimum: 301
Jul 10 07:52:09.194649 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 10 07:52:09.194663 kernel: landlock: Up and running.
Jul 10 07:52:09.194674 kernel: SELinux: Initializing.
Jul 10 07:52:09.194685 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 07:52:09.194696 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 07:52:09.194707 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jul 10 07:52:09.194718 kernel: Performance Events: AMD PMU driver.
Jul 10 07:52:09.194781 kernel: ... version:                0
Jul 10 07:52:09.194793 kernel: ... bit width:              48
Jul 10 07:52:09.194804 kernel: ... generic registers:      4
Jul 10 07:52:09.194818 kernel: ... value mask:             0000ffffffffffff
Jul 10 07:52:09.194829 kernel: ... max period:             00007fffffffffff
Jul 10 07:52:09.194840 kernel: ... fixed-purpose events:   0
Jul 10 07:52:09.194851 kernel: ... event mask:             000000000000000f
Jul 10 07:52:09.194861 kernel: signal: max sigframe size: 1440
Jul 10 07:52:09.194872 kernel: rcu: Hierarchical SRCU implementation.
Jul 10 07:52:09.194883 kernel: rcu: Max phase no-delay instances is 400.
Jul 10 07:52:09.194894 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 10 07:52:09.194905 kernel: smp: Bringing up secondary CPUs ...
Jul 10 07:52:09.194916 kernel: smpboot: x86: Booting SMP configuration:
Jul 10 07:52:09.194929 kernel: .... node #0, CPUs: #1
Jul 10 07:52:09.194939 kernel: smp: Brought up 1 node, 2 CPUs
Jul 10 07:52:09.194951 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jul 10 07:52:09.194966 kernel: Memory: 3961272K/4193772K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54600K init, 2368K bss, 227296K reserved, 0K cma-reserved)
Jul 10 07:52:09.194979 kernel: devtmpfs: initialized
Jul 10 07:52:09.194991 kernel: x86/mm: Memory block size: 128MB
Jul 10 07:52:09.195002 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 10 07:52:09.195013 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 10 07:52:09.195035 kernel: pinctrl core: initialized pinctrl subsystem
Jul 10 07:52:09.195046 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 10 07:52:09.195057 kernel: audit: initializing netlink subsys (disabled)
Jul 10 07:52:09.195068 kernel: audit: type=2000 audit(1752133924.906:1): state=initialized audit_enabled=0 res=1
Jul 10 07:52:09.195079 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 10 07:52:09.195090 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 10 07:52:09.195100 kernel: cpuidle: using governor menu
Jul 10 07:52:09.195111 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 10 07:52:09.195122 kernel: dca service started, version 1.12.1
Jul 10 07:52:09.195136 kernel: PCI: Using configuration type 1 for base access
Jul 10 07:52:09.195147 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 10 07:52:09.195158 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 10 07:52:09.195169 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 10 07:52:09.195179 kernel: ACPI: Added _OSI(Module Device)
Jul 10 07:52:09.195190 kernel: ACPI: Added _OSI(Processor Device)
Jul 10 07:52:09.195201 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 10 07:52:09.195212 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 10 07:52:09.195222 kernel: ACPI: Interpreter enabled
Jul 10 07:52:09.195233 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 10 07:52:09.195246 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 10 07:52:09.195257 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 10 07:52:09.195268 kernel: PCI: Using E820 reservations for host bridge windows
Jul 10 07:52:09.195279 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 10 07:52:09.195290 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 10 07:52:09.195671 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 10 07:52:09.195915 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 10 07:52:09.196084 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 10 07:52:09.196107 kernel: acpiphp: Slot [3] registered
Jul 10 07:52:09.196119 kernel: acpiphp: Slot [4] registered
Jul 10 07:52:09.196131 kernel: acpiphp: Slot [5] registered
Jul 10 07:52:09.196169 kernel: acpiphp: Slot [6] registered
Jul 10 07:52:09.196181 kernel: acpiphp: Slot [7] registered
Jul 10 07:52:09.196192 kernel: acpiphp: Slot [8] registered
Jul 10 07:52:09.196203 kernel: acpiphp: Slot [9] registered
Jul 10 07:52:09.196213 kernel: acpiphp: Slot [10] registered
Jul 10 07:52:09.196230 kernel: acpiphp: Slot [11] registered
Jul 10 07:52:09.196241 kernel: acpiphp: Slot [12] registered
Jul 10 07:52:09.196251 kernel: acpiphp: Slot [13] registered
Jul 10 07:52:09.196262 kernel: acpiphp: Slot [14] registered
Jul 10 07:52:09.196273 kernel: acpiphp: Slot [15] registered
Jul 10 07:52:09.196284 kernel: acpiphp: Slot [16] registered
Jul 10 07:52:09.196295 kernel: acpiphp: Slot [17] registered
Jul 10 07:52:09.196306 kernel: acpiphp: Slot [18] registered
Jul 10 07:52:09.196317 kernel: acpiphp: Slot [19] registered
Jul 10 07:52:09.196330 kernel: acpiphp: Slot [20] registered
Jul 10 07:52:09.196341 kernel: acpiphp: Slot [21] registered
Jul 10 07:52:09.196357 kernel: acpiphp: Slot [22] registered
Jul 10 07:52:09.196368 kernel: acpiphp: Slot [23] registered
Jul 10 07:52:09.196379 kernel: acpiphp: Slot [24] registered
Jul 10 07:52:09.196390 kernel: acpiphp: Slot [25] registered
Jul 10 07:52:09.196401 kernel: acpiphp: Slot [26] registered
Jul 10 07:52:09.196411 kernel: acpiphp: Slot [27] registered
Jul 10 07:52:09.196422 kernel: acpiphp: Slot [28] registered
Jul 10 07:52:09.196433 kernel: acpiphp: Slot [29] registered
Jul 10 07:52:09.196447 kernel: acpiphp: Slot [30] registered
Jul 10 07:52:09.196458 kernel: acpiphp: Slot [31] registered
Jul 10 07:52:09.196469 kernel: PCI host bridge to bus 0000:00
Jul 10 07:52:09.196666 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 10 07:52:09.196799 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 10 07:52:09.196895 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 10 07:52:09.196984 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 10 07:52:09.197081 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jul 10 07:52:09.197170 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 10 07:52:09.197405 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jul 10 07:52:09.197560 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jul 10 07:52:09.207194 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jul 10 07:52:09.207370 kernel: pci 0000:00:01.1: BAR 4 [io 0xc120-0xc12f]
Jul 10 07:52:09.207493 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Jul 10 07:52:09.207601 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Jul 10 07:52:09.207710 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Jul 10 07:52:09.207869 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Jul 10 07:52:09.208021 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jul 10 07:52:09.208132 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 10 07:52:09.208241 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 10 07:52:09.208446 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jul 10 07:52:09.208563 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jul 10 07:52:09.208675 kernel: pci 0000:00:02.0: BAR 2 [mem 0xc000000000-0xc000003fff 64bit pref]
Jul 10 07:52:09.208810 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jul 10 07:52:09.208923 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jul 10 07:52:09.209121 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 10 07:52:09.209262 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 10 07:52:09.209384 kernel: pci 0000:00:03.0: BAR 0 [io 0xc080-0xc0bf]
Jul 10 07:52:09.209494 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jul 10 07:52:09.209602 kernel: pci 0000:00:03.0: BAR 4 [mem 0xc000004000-0xc000007fff 64bit pref]
Jul 10 07:52:09.209757 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jul 10 07:52:09.209897 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 10 07:52:09.210013 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
Jul 10 07:52:09.210123 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jul 10 07:52:09.210240 kernel: pci 0000:00:04.0: BAR 4 [mem 0xc000008000-0xc00000bfff 64bit pref]
Jul 10 07:52:09.210415 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jul 10 07:52:09.210530 kernel: pci 0000:00:05.0: BAR 0 [io 0xc0c0-0xc0ff]
Jul 10 07:52:09.210640 kernel: pci 0000:00:05.0: BAR 4 [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jul 10 07:52:09.213865 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 10 07:52:09.214046 kernel: pci 0000:00:06.0: BAR 0 [io 0xc100-0xc11f]
Jul 10 07:52:09.214167 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfeb93000-0xfeb93fff]
Jul 10 07:52:09.214274 kernel: pci 0000:00:06.0: BAR 4 [mem 0xc000010000-0xc000013fff 64bit pref]
Jul 10 07:52:09.214289 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 10 07:52:09.214300 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 10 07:52:09.214310 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 10 07:52:09.214321 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 10 07:52:09.214332 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 10 07:52:09.214342 kernel: iommu: Default domain type: Translated
Jul 10 07:52:09.214352 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 10 07:52:09.214367 kernel: PCI: Using ACPI for IRQ routing
Jul 10 07:52:09.214377 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 10 07:52:09.214388 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 10 07:52:09.214398 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jul 10 07:52:09.214504 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 10 07:52:09.214609 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 10 07:52:09.214712 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 10 07:52:09.214743 kernel: vgaarb: loaded
Jul 10 07:52:09.214754 kernel: clocksource: Switched to clocksource kvm-clock
Jul 10 07:52:09.214769 kernel: VFS: Disk quotas dquot_6.6.0
Jul 10 07:52:09.214779 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 10 07:52:09.214789 kernel: pnp: PnP ACPI init
Jul 10 07:52:09.214938 kernel: pnp 00:03: [dma 2]
Jul 10 07:52:09.214956 kernel: pnp: PnP ACPI: found 5 devices
Jul 10 07:52:09.214966 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 10 07:52:09.214976 kernel: NET: Registered PF_INET protocol family
Jul 10 07:52:09.214987 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 10 07:52:09.215001 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 10 07:52:09.215011 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 10 07:52:09.215022 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 10 07:52:09.215031 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 10 07:52:09.215041 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 10 07:52:09.215051 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 07:52:09.215061 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 07:52:09.215071 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 10 07:52:09.215081 kernel: NET: Registered PF_XDP protocol family
Jul 10 07:52:09.215185 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 10 07:52:09.215276 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 10 07:52:09.215366 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 10 07:52:09.215456 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jul 10 07:52:09.215544 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jul 10 07:52:09.215651 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 10 07:52:09.216011 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 10 07:52:09.216031 kernel: PCI: CLS 0 bytes, default 64
Jul 10 07:52:09.216046 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 10 07:52:09.216057 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jul 10 07:52:09.216067 kernel: Initialise system trusted keyrings
Jul 10 07:52:09.216077 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 10 07:52:09.216087 kernel: Key type asymmetric registered
Jul 10 07:52:09.216096 kernel: Asymmetric key parser 'x509' registered
Jul 10 07:52:09.216107 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 10 07:52:09.216117 kernel: io scheduler mq-deadline registered
Jul 10 07:52:09.216126 kernel: io scheduler kyber registered
Jul 10 07:52:09.216138 kernel: io scheduler bfq registered
Jul 10 07:52:09.216148 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 10 07:52:09.216160 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jul 10 07:52:09.216170 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 10 07:52:09.216180 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 10 07:52:09.216190 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 10 07:52:09.216200 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 10 07:52:09.216210 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 10 07:52:09.216220 kernel: random: crng init done
Jul 10 07:52:09.216233 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 10 07:52:09.216243 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 10 07:52:09.216253 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 10 07:52:09.216448 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 10 07:52:09.216474 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 10 07:52:09.216570 kernel: rtc_cmos 00:04: registered as rtc0
Jul 10 07:52:09.216664 kernel: rtc_cmos 00:04: setting system clock to 2025-07-10T07:52:08 UTC (1752133928)
Jul 10 07:52:09.216786 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 10 07:52:09.216807 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 10 07:52:09.216819 kernel: NET: Registered PF_INET6 protocol family
Jul 10 07:52:09.216829 kernel: Segment Routing with IPv6
Jul 10 07:52:09.216838 kernel: In-situ OAM (IOAM) with IPv6
Jul 10 07:52:09.216847 kernel: NET: Registered PF_PACKET protocol family
Jul 10 07:52:09.216857 kernel: Key type dns_resolver registered
Jul 10 07:52:09.216866 kernel: IPI shorthand broadcast: enabled
Jul 10 07:52:09.216875 kernel: sched_clock: Marking stable (4379009328, 187944619)->(4621084954, -54131007)
Jul 10 07:52:09.216884 kernel: registered taskstats version 1
Jul 10 07:52:09.216896 kernel: Loading compiled-in X.509 certificates
Jul 10 07:52:09.216905 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 0b89e0dc22b3b76335f64d75ef999e68b43a7102'
Jul 10 07:52:09.216914 kernel: Demotion targets for Node 0: null
Jul 10 07:52:09.216923 kernel: Key type .fscrypt registered
Jul 10 07:52:09.216932 kernel: Key type fscrypt-provisioning registered
Jul 10 07:52:09.216942 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 10 07:52:09.216951 kernel: ima: Allocated hash algorithm: sha1
Jul 10 07:52:09.216960 kernel: ima: No architecture policies found
Jul 10 07:52:09.216971 kernel: clk: Disabling unused clocks
Jul 10 07:52:09.216980 kernel: Warning: unable to open an initial console.
Jul 10 07:52:09.216990 kernel: Freeing unused kernel image (initmem) memory: 54600K
Jul 10 07:52:09.217000 kernel: Write protecting the kernel read-only data: 24576k
Jul 10 07:52:09.217009 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 10 07:52:09.217018 kernel: Run /init as init process
Jul 10 07:52:09.217027 kernel: with arguments:
Jul 10 07:52:09.217036 kernel: /init
Jul 10 07:52:09.217045 kernel: with environment:
Jul 10 07:52:09.217056 kernel: HOME=/
Jul 10 07:52:09.217065 kernel: TERM=linux
Jul 10 07:52:09.217074 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 10 07:52:09.217110 systemd[1]: Successfully made /usr/ read-only.
Jul 10 07:52:09.217125 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 07:52:09.217136 systemd[1]: Detected virtualization kvm.
Jul 10 07:52:09.217146 systemd[1]: Detected architecture x86-64.
Jul 10 07:52:09.217166 systemd[1]: Running in initrd.
Jul 10 07:52:09.217177 systemd[1]: No hostname configured, using default hostname.
Jul 10 07:52:09.217188 systemd[1]: Hostname set to .
Jul 10 07:52:09.217198 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 07:52:09.217208 systemd[1]: Queued start job for default target initrd.target.
Jul 10 07:52:09.217218 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 07:52:09.217230 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 07:52:09.217241 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 10 07:52:09.217252 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 07:52:09.217262 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 10 07:52:09.217273 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 10 07:52:09.217284 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 10 07:52:09.217294 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 10 07:52:09.217306 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 07:52:09.217317 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 07:52:09.217327 systemd[1]: Reached target paths.target - Path Units.
Jul 10 07:52:09.217336 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 07:52:09.217346 systemd[1]: Reached target swap.target - Swaps.
Jul 10 07:52:09.217356 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 07:52:09.217366 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 07:52:09.217376 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 07:52:09.217386 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 10 07:52:09.217398 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 10 07:52:09.217409 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 07:52:09.217419 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 07:52:09.217429 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 07:52:09.217439 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 07:52:09.217449 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 10 07:52:09.217459 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 07:52:09.217470 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 10 07:52:09.217482 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 10 07:52:09.217493 systemd[1]: Starting systemd-fsck-usr.service... Jul 10 07:52:09.217504 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 07:52:09.217515 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 07:52:09.217525 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 07:52:09.217536 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 10 07:52:09.217594 systemd-journald[211]: Collecting audit messages is disabled. Jul 10 07:52:09.217626 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 10 07:52:09.217636 kernel: Bridge firewalling registered Jul 10 07:52:09.217646 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 07:52:09.217658 systemd-journald[211]: Journal started Jul 10 07:52:09.217697 systemd-journald[211]: Runtime Journal (/run/log/journal/6320a6de3e5a4bb8a8ff0ec3784b0aeb) is 8M, max 78.5M, 70.5M free. Jul 10 07:52:09.220564 systemd[1]: Finished systemd-fsck-usr.service. Jul 10 07:52:09.165647 systemd-modules-load[213]: Inserted module 'overlay' Jul 10 07:52:09.213050 systemd-modules-load[213]: Inserted module 'br_netfilter' Jul 10 07:52:09.227804 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 07:52:09.226864 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jul 10 07:52:09.232779 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 07:52:09.236098 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 07:52:09.244948 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 07:52:09.295275 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 07:52:09.300364 systemd-tmpfiles[229]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 10 07:52:09.337335 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 07:52:09.339561 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 07:52:09.352476 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 07:52:09.359007 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 10 07:52:09.364024 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 07:52:09.372100 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 07:52:09.392965 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 07:52:09.398842 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 07:52:09.404014 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jul 10 07:52:09.431696 dracut-cmdline[255]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6f690b83334156407a81e8d4e91333490630194c4657a5a1ae6bc26eb28e6a0b Jul 10 07:52:09.438132 systemd-resolved[241]: Positive Trust Anchors: Jul 10 07:52:09.438882 systemd-resolved[241]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 07:52:09.438924 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 07:52:09.449416 systemd-resolved[241]: Defaulting to hostname 'linux'. Jul 10 07:52:09.452505 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 07:52:09.453221 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 07:52:09.534940 kernel: SCSI subsystem initialized Jul 10 07:52:09.545819 kernel: Loading iSCSI transport class v2.0-870. 
Jul 10 07:52:09.559890 kernel: iscsi: registered transport (tcp) Jul 10 07:52:09.584925 kernel: iscsi: registered transport (qla4xxx) Jul 10 07:52:09.585083 kernel: QLogic iSCSI HBA Driver Jul 10 07:52:09.628257 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 07:52:09.642401 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 07:52:09.655154 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 07:52:09.757029 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 10 07:52:09.763815 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 10 07:52:09.874863 kernel: raid6: sse2x4 gen() 5679 MB/s Jul 10 07:52:09.892826 kernel: raid6: sse2x2 gen() 14632 MB/s Jul 10 07:52:09.911226 kernel: raid6: sse2x1 gen() 10126 MB/s Jul 10 07:52:09.911335 kernel: raid6: using algorithm sse2x2 gen() 14632 MB/s Jul 10 07:52:09.930385 kernel: raid6: .... xor() 9285 MB/s, rmw enabled Jul 10 07:52:09.930460 kernel: raid6: using ssse3x2 recovery algorithm Jul 10 07:52:09.955612 kernel: xor: measuring software checksum speed Jul 10 07:52:09.955690 kernel: prefetch64-sse : 18437 MB/sec Jul 10 07:52:09.955828 kernel: generic_sse : 15667 MB/sec Jul 10 07:52:09.958576 kernel: xor: using function: prefetch64-sse (18437 MB/sec) Jul 10 07:52:10.186264 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 10 07:52:10.196008 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 10 07:52:10.199300 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 07:52:10.243589 systemd-udevd[463]: Using default interface naming scheme 'v255'. Jul 10 07:52:10.251519 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 07:52:10.261167 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jul 10 07:52:10.285972 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation Jul 10 07:52:10.329574 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 07:52:10.335652 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 07:52:10.421452 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 07:52:10.430233 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 10 07:52:10.515830 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jul 10 07:52:10.530116 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Jul 10 07:52:10.552873 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 10 07:52:10.552943 kernel: GPT:17805311 != 20971519 Jul 10 07:52:10.552967 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 10 07:52:10.554505 kernel: GPT:17805311 != 20971519 Jul 10 07:52:10.555823 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 10 07:52:10.562320 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 07:52:10.572088 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 07:52:10.575105 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 07:52:10.577362 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 07:52:10.580830 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 07:52:10.582149 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 10 07:52:10.588958 kernel: libata version 3.00 loaded. 
Jul 10 07:52:10.592225 kernel: ata_piix 0000:00:01.1: version 2.13 Jul 10 07:52:10.597759 kernel: scsi host0: ata_piix Jul 10 07:52:10.606157 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 10 07:52:10.606225 kernel: scsi host1: ata_piix Jul 10 07:52:10.613947 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 lpm-pol 0 Jul 10 07:52:10.614024 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 lpm-pol 0 Jul 10 07:52:10.689317 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 10 07:52:10.712290 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 07:52:10.731191 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 10 07:52:10.741982 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 10 07:52:10.750701 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 10 07:52:10.751347 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 10 07:52:10.755869 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 10 07:52:10.779546 disk-uuid[562]: Primary Header is updated. Jul 10 07:52:10.779546 disk-uuid[562]: Secondary Entries is updated. Jul 10 07:52:10.779546 disk-uuid[562]: Secondary Header is updated. Jul 10 07:52:10.783314 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 10 07:52:10.787231 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 07:52:10.792811 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 07:52:10.790624 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jul 10 07:52:10.791168 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 07:52:10.795905 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 10 07:52:10.834414 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 10 07:52:11.824882 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 07:52:11.828541 disk-uuid[565]: The operation has completed successfully. Jul 10 07:52:11.910238 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 07:52:11.910380 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 10 07:52:11.958955 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 10 07:52:11.989950 sh[587]: Success Jul 10 07:52:12.042918 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 07:52:12.043093 kernel: device-mapper: uevent: version 1.0.3 Jul 10 07:52:12.047601 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 10 07:52:12.066234 kernel: device-mapper: verity: sha256 using shash "sha256-ssse3" Jul 10 07:52:12.124891 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 10 07:52:12.129525 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 10 07:52:12.130859 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 10 07:52:12.171784 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 10 07:52:12.175779 kernel: BTRFS: device fsid 511ba16f-9623-4757-a014-7759f3bcc596 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (600) Jul 10 07:52:12.179043 kernel: BTRFS info (device dm-0): first mount of filesystem 511ba16f-9623-4757-a014-7759f3bcc596 Jul 10 07:52:12.179077 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 10 07:52:12.180820 kernel: BTRFS info (device dm-0): using free-space-tree Jul 10 07:52:12.195140 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 10 07:52:12.199980 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 10 07:52:12.201721 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 10 07:52:12.205046 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 10 07:52:12.214054 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 10 07:52:12.264811 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (632) Jul 10 07:52:12.271056 kernel: BTRFS info (device vda6): first mount of filesystem 6f2f9b2c-a9fa-4b0f-b4c7-59337f1e3021 Jul 10 07:52:12.271107 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 07:52:12.271120 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 07:52:12.284824 kernel: BTRFS info (device vda6): last unmount of filesystem 6f2f9b2c-a9fa-4b0f-b4c7-59337f1e3021 Jul 10 07:52:12.287424 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 10 07:52:12.292347 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 10 07:52:12.385605 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jul 10 07:52:12.391960 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 07:52:12.461829 systemd-networkd[768]: lo: Link UP Jul 10 07:52:12.461843 systemd-networkd[768]: lo: Gained carrier Jul 10 07:52:12.465287 systemd-networkd[768]: Enumeration completed Jul 10 07:52:12.467393 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 07:52:12.467780 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 07:52:12.467787 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 07:52:12.468241 systemd[1]: Reached target network.target - Network. Jul 10 07:52:12.470590 systemd-networkd[768]: eth0: Link UP Jul 10 07:52:12.470596 systemd-networkd[768]: eth0: Gained carrier Jul 10 07:52:12.470609 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 07:52:12.489459 systemd-networkd[768]: eth0: DHCPv4 address 172.24.4.91/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jul 10 07:52:12.534241 ignition[697]: Ignition 2.21.0 Jul 10 07:52:12.534256 ignition[697]: Stage: fetch-offline Jul 10 07:52:12.534289 ignition[697]: no configs at "/usr/lib/ignition/base.d" Jul 10 07:52:12.537424 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 07:52:12.534298 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 10 07:52:12.534399 ignition[697]: parsed url from cmdline: "" Jul 10 07:52:12.534403 ignition[697]: no config URL provided Jul 10 07:52:12.534409 ignition[697]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 07:52:12.539882 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 10 07:52:12.534417 ignition[697]: no config at "/usr/lib/ignition/user.ign" Jul 10 07:52:12.534423 ignition[697]: failed to fetch config: resource requires networking Jul 10 07:52:12.535473 ignition[697]: Ignition finished successfully Jul 10 07:52:12.573414 ignition[781]: Ignition 2.21.0 Jul 10 07:52:12.573429 ignition[781]: Stage: fetch Jul 10 07:52:12.573786 ignition[781]: no configs at "/usr/lib/ignition/base.d" Jul 10 07:52:12.573800 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 10 07:52:12.573913 ignition[781]: parsed url from cmdline: "" Jul 10 07:52:12.573918 ignition[781]: no config URL provided Jul 10 07:52:12.573928 ignition[781]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 07:52:12.573939 ignition[781]: no config at "/usr/lib/ignition/user.ign" Jul 10 07:52:12.574240 ignition[781]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jul 10 07:52:12.575887 ignition[781]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jul 10 07:52:12.577336 ignition[781]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jul 10 07:52:12.776155 ignition[781]: GET result: OK Jul 10 07:52:12.776540 ignition[781]: parsing config with SHA512: cf3f119399d2cebed2f9ad090528b68f1bc79ae09ac2afca28b3dd62dd564e3dfa8485656c3bf202b9dc1ce47791b806063fbcf5a57b8e58421f27aa52dede13 Jul 10 07:52:12.793210 unknown[781]: fetched base config from "system" Jul 10 07:52:12.793881 unknown[781]: fetched base config from "system" Jul 10 07:52:12.794815 ignition[781]: fetch: fetch complete Jul 10 07:52:12.793903 unknown[781]: fetched user config from "openstack" Jul 10 07:52:12.794840 ignition[781]: fetch: fetch passed Jul 10 07:52:12.799868 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 10 07:52:12.795034 ignition[781]: Ignition finished successfully Jul 10 07:52:12.806017 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 10 07:52:12.877468 ignition[787]: Ignition 2.21.0 Jul 10 07:52:12.877506 ignition[787]: Stage: kargs Jul 10 07:52:12.877986 ignition[787]: no configs at "/usr/lib/ignition/base.d" Jul 10 07:52:12.884162 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 10 07:52:12.878017 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 10 07:52:12.880378 ignition[787]: kargs: kargs passed Jul 10 07:52:12.880519 ignition[787]: Ignition finished successfully Jul 10 07:52:12.892489 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 10 07:52:12.952443 ignition[793]: Ignition 2.21.0 Jul 10 07:52:12.952479 ignition[793]: Stage: disks Jul 10 07:52:12.952854 ignition[793]: no configs at "/usr/lib/ignition/base.d" Jul 10 07:52:12.952879 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 10 07:52:12.954702 ignition[793]: disks: disks passed Jul 10 07:52:12.959502 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 10 07:52:12.954836 ignition[793]: Ignition finished successfully Jul 10 07:52:12.962856 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 10 07:52:12.965057 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 10 07:52:12.967867 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 07:52:12.970317 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 07:52:12.973389 systemd[1]: Reached target basic.target - Basic System. Jul 10 07:52:12.979117 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 10 07:52:13.029422 systemd-fsck[801]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jul 10 07:52:13.043546 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 10 07:52:13.048964 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jul 10 07:52:13.273792 kernel: EXT4-fs (vda9): mounted filesystem f2872d8e-bdd9-4186-89ae-300fdf795a28 r/w with ordered data mode. Quota mode: none. Jul 10 07:52:13.275184 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 10 07:52:13.276878 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 10 07:52:13.280262 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 07:52:13.282884 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 10 07:52:13.287909 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 10 07:52:13.288658 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jul 10 07:52:13.293249 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 07:52:13.294144 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 07:52:13.301582 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 10 07:52:13.305435 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 10 07:52:13.317806 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (809) Jul 10 07:52:13.330105 kernel: BTRFS info (device vda6): first mount of filesystem 6f2f9b2c-a9fa-4b0f-b4c7-59337f1e3021 Jul 10 07:52:13.330177 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 07:52:13.330191 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 07:52:13.347158 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 10 07:52:13.410922 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 10 07:52:13.440700 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 07:52:13.446938 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Jul 10 07:52:13.452562 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 07:52:13.457780 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 07:52:13.621524 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 10 07:52:13.625514 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 10 07:52:13.629006 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 10 07:52:13.660099 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 10 07:52:13.667640 kernel: BTRFS info (device vda6): last unmount of filesystem 6f2f9b2c-a9fa-4b0f-b4c7-59337f1e3021 Jul 10 07:52:13.685596 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 10 07:52:13.706473 ignition[927]: INFO : Ignition 2.21.0 Jul 10 07:52:13.706473 ignition[927]: INFO : Stage: mount Jul 10 07:52:13.706473 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 07:52:13.706473 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 10 07:52:13.712136 ignition[927]: INFO : mount: mount passed Jul 10 07:52:13.712796 ignition[927]: INFO : Ignition finished successfully Jul 10 07:52:13.714467 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Jul 10 07:52:13.937410 systemd-networkd[768]: eth0: Gained IPv6LL Jul 10 07:52:14.453783 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 10 07:52:16.465793 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 10 07:52:20.478467 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 10 07:52:20.510217 coreos-metadata[811]: Jul 10 07:52:20.510 WARN failed to locate config-drive, using the metadata service API instead Jul 10 07:52:20.556968 coreos-metadata[811]: Jul 10 07:52:20.556 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 10 07:52:20.579557 coreos-metadata[811]: Jul 10 07:52:20.579 INFO Fetch successful Jul 10 07:52:20.581960 coreos-metadata[811]: Jul 10 07:52:20.581 INFO wrote hostname ci-4391-0-0-n-fdb14ef6d8.novalocal to /sysroot/etc/hostname Jul 10 07:52:20.591162 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jul 10 07:52:20.592379 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jul 10 07:52:20.605623 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 10 07:52:20.662879 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 07:52:20.706856 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (943) Jul 10 07:52:20.718601 kernel: BTRFS info (device vda6): first mount of filesystem 6f2f9b2c-a9fa-4b0f-b4c7-59337f1e3021 Jul 10 07:52:20.718729 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 07:52:20.721561 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 07:52:20.737161 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 10 07:52:20.795795 ignition[961]: INFO : Ignition 2.21.0 Jul 10 07:52:20.795795 ignition[961]: INFO : Stage: files Jul 10 07:52:20.799173 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 07:52:20.799173 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 10 07:52:20.804694 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Jul 10 07:52:20.804694 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 10 07:52:20.804694 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 10 07:52:20.811416 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 10 07:52:20.811416 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 10 07:52:20.815403 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 10 07:52:20.812964 unknown[961]: wrote ssh authorized keys file for user: core Jul 10 07:52:20.819882 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 10 07:52:20.822432 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 10 07:52:20.918397 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 10 07:52:21.388651 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 10 07:52:21.390489 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 10 07:52:21.390489 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 10 
07:52:21.390489 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 10 07:52:21.390489 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 10 07:52:21.390489 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 07:52:21.390489 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 07:52:21.390489 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 07:52:21.406009 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 07:52:21.406009 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 07:52:21.406009 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 07:52:21.406009 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 10 07:52:21.406009 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 10 07:52:21.406009 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 10 07:52:21.406009 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 10 07:52:22.084938 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 10 07:52:23.707180 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 10 07:52:23.709977 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 10 07:52:23.711898 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 07:52:23.720557 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 07:52:23.720557 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 10 07:52:23.720557 ignition[961]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 10 07:52:23.727002 ignition[961]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 10 07:52:23.727002 ignition[961]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 10 07:52:23.727002 ignition[961]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 10 07:52:23.727002 ignition[961]: INFO : files: files passed Jul 10 07:52:23.727002 ignition[961]: INFO : Ignition finished successfully Jul 10 07:52:23.728144 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 10 07:52:23.734995 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 10 07:52:23.740047 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jul 10 07:52:23.764272 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 10 07:52:23.764592 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 10 07:52:23.775770 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 07:52:23.777141 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 07:52:23.778914 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 10 07:52:23.779536 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 07:52:23.782236 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 10 07:52:23.786947 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 10 07:52:23.850645 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 10 07:52:23.851021 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 10 07:52:23.854355 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 10 07:52:23.856919 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 10 07:52:23.860289 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 10 07:52:23.863865 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 10 07:52:23.923173 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 10 07:52:23.928930 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 10 07:52:23.983393 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 10 07:52:23.985216 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jul 10 07:52:23.988439 systemd[1]: Stopped target timers.target - Timer Units. Jul 10 07:52:23.991488 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 10 07:52:23.992041 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 10 07:52:23.994789 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 10 07:52:23.996715 systemd[1]: Stopped target basic.target - Basic System. Jul 10 07:52:23.999792 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 10 07:52:24.002440 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 07:52:24.005605 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 10 07:52:24.009116 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 10 07:52:24.010770 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 10 07:52:24.013694 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 07:52:24.017110 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 10 07:52:24.020110 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 10 07:52:24.022924 systemd[1]: Stopped target swap.target - Swaps. Jul 10 07:52:24.025889 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 10 07:52:24.026366 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 10 07:52:24.029884 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 10 07:52:24.031702 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 07:52:24.034211 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 10 07:52:24.034642 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 07:52:24.036810 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Jul 10 07:52:24.037303 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 10 07:52:24.040807 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 10 07:52:24.041266 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 07:52:24.044922 systemd[1]: ignition-files.service: Deactivated successfully. Jul 10 07:52:24.045228 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 10 07:52:24.051318 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 10 07:52:24.056279 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 10 07:52:24.058007 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 10 07:52:24.060110 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 07:52:24.065968 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 10 07:52:24.066300 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 07:52:24.076104 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 10 07:52:24.076191 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 10 07:52:24.092210 ignition[1015]: INFO : Ignition 2.21.0 Jul 10 07:52:24.093825 ignition[1015]: INFO : Stage: umount Jul 10 07:52:24.093825 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 07:52:24.093825 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 10 07:52:24.096320 ignition[1015]: INFO : umount: umount passed Jul 10 07:52:24.096320 ignition[1015]: INFO : Ignition finished successfully Jul 10 07:52:24.099500 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 10 07:52:24.100662 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 10 07:52:24.100841 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jul 10 07:52:24.102480 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 10 07:52:24.102589 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 10 07:52:24.103233 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 10 07:52:24.103278 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 10 07:52:24.104269 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 10 07:52:24.104335 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 10 07:52:24.105339 systemd[1]: Stopped target network.target - Network. Jul 10 07:52:24.106355 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 10 07:52:24.106416 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 07:52:24.107385 systemd[1]: Stopped target paths.target - Path Units. Jul 10 07:52:24.108323 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 10 07:52:24.111850 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 07:52:24.112593 systemd[1]: Stopped target slices.target - Slice Units. Jul 10 07:52:24.113800 systemd[1]: Stopped target sockets.target - Socket Units. Jul 10 07:52:24.114831 systemd[1]: iscsid.socket: Deactivated successfully. Jul 10 07:52:24.114884 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 07:52:24.115836 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 07:52:24.115880 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 07:52:24.116809 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 10 07:52:24.116862 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 10 07:52:24.117806 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 10 07:52:24.117880 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Jul 10 07:52:24.118931 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 10 07:52:24.120348 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 10 07:52:24.133093 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 10 07:52:24.133969 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 10 07:52:24.136685 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 10 07:52:24.137092 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 07:52:24.137344 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 10 07:52:24.141077 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 10 07:52:24.141969 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 10 07:52:24.142558 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 10 07:52:24.142607 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 10 07:52:24.144683 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 10 07:52:24.146036 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 10 07:52:24.146116 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 07:52:24.148230 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 07:52:24.148316 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 07:52:24.150331 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 10 07:52:24.150407 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 10 07:52:24.150974 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 10 07:52:24.151042 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 10 07:52:24.153187 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 07:52:24.155014 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 10 07:52:24.155091 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 10 07:52:24.161562 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 10 07:52:24.161961 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 07:52:24.163523 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 10 07:52:24.163565 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 10 07:52:24.165406 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 10 07:52:24.165438 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 07:52:24.166567 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 10 07:52:24.166621 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 10 07:52:24.168559 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 10 07:52:24.168605 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 10 07:52:24.169756 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 07:52:24.169805 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 07:52:24.171720 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 10 07:52:24.172988 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 10 07:52:24.173040 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 07:52:24.175208 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Jul 10 07:52:24.175254 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 07:52:24.177776 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 10 07:52:24.177853 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 07:52:24.181125 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 10 07:52:24.181205 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 07:52:24.181828 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 07:52:24.181881 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 07:52:24.184395 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 10 07:52:24.184454 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 10 07:52:24.184526 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 10 07:52:24.184601 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 10 07:52:24.187154 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 10 07:52:24.187253 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 10 07:52:24.193852 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 10 07:52:24.193972 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 10 07:52:24.351096 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 07:52:24.351376 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 10 07:52:24.355131 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 10 07:52:24.356831 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Jul 10 07:52:24.356981 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 10 07:52:24.362009 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 10 07:52:24.446842 systemd[1]: Switching root. Jul 10 07:52:24.518022 systemd-journald[211]: Journal stopped Jul 10 07:52:26.218603 systemd-journald[211]: Received SIGTERM from PID 1 (systemd). Jul 10 07:52:26.218675 kernel: SELinux: policy capability network_peer_controls=1 Jul 10 07:52:26.218696 kernel: SELinux: policy capability open_perms=1 Jul 10 07:52:26.218709 kernel: SELinux: policy capability extended_socket_class=1 Jul 10 07:52:26.218720 kernel: SELinux: policy capability always_check_network=0 Jul 10 07:52:26.218748 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 10 07:52:26.218761 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 10 07:52:26.218772 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 10 07:52:26.218783 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 10 07:52:26.218797 kernel: SELinux: policy capability userspace_initial_context=0 Jul 10 07:52:26.218809 kernel: audit: type=1403 audit(1752133945.097:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 10 07:52:26.218826 systemd[1]: Successfully loaded SELinux policy in 108.457ms. Jul 10 07:52:26.218846 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.331ms. Jul 10 07:52:26.218860 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 07:52:26.218873 systemd[1]: Detected virtualization kvm. Jul 10 07:52:26.218885 systemd[1]: Detected architecture x86-64. Jul 10 07:52:26.218898 systemd[1]: Detected first boot. 
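The long `+PAM +AUDIT +SELINUX -APPARMOR …` string in the systemd 256.8 version line above lists compile-time features: `+` means built in, `-` means compiled out. A quick shell sketch to split such a string (abbreviated here) into enabled and disabled lists:

```shell
# Split a systemd feature string into enabled (+FOO) and disabled (-FOO) sets.
# The string below is a shortened copy of the one in the log.
features="+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT"
enabled=$(echo "$features" | tr ' ' '\n' | grep '^+' | tr -d '+' | tr '\n' ' ')
disabled=$(echo "$features" | tr ' ' '\n' | grep '^-' | tr -d '-' | tr '\n' ' ')
echo "enabled:  $enabled"
echo "disabled: $disabled"
```

Matching this against the log confirms, for example, that this build enforces SELinux (`+SELINUX`) but has no AppArmor support (`-APPARMOR`).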
Jul 10 07:52:26.218912 systemd[1]: Hostname set to . Jul 10 07:52:26.218924 systemd[1]: Initializing machine ID from VM UUID. Jul 10 07:52:26.218944 zram_generator::config[1059]: No configuration found. Jul 10 07:52:26.218957 kernel: Guest personality initialized and is inactive Jul 10 07:52:26.218973 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 10 07:52:26.218985 kernel: Initialized host personality Jul 10 07:52:26.218998 kernel: NET: Registered PF_VSOCK protocol family Jul 10 07:52:26.219009 systemd[1]: Populated /etc with preset unit settings. Jul 10 07:52:26.219023 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 10 07:52:26.219038 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 10 07:52:26.219050 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 10 07:52:26.219063 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 10 07:52:26.219075 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 10 07:52:26.219088 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 10 07:52:26.219100 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 10 07:52:26.219112 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 10 07:52:26.219137 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 10 07:52:26.219185 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 10 07:52:26.219231 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 10 07:52:26.219273 systemd[1]: Created slice user.slice - User and Session Slice. Jul 10 07:52:26.219311 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 10 07:52:26.219358 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 07:52:26.219396 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 10 07:52:26.219425 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 10 07:52:26.219441 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 10 07:52:26.219454 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 07:52:26.219469 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 10 07:52:26.219481 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 07:52:26.219494 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 07:52:26.219506 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 10 07:52:26.219518 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 10 07:52:26.219530 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 10 07:52:26.219544 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 10 07:52:26.219556 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 07:52:26.219569 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 07:52:26.219580 systemd[1]: Reached target slices.target - Slice Units. Jul 10 07:52:26.219592 systemd[1]: Reached target swap.target - Swaps. Jul 10 07:52:26.219604 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 10 07:52:26.219616 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 10 07:52:26.219628 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. 
Jul 10 07:52:26.219640 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 10 07:52:26.219652 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 07:52:26.219666 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 07:52:26.219680 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 10 07:52:26.219693 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 10 07:52:26.219705 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 10 07:52:26.219716 systemd[1]: Mounting media.mount - External Media Directory... Jul 10 07:52:26.220122 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 07:52:26.220140 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 10 07:52:26.220153 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 10 07:52:26.220170 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 10 07:52:26.220184 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 07:52:26.220198 systemd[1]: Reached target machines.target - Containers. Jul 10 07:52:26.220215 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 10 07:52:26.220228 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 07:52:26.220241 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 07:52:26.220254 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 10 07:52:26.220267 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jul 10 07:52:26.220281 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 07:52:26.220296 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 07:52:26.220309 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 10 07:52:26.220322 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 07:52:26.220335 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 07:52:26.220348 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 10 07:52:26.220361 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 10 07:52:26.220377 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 10 07:52:26.220390 systemd[1]: Stopped systemd-fsck-usr.service. Jul 10 07:52:26.220404 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 07:52:26.220417 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 07:52:26.220430 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 07:52:26.220442 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 07:52:26.220455 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 10 07:52:26.220467 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 10 07:52:26.220481 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 07:52:26.220494 systemd[1]: verity-setup.service: Deactivated successfully. Jul 10 07:52:26.220506 systemd[1]: Stopped verity-setup.service. 
Jul 10 07:52:26.220518 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 07:52:26.220532 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 10 07:52:26.220545 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 10 07:52:26.220557 systemd[1]: Mounted media.mount - External Media Directory. Jul 10 07:52:26.220569 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 10 07:52:26.220582 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 10 07:52:26.220594 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 10 07:52:26.220605 kernel: loop: module loaded Jul 10 07:52:26.220617 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 07:52:26.220629 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 10 07:52:26.220647 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 10 07:52:26.220660 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 07:52:26.220673 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 07:52:26.220686 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 07:52:26.220725 systemd-journald[1142]: Collecting audit messages is disabled. Jul 10 07:52:26.220804 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 07:52:26.220818 kernel: fuse: init (API version 7.41) Jul 10 07:52:26.220835 systemd-journald[1142]: Journal started Jul 10 07:52:26.220866 systemd-journald[1142]: Runtime Journal (/run/log/journal/6320a6de3e5a4bb8a8ff0ec3784b0aeb) is 8M, max 78.5M, 70.5M free. Jul 10 07:52:25.870274 systemd[1]: Queued start job for default target multi-user.target. Jul 10 07:52:25.890955 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Jul 10 07:52:25.891462 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 10 07:52:26.223767 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 07:52:26.225950 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 07:52:26.226847 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 07:52:26.228167 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 07:52:26.230067 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 10 07:52:26.239137 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 10 07:52:26.240147 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 07:52:26.240340 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 10 07:52:26.241380 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 07:52:26.249710 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 07:52:26.252918 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 10 07:52:26.255854 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 10 07:52:26.256448 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 07:52:26.256484 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 07:52:26.259790 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 10 07:52:26.263857 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 10 07:52:26.265097 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 07:52:26.273901 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jul 10 07:52:26.279306 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 10 07:52:26.279995 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 07:52:26.281839 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 10 07:52:26.282415 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 07:52:26.285604 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 07:52:26.288901 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 10 07:52:26.293909 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 07:52:26.298153 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 10 07:52:26.298958 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 10 07:52:26.333096 systemd-journald[1142]: Time spent on flushing to /var/log/journal/6320a6de3e5a4bb8a8ff0ec3784b0aeb is 88.362ms for 970 entries. Jul 10 07:52:26.333096 systemd-journald[1142]: System Journal (/var/log/journal/6320a6de3e5a4bb8a8ff0ec3784b0aeb) is 8M, max 584.8M, 576.8M free. Jul 10 07:52:26.463808 systemd-journald[1142]: Received client request to flush runtime journal. Jul 10 07:52:26.463873 kernel: loop0: detected capacity change from 0 to 221472 Jul 10 07:52:26.463898 kernel: ACPI: bus type drm_connector registered Jul 10 07:52:26.340286 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 07:52:26.340583 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 07:52:26.343860 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Jul 10 07:52:26.349237 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 10 07:52:26.350646 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 10 07:52:26.355918 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 10 07:52:26.374908 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 07:52:26.413447 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 07:52:26.427673 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Jul 10 07:52:26.427687 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Jul 10 07:52:26.432634 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 07:52:26.438850 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 10 07:52:26.469957 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 10 07:52:26.482792 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 07:52:26.486845 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 10 07:52:26.508508 kernel: loop1: detected capacity change from 0 to 114000 Jul 10 07:52:26.532185 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 10 07:52:26.535966 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 07:52:26.557898 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. Jul 10 07:52:26.557919 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. Jul 10 07:52:26.563442 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 10 07:52:26.574766 kernel: loop2: detected capacity change from 0 to 8 Jul 10 07:52:26.592760 kernel: loop3: detected capacity change from 0 to 146488 Jul 10 07:52:26.649896 kernel: loop4: detected capacity change from 0 to 221472 Jul 10 07:52:26.710757 kernel: loop5: detected capacity change from 0 to 114000 Jul 10 07:52:26.789768 kernel: loop6: detected capacity change from 0 to 8 Jul 10 07:52:26.805761 kernel: loop7: detected capacity change from 0 to 146488 Jul 10 07:52:26.865581 (sd-merge)[1226]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jul 10 07:52:26.866408 (sd-merge)[1226]: Merged extensions into '/usr'. Jul 10 07:52:26.874193 systemd[1]: Reload requested from client PID 1192 ('systemd-sysext') (unit systemd-sysext.service)... Jul 10 07:52:26.874210 systemd[1]: Reloading... Jul 10 07:52:26.969770 zram_generator::config[1249]: No configuration found. Jul 10 07:52:27.197793 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 07:52:27.309896 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 07:52:27.309972 systemd[1]: Reloading finished in 435 ms. Jul 10 07:52:27.321835 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 10 07:52:27.335630 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 07:52:27.341671 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 10 07:52:27.344228 systemd[1]: Starting ensure-sysext.service... Jul 10 07:52:27.351856 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 07:52:27.367195 systemd[1]: Reload requested from client PID 1309 ('systemctl') (unit ensure-sysext.service)... Jul 10 07:52:27.367213 systemd[1]: Reloading... 
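The `(sd-merge)` lines above show systemd-sysext overlaying the four extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack') onto /usr, which is what triggers the subsequent daemon reload. The key requirement for a sysext is an extension-release file whose `ID` matches the host OS. A minimal sketch of the on-disk layout of a directory-based extension (names and contents hypothetical, modeled on the 'kubernetes' extension in the log):

```shell
# Build the skeleton of a directory-based sysext named "kubernetes".
# systemd-sysext merges $ext/usr over the host /usr once the
# extension-release ID matches the host's /etc/os-release ID.
ext=$(mktemp -d)/kubernetes
mkdir -p "$ext/usr/lib/extension-release.d" "$ext/usr/bin"
cat > "$ext/usr/lib/extension-release.d/extension-release.kubernetes" <<'EOF'
ID=flatcar
SYSEXT_LEVEL=1.0
EOF
touch "$ext/usr/bin/kubelet"  # payload made visible under /usr after merge
```

On a running system, placing such a tree (or a .raw image, like the one Ignition downloaded earlier) under /var/lib/extensions/ and running `systemd-sysext merge` would produce the same kind of overlay.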
Jul 10 07:52:27.392074 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 10 07:52:27.392113 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 10 07:52:27.392379 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 07:52:27.392636 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 10 07:52:27.393438 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 07:52:27.393716 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. Jul 10 07:52:27.393801 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. Jul 10 07:52:27.394117 systemd-udevd[1307]: Using default interface naming scheme 'v255'. Jul 10 07:52:27.403116 systemd-tmpfiles[1310]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 07:52:27.403128 systemd-tmpfiles[1310]: Skipping /boot Jul 10 07:52:27.411526 systemd-tmpfiles[1310]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 07:52:27.411539 systemd-tmpfiles[1310]: Skipping /boot Jul 10 07:52:27.481538 zram_generator::config[1341]: No configuration found. Jul 10 07:52:27.541332 ldconfig[1185]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 07:52:27.761232 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
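The "Duplicate line for path" warnings above are benign: two tmpfiles.d snippets declare an entry for the same path, and systemd-tmpfiles keeps the first match it reads. A sketch of what such colliding entries look like (the log names the files and paths, but not their contents, so mode and ownership here are hypothetical):

```
# /usr/lib/tmpfiles.d/nfs-utils.conf (illustrative entries; fields are
# Type Path Mode User Group Age Argument)
d /var/lib/nfs/sm      0700 root root -
d /var/lib/nfs/sm.bak  0700 root root -
```

A second snippet shipping a `d /var/lib/nfs/sm …` line of its own would produce exactly the warning seen in the log.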
Jul 10 07:52:27.841760 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jul 10 07:52:27.844774 kernel: mousedev: PS/2 mouse device common for all mice
Jul 10 07:52:27.851786 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 10 07:52:27.866772 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 10 07:52:27.875755 kernel: ACPI: button: Power Button [PWRF]
Jul 10 07:52:27.921095 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 10 07:52:27.921289 systemd[1]: Reloading finished in 553 ms.
Jul 10 07:52:27.935698 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 07:52:27.937755 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 10 07:52:27.949565 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 07:52:27.999641 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 07:52:28.001333 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 10 07:52:28.004320 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 10 07:52:28.005119 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 07:52:28.006390 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 07:52:28.016981 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 07:52:28.020002 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 07:52:28.020725 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 07:52:28.020884 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 07:52:28.033971 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 10 07:52:28.042440 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 10 07:52:28.049877 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 07:52:28.061185 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 10 07:52:28.063852 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 07:52:28.068150 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 07:52:28.068408 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 07:52:28.101920 systemd[1]: Finished ensure-sysext.service.
Jul 10 07:52:28.110631 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 07:52:28.111896 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 07:52:28.123822 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jul 10 07:52:28.123892 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jul 10 07:52:28.127821 kernel: Console: switching to colour dummy device 80x25
Jul 10 07:52:28.127456 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 07:52:28.135785 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jul 10 07:52:28.135862 kernel: [drm] features: -context_init
Jul 10 07:52:28.135882 kernel: [drm] number of scanouts: 1
Jul 10 07:52:28.138581 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 07:52:28.138849 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 07:52:28.138888 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 07:52:28.143147 kernel: [drm] number of cap sets: 0
Jul 10 07:52:28.142276 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 10 07:52:28.144943 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 07:52:28.147532 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jul 10 07:52:28.146810 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 07:52:28.147252 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 07:52:28.147476 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 07:52:28.162587 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 10 07:52:28.165819 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 10 07:52:28.180394 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 10 07:52:28.180800 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 07:52:28.186174 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 10 07:52:28.186357 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 10 07:52:28.194994 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 10 07:52:28.201666 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 07:52:28.201947 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 07:52:28.208831 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 10 07:52:28.219349 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 07:52:28.220015 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 07:52:28.231953 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 07:52:28.239503 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 07:52:28.240375 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 07:52:28.255127 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 10 07:52:28.255492 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 07:52:28.255651 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 07:52:28.256693 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 07:52:28.258885 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 10 07:52:28.276688 augenrules[1498]: No rules
Jul 10 07:52:28.277338 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 10 07:52:28.278393 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 10 07:52:28.281317 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 10 07:52:28.298850 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 10 07:52:28.352510 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 07:52:28.405537 systemd-resolved[1453]: Positive Trust Anchors:
Jul 10 07:52:28.405559 systemd-resolved[1453]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 07:52:28.405604 systemd-resolved[1453]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 07:52:28.413878 systemd-resolved[1453]: Using system hostname 'ci-4391-0-0-n-fdb14ef6d8.novalocal'.
Jul 10 07:52:28.415914 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 07:52:28.416097 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 07:52:28.417132 systemd-networkd[1452]: lo: Link UP
Jul 10 07:52:28.417427 systemd-networkd[1452]: lo: Gained carrier
Jul 10 07:52:28.419014 systemd-networkd[1452]: Enumeration completed
Jul 10 07:52:28.419179 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 10 07:52:28.419628 systemd-networkd[1452]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 07:52:28.419702 systemd-networkd[1452]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 10 07:52:28.419773 systemd[1]: Reached target network.target - Network.
Jul 10 07:52:28.420696 systemd-networkd[1452]: eth0: Link UP
Jul 10 07:52:28.420961 systemd-networkd[1452]: eth0: Gained carrier
Jul 10 07:52:28.421038 systemd-networkd[1452]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 07:52:28.422777 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 10 07:52:28.423968 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 10 07:52:28.433446 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 10 07:52:28.433852 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 10 07:52:28.434101 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 10 07:52:28.434276 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 10 07:52:28.434344 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 10 07:52:28.434411 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 10 07:52:28.434464 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 10 07:52:28.434489 systemd[1]: Reached target paths.target - Path Units.
Jul 10 07:52:28.434543 systemd[1]: Reached target time-set.target - System Time Set.
Jul 10 07:52:28.434725 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 10 07:52:28.434891 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 10 07:52:28.434951 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 07:52:28.436633 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 10 07:52:28.436819 systemd-networkd[1452]: eth0: DHCPv4 address 172.24.4.91/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jul 10 07:52:28.439026 systemd-timesyncd[1466]: Network configuration changed, trying to establish connection.
Jul 10 07:52:28.439173 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 10 07:52:28.442629 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 10 07:52:28.443691 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 10 07:52:28.443830 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 10 07:52:28.451397 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 10 07:52:28.451891 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 10 07:52:28.452837 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 10 07:52:28.453802 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 07:52:28.453881 systemd[1]: Reached target basic.target - Basic System.
Jul 10 07:52:28.454005 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 10 07:52:28.454033 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 10 07:52:28.455065 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 10 07:52:28.458619 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 10 07:52:28.460969 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 10 07:52:28.466030 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 10 07:52:28.470009 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 10 07:52:28.471567 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 10 07:52:28.471691 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 10 07:52:28.475139 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 10 07:52:28.479240 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 10 07:52:28.486079 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 10 07:52:28.492406 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 10 07:52:28.502159 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 10 07:52:28.502258 jq[1526]: false
Jul 10 07:52:28.502986 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Refreshing passwd entry cache
Jul 10 07:52:28.503223 oslogin_cache_refresh[1528]: Refreshing passwd entry cache
Jul 10 07:52:28.505395 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 10 07:52:28.508987 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 10 07:52:28.510036 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 10 07:52:28.510852 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 10 07:52:28.511522 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Failure getting users, quitting
Jul 10 07:52:28.511584 oslogin_cache_refresh[1528]: Failure getting users, quitting
Jul 10 07:52:28.511664 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 10 07:52:28.511702 oslogin_cache_refresh[1528]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 10 07:52:28.511851 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Refreshing group entry cache
Jul 10 07:52:28.511896 oslogin_cache_refresh[1528]: Refreshing group entry cache
Jul 10 07:52:28.514906 systemd[1]: Starting update-engine.service - Update Engine...
Jul 10 07:52:28.521995 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Failure getting groups, quitting
Jul 10 07:52:28.521995 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 10 07:52:28.521050 oslogin_cache_refresh[1528]: Failure getting groups, quitting
Jul 10 07:52:28.521067 oslogin_cache_refresh[1528]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 10 07:52:28.522370 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 10 07:52:28.523760 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 10 07:52:28.532835 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 10 07:52:28.533166 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 10 07:52:28.535914 extend-filesystems[1527]: Found /dev/vda6
Jul 10 07:52:28.536905 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 10 07:52:28.537322 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 10 07:52:28.537517 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 10 07:52:28.538934 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 10 07:52:28.539118 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 10 07:52:28.548609 extend-filesystems[1527]: Found /dev/vda9
Jul 10 07:52:28.554808 extend-filesystems[1527]: Checking size of /dev/vda9
Jul 10 07:52:28.567633 systemd[1]: motdgen.service: Deactivated successfully.
Jul 10 07:52:28.568202 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 10 07:52:28.571510 jq[1541]: true
Jul 10 07:52:28.593844 extend-filesystems[1527]: Resized partition /dev/vda9
Jul 10 07:52:28.594486 (ntainerd)[1564]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 10 07:52:28.602486 update_engine[1536]: I20250710 07:52:28.600247 1536 main.cc:92] Flatcar Update Engine starting
Jul 10 07:52:28.615442 jq[1565]: true
Jul 10 07:52:28.615794 extend-filesystems[1572]: resize2fs 1.47.2 (1-Jan-2025)
Jul 10 07:52:28.628054 tar[1549]: linux-amd64/helm
Jul 10 07:52:28.650324 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Jul 10 07:52:28.672895 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Jul 10 07:52:29.115054 systemd-resolved[1453]: Clock change detected. Flushing caches.
Jul 10 07:52:29.118411 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 10 07:52:29.160728 extend-filesystems[1572]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 10 07:52:29.160728 extend-filesystems[1572]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 10 07:52:29.160728 extend-filesystems[1572]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Jul 10 07:52:29.117523 dbus-daemon[1524]: [system] SELinux support is enabled
Jul 10 07:52:29.185629 update_engine[1536]: I20250710 07:52:29.123035 1536 update_check_scheduler.cc:74] Next update check in 4m53s
Jul 10 07:52:29.124148 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 10 07:52:29.124178 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 10 07:52:29.124333 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 10 07:52:29.124353 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 10 07:52:29.125855 systemd[1]: Started update-engine.service - Update Engine.
Jul 10 07:52:29.130082 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 10 07:52:29.156195 systemd-logind[1534]: New seat seat0.
Jul 10 07:52:29.158358 systemd-logind[1534]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 10 07:52:29.158379 systemd-logind[1534]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 10 07:52:29.158745 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 10 07:52:29.168572 systemd-timesyncd[1466]: Contacted time server 23.186.168.125:123 (0.flatcar.pool.ntp.org).
Jul 10 07:52:29.168920 systemd-timesyncd[1466]: Initial clock synchronization to Thu 2025-07-10 07:52:29.114662 UTC.
Jul 10 07:52:29.216639 extend-filesystems[1527]: Resized filesystem in /dev/vda9
Jul 10 07:52:29.196065 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 10 07:52:29.196334 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 10 07:52:29.255996 bash[1592]: Updated "/home/core/.ssh/authorized_keys"
Jul 10 07:52:29.258732 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 10 07:52:29.271026 systemd[1]: Starting sshkeys.service...
Jul 10 07:52:29.387605 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 10 07:52:29.392173 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 10 07:52:29.423233 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 10 07:52:29.481897 locksmithd[1578]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 10 07:52:29.962659 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 10 07:52:29.994251 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 10 07:52:30.111385 containerd[1564]: time="2025-07-10T07:52:30Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 10 07:52:30.113736 containerd[1564]: time="2025-07-10T07:52:30.113627969Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Jul 10 07:52:30.145095 containerd[1564]: time="2025-07-10T07:52:30.145032528Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="21.3µs"
Jul 10 07:52:30.145095 containerd[1564]: time="2025-07-10T07:52:30.145081881Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 10 07:52:30.145095 containerd[1564]: time="2025-07-10T07:52:30.145105655Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 10 07:52:30.145390 containerd[1564]: time="2025-07-10T07:52:30.145359121Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 10 07:52:30.145390 containerd[1564]: time="2025-07-10T07:52:30.145386793Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 10 07:52:30.145450 containerd[1564]: time="2025-07-10T07:52:30.145426717Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 10 07:52:30.145544 containerd[1564]: time="2025-07-10T07:52:30.145517878Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 10 07:52:30.145544 containerd[1564]: time="2025-07-10T07:52:30.145540571Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 10 07:52:30.145862 containerd[1564]: time="2025-07-10T07:52:30.145829843Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 10 07:52:30.145862 containerd[1564]: time="2025-07-10T07:52:30.145857054Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 10 07:52:30.145920 containerd[1564]: time="2025-07-10T07:52:30.145872594Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 10 07:52:30.145920 containerd[1564]: time="2025-07-10T07:52:30.145884837Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 10 07:52:30.150062 containerd[1564]: time="2025-07-10T07:52:30.150031614Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 10 07:52:30.150367 containerd[1564]: time="2025-07-10T07:52:30.150340273Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 10 07:52:30.150418 containerd[1564]: time="2025-07-10T07:52:30.150388233Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 10 07:52:30.150418 containerd[1564]: time="2025-07-10T07:52:30.150402700Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 10 07:52:30.150473 containerd[1564]: time="2025-07-10T07:52:30.150436393Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 10 07:52:30.150756 containerd[1564]: time="2025-07-10T07:52:30.150728711Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 10 07:52:30.150892 containerd[1564]: time="2025-07-10T07:52:30.150829901Z" level=info msg="metadata content store policy set" policy=shared
Jul 10 07:52:30.171181 containerd[1564]: time="2025-07-10T07:52:30.171105819Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 10 07:52:30.171181 containerd[1564]: time="2025-07-10T07:52:30.171175159Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 10 07:52:30.171306 containerd[1564]: time="2025-07-10T07:52:30.171209743Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 10 07:52:30.171306 containerd[1564]: time="2025-07-10T07:52:30.171229020Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 10 07:52:30.171306 containerd[1564]: time="2025-07-10T07:52:30.171243998Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 10 07:52:30.171306 containerd[1564]: time="2025-07-10T07:52:30.171257693Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 10 07:52:30.171306 containerd[1564]: time="2025-07-10T07:52:30.171270898Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 10 07:52:30.171306 containerd[1564]: time="2025-07-10T07:52:30.171287850Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 10 07:52:30.171306 containerd[1564]: time="2025-07-10T07:52:30.171302057Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 10 07:52:30.171690 containerd[1564]: time="2025-07-10T07:52:30.171315622Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 10 07:52:30.171690 containerd[1564]: time="2025-07-10T07:52:30.171328125Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 10 07:52:30.171690 containerd[1564]: time="2025-07-10T07:52:30.171344075Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 10 07:52:30.171690 containerd[1564]: time="2025-07-10T07:52:30.171501070Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 10 07:52:30.171690 containerd[1564]: time="2025-07-10T07:52:30.171534683Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 10 07:52:30.171690 containerd[1564]: time="2025-07-10T07:52:30.171557045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 10 07:52:30.171690 containerd[1564]: time="2025-07-10T07:52:30.171569258Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 10 07:52:30.171690 containerd[1564]: time="2025-07-10T07:52:30.171581160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 10 07:52:30.171690 containerd[1564]: time="2025-07-10T07:52:30.171592892Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 10 07:52:30.171690 containerd[1564]: time="2025-07-10T07:52:30.171606768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 10 07:52:30.171690 containerd[1564]: time="2025-07-10T07:52:30.171620203Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 10 07:52:30.171690 containerd[1564]: time="2025-07-10T07:52:30.171638478Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 10 07:52:30.171690 containerd[1564]: time="2025-07-10T07:52:30.171650500Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 10 07:52:30.171690 containerd[1564]: time="2025-07-10T07:52:30.171664156Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 10 07:52:30.172028 containerd[1564]: time="2025-07-10T07:52:30.171741100Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 10 07:52:30.172028 containerd[1564]: time="2025-07-10T07:52:30.171758443Z" level=info msg="Start snapshots syncer"
Jul 10 07:52:30.172028 containerd[1564]: time="2025-07-10T07:52:30.171786866Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 10 07:52:30.174243 containerd[1564]: time="2025-07-10T07:52:30.174173853Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 10 07:52:30.175107 containerd[1564]: time="2025-07-10T07:52:30.174264543Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 10 07:52:30.175107 containerd[1564]: time="2025-07-10T07:52:30.174366404Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 10 07:52:30.175107 containerd[1564]: time="2025-07-10T07:52:30.174528307Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 10 07:52:30.175107 containerd[1564]: time="2025-07-10T07:52:30.174581427Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 10 07:52:30.175107 containerd[1564]: time="2025-07-10T07:52:30.174598579Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 10 07:52:30.175107 containerd[1564]: time="2025-07-10T07:52:30.174612335Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 10 07:52:30.175107 containerd[1564]: time="2025-07-10T07:52:30.174627103Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 10 07:52:30.175107 containerd[1564]: time="2025-07-10T07:52:30.174654614Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 10 07:52:30.175107 containerd[1564]: time="2025-07-10T07:52:30.174675333Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 10 07:52:30.175107 containerd[1564]: time="2025-07-10T07:52:30.174702103Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 10 07:52:30.175107 containerd[1564]: time="2025-07-10T07:52:30.174715949Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 10 07:52:30.175107 containerd[1564]: time="2025-07-10T07:52:30.174729465Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 10 07:52:30.175107 containerd[1564]: time="2025-07-10T07:52:30.174785680Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 10 07:52:30.175107 containerd[1564]: time="2025-07-10T07:52:30.174804796Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 10 07:52:30.175432 containerd[1564]: time="2025-07-10T07:52:30.174816338Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 10 07:52:30.175432 containerd[1564]: time="2025-07-10T07:52:30.174829432Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 10 07:52:30.175432 containerd[1564]: time="2025-07-10T07:52:30.174839481Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 10 07:52:30.175432 containerd[1564]: time="2025-07-10T07:52:30.174854128Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 10 07:52:30.175432 containerd[1564]: time="2025-07-10T07:52:30.174867914Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 10 07:52:30.175432 containerd[1564]: time="2025-07-10T07:52:30.174888333Z" level=info msg="runtime interface created"
Jul 10 07:52:30.175432 containerd[1564]: time="2025-07-10T07:52:30.174894584Z" level=info msg="created NRI interface"
Jul 10 07:52:30.175432 containerd[1564]: time="2025-07-10T07:52:30.174904022Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 10 07:52:30.175432 containerd[1564]: time="2025-07-10T07:52:30.174917868Z" level=info msg="Connect containerd service"
Jul 10 07:52:30.175432 containerd[1564]: time="2025-07-10T07:52:30.174989262Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 10 07:52:30.176191
containerd[1564]: time="2025-07-10T07:52:30.176001851Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 07:52:30.229296 tar[1549]: linux-amd64/LICENSE Jul 10 07:52:30.229296 tar[1549]: linux-amd64/README.md Jul 10 07:52:30.248645 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 10 07:52:30.577075 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 10 07:52:30.506146 systemd-networkd[1452]: eth0: Gained IPv6LL Jul 10 07:52:30.577510 sshd_keygen[1568]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 07:52:30.576232 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 07:52:30.578479 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 07:52:30.585669 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 07:52:30.591246 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 07:52:30.660043 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 10 07:52:30.665695 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 07:52:30.672903 systemd[1]: Started sshd@0-172.24.4.91:22-172.24.4.1:52294.service - OpenSSH per-connection server daemon (172.24.4.1:52294). Jul 10 07:52:30.697097 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 07:52:30.708348 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 07:52:30.708631 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 07:52:30.713751 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jul 10 07:52:30.732948 containerd[1564]: time="2025-07-10T07:52:30.732903841Z" level=info msg="Start subscribing containerd event" Jul 10 07:52:30.735039 containerd[1564]: time="2025-07-10T07:52:30.734138536Z" level=info msg="Start recovering state" Jul 10 07:52:30.735039 containerd[1564]: time="2025-07-10T07:52:30.734340124Z" level=info msg="Start event monitor" Jul 10 07:52:30.735039 containerd[1564]: time="2025-07-10T07:52:30.734368207Z" level=info msg="Start cni network conf syncer for default" Jul 10 07:52:30.735039 containerd[1564]: time="2025-07-10T07:52:30.734378055Z" level=info msg="Start streaming server" Jul 10 07:52:30.735039 containerd[1564]: time="2025-07-10T07:52:30.734399916Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 10 07:52:30.735039 containerd[1564]: time="2025-07-10T07:52:30.734409935Z" level=info msg="runtime interface starting up..." Jul 10 07:52:30.735039 containerd[1564]: time="2025-07-10T07:52:30.734417099Z" level=info msg="starting plugins..." Jul 10 07:52:30.735039 containerd[1564]: time="2025-07-10T07:52:30.734432638Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 10 07:52:30.735039 containerd[1564]: time="2025-07-10T07:52:30.734062884Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 07:52:30.735039 containerd[1564]: time="2025-07-10T07:52:30.734623125Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 07:52:30.735039 containerd[1564]: time="2025-07-10T07:52:30.734812681Z" level=info msg="containerd successfully booted in 0.624471s" Jul 10 07:52:30.735130 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 07:52:30.764759 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 07:52:30.769183 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 07:52:30.772538 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Jul 10 07:52:30.772894 systemd[1]: Reached target getty.target - Login Prompts. Jul 10 07:52:32.043537 sshd[1644]: Accepted publickey for core from 172.24.4.1 port 52294 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM Jul 10 07:52:32.044686 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 07:52:32.074147 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 07:52:32.078733 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 07:52:32.100273 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 10 07:52:32.100913 systemd-logind[1534]: New session 1 of user core. Jul 10 07:52:32.151725 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 07:52:32.162300 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 10 07:52:32.187497 (systemd)[1661]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 07:52:32.197166 systemd-logind[1534]: New session c1 of user core. Jul 10 07:52:32.402536 systemd[1661]: Queued start job for default target default.target. Jul 10 07:52:32.411201 systemd[1661]: Created slice app.slice - User Application Slice. Jul 10 07:52:32.411691 systemd[1661]: Reached target paths.target - Paths. Jul 10 07:52:32.411747 systemd[1661]: Reached target timers.target - Timers. Jul 10 07:52:32.416081 systemd[1661]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 07:52:32.428415 systemd[1661]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 07:52:32.428544 systemd[1661]: Reached target sockets.target - Sockets. Jul 10 07:52:32.428588 systemd[1661]: Reached target basic.target - Basic System. Jul 10 07:52:32.428627 systemd[1661]: Reached target default.target - Main User Target. Jul 10 07:52:32.428655 systemd[1661]: Startup finished in 220ms. 
Jul 10 07:52:32.429710 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 07:52:32.437190 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 07:52:32.660092 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 10 07:52:32.790711 systemd[1]: Started sshd@1-172.24.4.91:22-172.24.4.1:52306.service - OpenSSH per-connection server daemon (172.24.4.1:52306). Jul 10 07:52:33.627097 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 07:52:33.653182 (kubelet)[1681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 07:52:34.646183 sshd[1673]: Accepted publickey for core from 172.24.4.1 port 52306 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM Jul 10 07:52:34.649494 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 07:52:34.659234 systemd-logind[1534]: New session 2 of user core. Jul 10 07:52:34.667220 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 10 07:52:35.293800 sshd[1686]: Connection closed by 172.24.4.1 port 52306 Jul 10 07:52:35.298553 sshd-session[1673]: pam_unix(sshd:session): session closed for user core Jul 10 07:52:35.313639 systemd[1]: sshd@1-172.24.4.91:22-172.24.4.1:52306.service: Deactivated successfully. Jul 10 07:52:35.318692 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 07:52:35.320709 systemd-logind[1534]: Session 2 logged out. Waiting for processes to exit. Jul 10 07:52:35.324153 systemd-logind[1534]: Removed session 2. Jul 10 07:52:35.326296 systemd[1]: Started sshd@2-172.24.4.91:22-172.24.4.1:58200.service - OpenSSH per-connection server daemon (172.24.4.1:58200). 
Jul 10 07:52:35.446768 kubelet[1681]: E0710 07:52:35.446699 1681 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 07:52:35.449427 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 07:52:35.449637 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 07:52:35.450339 systemd[1]: kubelet.service: Consumed 2.702s CPU time, 265.4M memory peak. Jul 10 07:52:35.859166 login[1656]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 10 07:52:35.878051 login[1657]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 10 07:52:35.880065 systemd-logind[1534]: New session 3 of user core. Jul 10 07:52:35.893216 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 10 07:52:35.897871 systemd-logind[1534]: New session 4 of user core. Jul 10 07:52:35.902193 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 07:52:36.141024 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 10 07:52:36.160816 coreos-metadata[1523]: Jul 10 07:52:36.160 WARN failed to locate config-drive, using the metadata service API instead Jul 10 07:52:36.356697 coreos-metadata[1523]: Jul 10 07:52:36.356 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jul 10 07:52:36.593513 sshd[1694]: Accepted publickey for core from 172.24.4.1 port 58200 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM Jul 10 07:52:36.596520 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 07:52:36.609191 systemd-logind[1534]: New session 5 of user core. 
Jul 10 07:52:36.617405 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 10 07:52:36.702029 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 10 07:52:36.721308 coreos-metadata[1597]: Jul 10 07:52:36.721 WARN failed to locate config-drive, using the metadata service API instead Jul 10 07:52:36.764240 coreos-metadata[1597]: Jul 10 07:52:36.764 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jul 10 07:52:36.799907 coreos-metadata[1523]: Jul 10 07:52:36.799 INFO Fetch successful Jul 10 07:52:36.800578 coreos-metadata[1523]: Jul 10 07:52:36.800 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 10 07:52:36.823268 coreos-metadata[1523]: Jul 10 07:52:36.823 INFO Fetch successful Jul 10 07:52:36.823268 coreos-metadata[1523]: Jul 10 07:52:36.823 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jul 10 07:52:36.837897 coreos-metadata[1523]: Jul 10 07:52:36.837 INFO Fetch successful Jul 10 07:52:36.837897 coreos-metadata[1523]: Jul 10 07:52:36.837 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jul 10 07:52:36.850734 coreos-metadata[1523]: Jul 10 07:52:36.850 INFO Fetch successful Jul 10 07:52:36.850734 coreos-metadata[1523]: Jul 10 07:52:36.850 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jul 10 07:52:36.866386 coreos-metadata[1523]: Jul 10 07:52:36.866 INFO Fetch successful Jul 10 07:52:36.866386 coreos-metadata[1523]: Jul 10 07:52:36.866 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jul 10 07:52:36.877415 coreos-metadata[1523]: Jul 10 07:52:36.877 INFO Fetch successful Jul 10 07:52:36.932757 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 10 07:52:36.934607 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jul 10 07:52:37.185309 coreos-metadata[1597]: Jul 10 07:52:37.184 INFO Fetch successful Jul 10 07:52:37.185309 coreos-metadata[1597]: Jul 10 07:52:37.184 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 10 07:52:37.199476 coreos-metadata[1597]: Jul 10 07:52:37.199 INFO Fetch successful Jul 10 07:52:37.209124 unknown[1597]: wrote ssh authorized keys file for user: core Jul 10 07:52:37.259535 update-ssh-keys[1739]: Updated "/home/core/.ssh/authorized_keys" Jul 10 07:52:37.261578 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 10 07:52:37.266215 systemd[1]: Finished sshkeys.service. Jul 10 07:52:37.273493 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 07:52:37.274162 systemd[1]: Startup finished in 4.563s (kernel) + 16.289s (initrd) + 11.844s (userspace) = 32.698s. Jul 10 07:52:37.327193 sshd[1729]: Connection closed by 172.24.4.1 port 58200 Jul 10 07:52:37.328326 sshd-session[1694]: pam_unix(sshd:session): session closed for user core Jul 10 07:52:37.335207 systemd[1]: sshd@2-172.24.4.91:22-172.24.4.1:58200.service: Deactivated successfully. Jul 10 07:52:37.339318 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 07:52:37.343374 systemd-logind[1534]: Session 5 logged out. Waiting for processes to exit. Jul 10 07:52:37.346679 systemd-logind[1534]: Removed session 5. Jul 10 07:52:45.694691 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 07:52:45.699814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 07:52:46.206355 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 07:52:46.225566 (kubelet)[1753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 07:52:46.320601 kubelet[1753]: E0710 07:52:46.320516 1753 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 07:52:46.327701 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 07:52:46.328135 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 07:52:46.329349 systemd[1]: kubelet.service: Consumed 474ms CPU time, 110.8M memory peak. Jul 10 07:52:47.384899 systemd[1]: Started sshd@3-172.24.4.91:22-172.24.4.1:44214.service - OpenSSH per-connection server daemon (172.24.4.1:44214). Jul 10 07:52:48.812465 sshd[1761]: Accepted publickey for core from 172.24.4.1 port 44214 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM Jul 10 07:52:48.816916 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 07:52:48.836103 systemd-logind[1534]: New session 6 of user core. Jul 10 07:52:48.846335 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 10 07:52:49.556004 sshd[1764]: Connection closed by 172.24.4.1 port 44214 Jul 10 07:52:49.557126 sshd-session[1761]: pam_unix(sshd:session): session closed for user core Jul 10 07:52:49.573672 systemd[1]: sshd@3-172.24.4.91:22-172.24.4.1:44214.service: Deactivated successfully. Jul 10 07:52:49.578140 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 07:52:49.581389 systemd-logind[1534]: Session 6 logged out. Waiting for processes to exit. 
Jul 10 07:52:49.587342 systemd[1]: Started sshd@4-172.24.4.91:22-172.24.4.1:44226.service - OpenSSH per-connection server daemon (172.24.4.1:44226). Jul 10 07:52:49.590108 systemd-logind[1534]: Removed session 6. Jul 10 07:52:51.109081 sshd[1770]: Accepted publickey for core from 172.24.4.1 port 44226 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM Jul 10 07:52:51.112071 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 07:52:51.122921 systemd-logind[1534]: New session 7 of user core. Jul 10 07:52:51.135328 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 10 07:52:51.805016 sshd[1773]: Connection closed by 172.24.4.1 port 44226 Jul 10 07:52:51.806546 sshd-session[1770]: pam_unix(sshd:session): session closed for user core Jul 10 07:52:51.825425 systemd[1]: sshd@4-172.24.4.91:22-172.24.4.1:44226.service: Deactivated successfully. Jul 10 07:52:51.829443 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 07:52:51.832216 systemd-logind[1534]: Session 7 logged out. Waiting for processes to exit. Jul 10 07:52:51.839945 systemd[1]: Started sshd@5-172.24.4.91:22-172.24.4.1:44230.service - OpenSSH per-connection server daemon (172.24.4.1:44230). Jul 10 07:52:51.843465 systemd-logind[1534]: Removed session 7. Jul 10 07:52:53.131349 sshd[1779]: Accepted publickey for core from 172.24.4.1 port 44230 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM Jul 10 07:52:53.134715 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 07:52:53.147572 systemd-logind[1534]: New session 8 of user core. Jul 10 07:52:53.157302 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 10 07:52:53.824028 sshd[1782]: Connection closed by 172.24.4.1 port 44230 Jul 10 07:52:53.825252 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Jul 10 07:52:53.837237 systemd[1]: sshd@5-172.24.4.91:22-172.24.4.1:44230.service: Deactivated successfully. Jul 10 07:52:53.841606 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 07:52:53.846780 systemd-logind[1534]: Session 8 logged out. Waiting for processes to exit. Jul 10 07:52:53.849786 systemd[1]: Started sshd@6-172.24.4.91:22-172.24.4.1:58582.service - OpenSSH per-connection server daemon (172.24.4.1:58582). Jul 10 07:52:53.852652 systemd-logind[1534]: Removed session 8. Jul 10 07:52:55.319923 sshd[1788]: Accepted publickey for core from 172.24.4.1 port 58582 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM Jul 10 07:52:55.323304 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 07:52:55.338509 systemd-logind[1534]: New session 9 of user core. Jul 10 07:52:55.352329 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 10 07:52:55.868212 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 07:52:55.868874 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 07:52:55.888381 sudo[1792]: pam_unix(sudo:session): session closed for user root Jul 10 07:52:56.099004 sshd[1791]: Connection closed by 172.24.4.1 port 58582 Jul 10 07:52:56.100433 sshd-session[1788]: pam_unix(sshd:session): session closed for user core Jul 10 07:52:56.119652 systemd[1]: sshd@6-172.24.4.91:22-172.24.4.1:58582.service: Deactivated successfully. Jul 10 07:52:56.123926 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 07:52:56.126404 systemd-logind[1534]: Session 9 logged out. Waiting for processes to exit. 
Jul 10 07:52:56.134556 systemd[1]: Started sshd@7-172.24.4.91:22-172.24.4.1:58590.service - OpenSSH per-connection server daemon (172.24.4.1:58590). Jul 10 07:52:56.137025 systemd-logind[1534]: Removed session 9. Jul 10 07:52:56.444619 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 10 07:52:56.449073 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 07:52:56.998630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 07:52:57.012269 (kubelet)[1809]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 07:52:57.295535 kubelet[1809]: E0710 07:52:57.295196 1809 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 07:52:57.300922 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 07:52:57.301355 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 07:52:57.302554 systemd[1]: kubelet.service: Consumed 581ms CPU time, 109.5M memory peak. Jul 10 07:52:57.322432 sshd[1798]: Accepted publickey for core from 172.24.4.1 port 58590 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM Jul 10 07:52:57.325413 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 07:52:57.339080 systemd-logind[1534]: New session 10 of user core. Jul 10 07:52:57.347312 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 10 07:52:57.792450 sudo[1818]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 07:52:57.793161 sudo[1818]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 07:52:57.810350 sudo[1818]: pam_unix(sudo:session): session closed for user root Jul 10 07:52:57.825886 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 10 07:52:57.826629 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 07:52:57.849938 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 07:52:57.947720 augenrules[1840]: No rules Jul 10 07:52:57.948954 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 07:52:57.949554 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 07:52:57.951626 sudo[1817]: pam_unix(sudo:session): session closed for user root Jul 10 07:52:58.096462 sshd[1816]: Connection closed by 172.24.4.1 port 58590 Jul 10 07:52:58.099143 sshd-session[1798]: pam_unix(sshd:session): session closed for user core Jul 10 07:52:58.115871 systemd[1]: sshd@7-172.24.4.91:22-172.24.4.1:58590.service: Deactivated successfully. Jul 10 07:52:58.119837 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 07:52:58.122260 systemd-logind[1534]: Session 10 logged out. Waiting for processes to exit. Jul 10 07:52:58.129370 systemd[1]: Started sshd@8-172.24.4.91:22-172.24.4.1:58592.service - OpenSSH per-connection server daemon (172.24.4.1:58592). Jul 10 07:52:58.132403 systemd-logind[1534]: Removed session 10. 
Jul 10 07:52:59.888413 sshd[1849]: Accepted publickey for core from 172.24.4.1 port 58592 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM Jul 10 07:52:59.891752 sshd-session[1849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 07:52:59.906082 systemd-logind[1534]: New session 11 of user core. Jul 10 07:52:59.918491 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 10 07:53:00.325195 sudo[1853]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 07:53:00.325856 sudo[1853]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 07:53:01.147484 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 10 07:53:01.184473 (dockerd)[1870]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 07:53:02.087694 dockerd[1870]: time="2025-07-10T07:53:02.087569705Z" level=info msg="Starting up" Jul 10 07:53:02.089023 dockerd[1870]: time="2025-07-10T07:53:02.088920698Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 10 07:53:02.139774 dockerd[1870]: time="2025-07-10T07:53:02.139560354Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 10 07:53:02.202720 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3799502556-merged.mount: Deactivated successfully. Jul 10 07:53:02.219460 systemd[1]: var-lib-docker-metacopy\x2dcheck1875682249-merged.mount: Deactivated successfully. Jul 10 07:53:02.258144 dockerd[1870]: time="2025-07-10T07:53:02.258075073Z" level=info msg="Loading containers: start." 
Jul 10 07:53:02.302060 kernel: Initializing XFRM netlink socket Jul 10 07:53:02.912149 systemd-networkd[1452]: docker0: Link UP Jul 10 07:53:02.918182 dockerd[1870]: time="2025-07-10T07:53:02.918062219Z" level=info msg="Loading containers: done." Jul 10 07:53:02.948769 dockerd[1870]: time="2025-07-10T07:53:02.948665174Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 07:53:02.949174 dockerd[1870]: time="2025-07-10T07:53:02.949107497Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 10 07:53:02.949453 dockerd[1870]: time="2025-07-10T07:53:02.949412067Z" level=info msg="Initializing buildkit" Jul 10 07:53:03.045510 dockerd[1870]: time="2025-07-10T07:53:03.045320659Z" level=info msg="Completed buildkit initialization" Jul 10 07:53:03.071507 dockerd[1870]: time="2025-07-10T07:53:03.071408348Z" level=info msg="Daemon has completed initialization" Jul 10 07:53:03.071693 dockerd[1870]: time="2025-07-10T07:53:03.071569084Z" level=info msg="API listen on /run/docker.sock" Jul 10 07:53:03.072565 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 10 07:53:03.189004 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck963438428-merged.mount: Deactivated successfully. Jul 10 07:53:05.008948 containerd[1564]: time="2025-07-10T07:53:05.008421870Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 10 07:53:05.840549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount592520063.mount: Deactivated successfully. Jul 10 07:53:07.444013 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 10 07:53:07.450198 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 10 07:53:07.699884 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 07:53:07.721540 (kubelet)[2146]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 07:53:08.021563 kubelet[2146]: E0710 07:53:08.018710 2146 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 07:53:08.028423 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 07:53:08.028689 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 07:53:08.030641 systemd[1]: kubelet.service: Consumed 331ms CPU time, 108.7M memory peak. Jul 10 07:53:08.453807 containerd[1564]: time="2025-07-10T07:53:08.453546623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:53:08.457597 containerd[1564]: time="2025-07-10T07:53:08.457181723Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077752" Jul 10 07:53:08.466067 containerd[1564]: time="2025-07-10T07:53:08.464911502Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:53:08.483240 containerd[1564]: time="2025-07-10T07:53:08.482933558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:53:08.490758 containerd[1564]: time="2025-07-10T07:53:08.490556465Z" level=info msg="Pulled 
image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 3.481616681s"
Jul 10 07:53:08.491092 containerd[1564]: time="2025-07-10T07:53:08.490778867Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\""
Jul 10 07:53:08.505855 containerd[1564]: time="2025-07-10T07:53:08.505709754Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 10 07:53:11.124385 containerd[1564]: time="2025-07-10T07:53:11.124233022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:53:11.126716 containerd[1564]: time="2025-07-10T07:53:11.126669101Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713302"
Jul 10 07:53:11.127998 containerd[1564]: time="2025-07-10T07:53:11.127906011Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:53:11.135796 containerd[1564]: time="2025-07-10T07:53:11.135719362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:53:11.137473 containerd[1564]: time="2025-07-10T07:53:11.136982782Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 2.631129025s"
Jul 10 07:53:11.137473 containerd[1564]: time="2025-07-10T07:53:11.137059527Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\""
Jul 10 07:53:11.138544 containerd[1564]: time="2025-07-10T07:53:11.138512665Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 10 07:53:13.206016 containerd[1564]: time="2025-07-10T07:53:13.205663999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:53:13.208491 containerd[1564]: time="2025-07-10T07:53:13.208450475Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783679"
Jul 10 07:53:13.209784 containerd[1564]: time="2025-07-10T07:53:13.209735904Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:53:13.219483 containerd[1564]: time="2025-07-10T07:53:13.219335625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:53:13.223683 containerd[1564]: time="2025-07-10T07:53:13.223288535Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 2.084732658s"
Jul 10 07:53:13.223683 containerd[1564]: time="2025-07-10T07:53:13.223350783Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\""
Jul 10 07:53:13.225258 containerd[1564]: time="2025-07-10T07:53:13.225181342Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 10 07:53:14.588760 update_engine[1536]: I20250710 07:53:14.588407 1536 update_attempter.cc:509] Updating boot flags...
Jul 10 07:53:15.205759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount659825348.mount: Deactivated successfully.
Jul 10 07:53:16.110203 containerd[1564]: time="2025-07-10T07:53:16.109646597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:53:16.114338 containerd[1564]: time="2025-07-10T07:53:16.113433474Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383951"
Jul 10 07:53:16.115159 containerd[1564]: time="2025-07-10T07:53:16.115072317Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:53:16.119899 containerd[1564]: time="2025-07-10T07:53:16.118636633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:53:16.119899 containerd[1564]: time="2025-07-10T07:53:16.119338098Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 2.893721043s"
Jul 10 07:53:16.119899 containerd[1564]: time="2025-07-10T07:53:16.119405726Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\""
Jul 10 07:53:16.121628 containerd[1564]: time="2025-07-10T07:53:16.121606640Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 10 07:53:16.830717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2677510198.mount: Deactivated successfully.
Jul 10 07:53:18.198080 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 10 07:53:18.207676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 07:53:18.792133 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 07:53:18.798306 (kubelet)[2247]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 07:53:18.870105 containerd[1564]: time="2025-07-10T07:53:18.868575037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:53:18.871665 containerd[1564]: time="2025-07-10T07:53:18.871606985Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Jul 10 07:53:18.873104 containerd[1564]: time="2025-07-10T07:53:18.873068311Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:53:18.876757 containerd[1564]: time="2025-07-10T07:53:18.876688157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:53:18.878596 containerd[1564]: time="2025-07-10T07:53:18.878039686Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.756310055s"
Jul 10 07:53:18.878596 containerd[1564]: time="2025-07-10T07:53:18.878127952Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 10 07:53:18.882661 containerd[1564]: time="2025-07-10T07:53:18.882631676Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 10 07:53:18.899047 kubelet[2247]: E0710 07:53:18.898929 2247 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 07:53:18.902246 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 07:53:18.902403 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 07:53:18.902806 systemd[1]: kubelet.service: Consumed 595ms CPU time, 108.6M memory peak.
Jul 10 07:53:19.474286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1880380893.mount: Deactivated successfully.
Jul 10 07:53:19.488031 containerd[1564]: time="2025-07-10T07:53:19.487843009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 07:53:19.490398 containerd[1564]: time="2025-07-10T07:53:19.489924793Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jul 10 07:53:19.492118 containerd[1564]: time="2025-07-10T07:53:19.491933851Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 07:53:19.500088 containerd[1564]: time="2025-07-10T07:53:19.499913083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 07:53:19.502559 containerd[1564]: time="2025-07-10T07:53:19.502034793Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 619.227435ms"
Jul 10 07:53:19.502559 containerd[1564]: time="2025-07-10T07:53:19.502187592Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 10 07:53:19.505934 containerd[1564]: time="2025-07-10T07:53:19.505867608Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 10 07:53:20.207898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3078937918.mount: Deactivated successfully.
Jul 10 07:53:24.286744 containerd[1564]: time="2025-07-10T07:53:24.286295327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:53:24.290215 containerd[1564]: time="2025-07-10T07:53:24.290173389Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021"
Jul 10 07:53:24.292440 containerd[1564]: time="2025-07-10T07:53:24.292311223Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:53:24.297455 containerd[1564]: time="2025-07-10T07:53:24.297386487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:53:24.299980 containerd[1564]: time="2025-07-10T07:53:24.299728977Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.793796176s"
Jul 10 07:53:24.299980 containerd[1564]: time="2025-07-10T07:53:24.299829186Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jul 10 07:53:28.542793 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 07:53:28.545332 systemd[1]: kubelet.service: Consumed 595ms CPU time, 108.6M memory peak.
Jul 10 07:53:28.552100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 07:53:28.593802 systemd[1]: Reload requested from client PID 2339 ('systemctl') (unit session-11.scope)...
Jul 10 07:53:28.594073 systemd[1]: Reloading...
Jul 10 07:53:28.767006 zram_generator::config[2381]: No configuration found.
Jul 10 07:53:28.910222 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 07:53:29.064708 systemd[1]: Reloading finished in 470 ms.
Jul 10 07:53:29.137074 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 10 07:53:29.137171 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 10 07:53:29.137868 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 07:53:29.137917 systemd[1]: kubelet.service: Consumed 346ms CPU time, 98.3M memory peak.
Jul 10 07:53:29.140478 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 07:53:30.186684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 07:53:30.206730 (kubelet)[2451]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 10 07:53:30.286030 kubelet[2451]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 07:53:30.286030 kubelet[2451]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 10 07:53:30.286030 kubelet[2451]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 07:53:30.286030 kubelet[2451]: I0710 07:53:30.285573 2451 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 10 07:53:30.973031 kubelet[2451]: I0710 07:53:30.972208 2451 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 10 07:53:30.973031 kubelet[2451]: I0710 07:53:30.972328 2451 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 10 07:53:30.973429 kubelet[2451]: I0710 07:53:30.972950 2451 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 10 07:53:31.033292 kubelet[2451]: E0710 07:53:31.033210 2451 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.91:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.91:6443: connect: connection refused" logger="UnhandledError"
Jul 10 07:53:31.033623 kubelet[2451]: I0710 07:53:31.033385 2451 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 10 07:53:31.058699 kubelet[2451]: I0710 07:53:31.058651 2451 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 10 07:53:31.066259 kubelet[2451]: I0710 07:53:31.066175 2451 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 10 07:53:31.066475 kubelet[2451]: I0710 07:53:31.066370 2451 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 10 07:53:31.066567 kubelet[2451]: I0710 07:53:31.066526 2451 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 10 07:53:31.066866 kubelet[2451]: I0710 07:53:31.066564 2451 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4391-0-0-n-fdb14ef6d8.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 10 07:53:31.066866 kubelet[2451]: I0710 07:53:31.066862 2451 topology_manager.go:138] "Creating topology manager with none policy"
Jul 10 07:53:31.066866 kubelet[2451]: I0710 07:53:31.066878 2451 container_manager_linux.go:300] "Creating device plugin manager"
Jul 10 07:53:31.067672 kubelet[2451]: I0710 07:53:31.067101 2451 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 07:53:31.071691 kubelet[2451]: I0710 07:53:31.071582 2451 kubelet.go:408] "Attempting to sync node with API server"
Jul 10 07:53:31.071691 kubelet[2451]: I0710 07:53:31.071626 2451 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 10 07:53:31.071691 kubelet[2451]: I0710 07:53:31.071689 2451 kubelet.go:314] "Adding apiserver pod source"
Jul 10 07:53:31.072099 kubelet[2451]: I0710 07:53:31.071758 2451 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 10 07:53:31.083024 kubelet[2451]: W0710 07:53:31.082258 2451 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4391-0-0-n-fdb14ef6d8.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.91:6443: connect: connection refused
Jul 10 07:53:31.083024 kubelet[2451]: E0710 07:53:31.082363 2451 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4391-0-0-n-fdb14ef6d8.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.91:6443: connect: connection refused" logger="UnhandledError"
Jul 10 07:53:31.084379 kubelet[2451]: W0710 07:53:31.084275 2451 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.91:6443: connect: connection refused
Jul 10 07:53:31.084546 kubelet[2451]: E0710 07:53:31.084395 2451 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.91:6443: connect: connection refused" logger="UnhandledError"
Jul 10 07:53:31.084672 kubelet[2451]: I0710 07:53:31.084618 2451 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Jul 10 07:53:31.086156 kubelet[2451]: I0710 07:53:31.086070 2451 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 10 07:53:31.086359 kubelet[2451]: W0710 07:53:31.086316 2451 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 10 07:53:31.095016 kubelet[2451]: I0710 07:53:31.094621 2451 server.go:1274] "Started kubelet"
Jul 10 07:53:31.099610 kubelet[2451]: I0710 07:53:31.099407 2451 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 10 07:53:31.107137 kubelet[2451]: I0710 07:53:31.107067 2451 server.go:449] "Adding debug handlers to kubelet server"
Jul 10 07:53:31.110015 kubelet[2451]: I0710 07:53:31.109487 2451 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 10 07:53:31.110738 kubelet[2451]: I0710 07:53:31.110697 2451 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 10 07:53:31.115439 kubelet[2451]: E0710 07:53:31.111504 2451 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.91:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.91:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4391-0-0-n-fdb14ef6d8.novalocal.1850d499922b95bb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4391-0-0-n-fdb14ef6d8.novalocal,UID:ci-4391-0-0-n-fdb14ef6d8.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4391-0-0-n-fdb14ef6d8.novalocal,},FirstTimestamp:2025-07-10 07:53:31.094537659 +0000 UTC m=+0.872413856,LastTimestamp:2025-07-10 07:53:31.094537659 +0000 UTC m=+0.872413856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4391-0-0-n-fdb14ef6d8.novalocal,}"
Jul 10 07:53:31.122222 kubelet[2451]: I0710 07:53:31.122147 2451 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 10 07:53:31.124058 kubelet[2451]: E0710 07:53:31.123758 2451 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 10 07:53:31.124058 kubelet[2451]: I0710 07:53:31.123999 2451 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 10 07:53:31.127364 kubelet[2451]: I0710 07:53:31.126927 2451 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 10 07:53:31.127364 kubelet[2451]: I0710 07:53:31.127375 2451 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 10 07:53:31.127656 kubelet[2451]: I0710 07:53:31.127499 2451 reconciler.go:26] "Reconciler: start to sync state"
Jul 10 07:53:31.128185 kubelet[2451]: W0710 07:53:31.128068 2451 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.91:6443: connect: connection refused
Jul 10 07:53:31.128185 kubelet[2451]: E0710 07:53:31.128157 2451 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.91:6443: connect: connection refused" logger="UnhandledError"
Jul 10 07:53:31.128619 kubelet[2451]: E0710 07:53:31.128552 2451 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4391-0-0-n-fdb14ef6d8.novalocal\" not found"
Jul 10 07:53:31.128736 kubelet[2451]: E0710 07:53:31.128665 2451 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4391-0-0-n-fdb14ef6d8.novalocal?timeout=10s\": dial tcp 172.24.4.91:6443: connect: connection refused" interval="200ms"
Jul 10 07:53:31.132033 kubelet[2451]: I0710 07:53:31.131553 2451 factory.go:221] Registration of the systemd container factory successfully
Jul 10 07:53:31.132033 kubelet[2451]: I0710 07:53:31.131810 2451 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 10 07:53:31.138147 kubelet[2451]: I0710 07:53:31.138116 2451 factory.go:221] Registration of the containerd container factory successfully
Jul 10 07:53:31.156345 kubelet[2451]: I0710 07:53:31.156249 2451 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 10 07:53:31.157094 kubelet[2451]: I0710 07:53:31.157062 2451 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 10 07:53:31.157159 kubelet[2451]: I0710 07:53:31.157129 2451 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 07:53:31.164148 kubelet[2451]: I0710 07:53:31.162922 2451 policy_none.go:49] "None policy: Start"
Jul 10 07:53:31.166576 kubelet[2451]: I0710 07:53:31.166547 2451 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 10 07:53:31.166652 kubelet[2451]: I0710 07:53:31.166598 2451 state_mem.go:35] "Initializing new in-memory state store"
Jul 10 07:53:31.170742 kubelet[2451]: I0710 07:53:31.170693 2451 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 10 07:53:31.172430 kubelet[2451]: I0710 07:53:31.172406 2451 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 10 07:53:31.172597 kubelet[2451]: I0710 07:53:31.172583 2451 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 10 07:53:31.172720 kubelet[2451]: I0710 07:53:31.172707 2451 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 10 07:53:31.172862 kubelet[2451]: E0710 07:53:31.172839 2451 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 10 07:53:31.175028 kubelet[2451]: W0710 07:53:31.174906 2451 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.91:6443: connect: connection refused
Jul 10 07:53:31.175297 kubelet[2451]: E0710 07:53:31.175246 2451 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.91:6443: connect: connection refused" logger="UnhandledError"
Jul 10 07:53:31.183185 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 10 07:53:31.197502 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 10 07:53:31.201821 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 10 07:53:31.212780 kubelet[2451]: I0710 07:53:31.212316 2451 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 10 07:53:31.212780 kubelet[2451]: I0710 07:53:31.212591 2451 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 10 07:53:31.212780 kubelet[2451]: I0710 07:53:31.212631 2451 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 10 07:53:31.215938 kubelet[2451]: I0710 07:53:31.215907 2451 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 10 07:53:31.220323 kubelet[2451]: E0710 07:53:31.220240 2451 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4391-0-0-n-fdb14ef6d8.novalocal\" not found"
Jul 10 07:53:31.296083 systemd[1]: Created slice kubepods-burstable-pod4f2126965e94e01c570d47260dd27edb.slice - libcontainer container kubepods-burstable-pod4f2126965e94e01c570d47260dd27edb.slice.
Jul 10 07:53:31.309071 systemd[1]: Created slice kubepods-burstable-pod3c4cd7c110ee20037243a111b3cbe07f.slice - libcontainer container kubepods-burstable-pod3c4cd7c110ee20037243a111b3cbe07f.slice.
Jul 10 07:53:31.315042 kubelet[2451]: I0710 07:53:31.314935 2451 kubelet_node_status.go:72] "Attempting to register node" node="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:31.315949 kubelet[2451]: E0710 07:53:31.315689 2451 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.91:6443/api/v1/nodes\": dial tcp 172.24.4.91:6443: connect: connection refused" node="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:31.329236 systemd[1]: Created slice kubepods-burstable-podcc6936fdb591d50d46c2c5c4d5c57ff4.slice - libcontainer container kubepods-burstable-podcc6936fdb591d50d46c2c5c4d5c57ff4.slice.
Jul 10 07:53:31.331031 kubelet[2451]: I0710 07:53:31.330851 2451 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c4cd7c110ee20037243a111b3cbe07f-k8s-certs\") pod \"kube-controller-manager-ci-4391-0-0-n-fdb14ef6d8.novalocal\" (UID: \"3c4cd7c110ee20037243a111b3cbe07f\") " pod="kube-system/kube-controller-manager-ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:31.331031 kubelet[2451]: I0710 07:53:31.331012 2451 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3c4cd7c110ee20037243a111b3cbe07f-kubeconfig\") pod \"kube-controller-manager-ci-4391-0-0-n-fdb14ef6d8.novalocal\" (UID: \"3c4cd7c110ee20037243a111b3cbe07f\") " pod="kube-system/kube-controller-manager-ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:31.332373 kubelet[2451]: I0710 07:53:31.331100 2451 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c4cd7c110ee20037243a111b3cbe07f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4391-0-0-n-fdb14ef6d8.novalocal\" (UID: \"3c4cd7c110ee20037243a111b3cbe07f\") " pod="kube-system/kube-controller-manager-ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:31.333110 kubelet[2451]: I0710 07:53:31.333048 2451 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cc6936fdb591d50d46c2c5c4d5c57ff4-kubeconfig\") pod \"kube-scheduler-ci-4391-0-0-n-fdb14ef6d8.novalocal\" (UID: \"cc6936fdb591d50d46c2c5c4d5c57ff4\") " pod="kube-system/kube-scheduler-ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:31.333267 kubelet[2451]: I0710 07:53:31.333135 2451 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f2126965e94e01c570d47260dd27edb-ca-certs\") pod \"kube-apiserver-ci-4391-0-0-n-fdb14ef6d8.novalocal\" (UID: \"4f2126965e94e01c570d47260dd27edb\") " pod="kube-system/kube-apiserver-ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:31.333267 kubelet[2451]: I0710 07:53:31.333180 2451 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c4cd7c110ee20037243a111b3cbe07f-ca-certs\") pod \"kube-controller-manager-ci-4391-0-0-n-fdb14ef6d8.novalocal\" (UID: \"3c4cd7c110ee20037243a111b3cbe07f\") " pod="kube-system/kube-controller-manager-ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:31.333267 kubelet[2451]: I0710 07:53:31.333220 2451 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3c4cd7c110ee20037243a111b3cbe07f-flexvolume-dir\") pod \"kube-controller-manager-ci-4391-0-0-n-fdb14ef6d8.novalocal\" (UID: \"3c4cd7c110ee20037243a111b3cbe07f\") " pod="kube-system/kube-controller-manager-ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:31.333267 kubelet[2451]: I0710 07:53:31.333259 2451 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f2126965e94e01c570d47260dd27edb-k8s-certs\") pod \"kube-apiserver-ci-4391-0-0-n-fdb14ef6d8.novalocal\" (UID: \"4f2126965e94e01c570d47260dd27edb\") " pod="kube-system/kube-apiserver-ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:31.333591 kubelet[2451]: I0710 07:53:31.333302 2451 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f2126965e94e01c570d47260dd27edb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4391-0-0-n-fdb14ef6d8.novalocal\" (UID: \"4f2126965e94e01c570d47260dd27edb\") " pod="kube-system/kube-apiserver-ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:31.334528 kubelet[2451]: E0710 07:53:31.334455 2451 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4391-0-0-n-fdb14ef6d8.novalocal?timeout=10s\": dial tcp 172.24.4.91:6443: connect: connection refused" interval="400ms"
Jul 10 07:53:31.520758 kubelet[2451]: I0710 07:53:31.520678 2451 kubelet_node_status.go:72] "Attempting to register node" node="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:31.521805 kubelet[2451]: E0710 07:53:31.521705 2451 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.91:6443/api/v1/nodes\": dial tcp 172.24.4.91:6443: connect: connection refused" node="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:31.610549 containerd[1564]: time="2025-07-10T07:53:31.610149574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4391-0-0-n-fdb14ef6d8.novalocal,Uid:4f2126965e94e01c570d47260dd27edb,Namespace:kube-system,Attempt:0,}"
Jul 10 07:53:31.623739 containerd[1564]: time="2025-07-10T07:53:31.623478562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4391-0-0-n-fdb14ef6d8.novalocal,Uid:3c4cd7c110ee20037243a111b3cbe07f,Namespace:kube-system,Attempt:0,}"
Jul 10 07:53:31.642794 containerd[1564]: time="2025-07-10T07:53:31.642302431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4391-0-0-n-fdb14ef6d8.novalocal,Uid:cc6936fdb591d50d46c2c5c4d5c57ff4,Namespace:kube-system,Attempt:0,}"
Jul 10 07:53:31.736421 kubelet[2451]: E0710 07:53:31.736343 2451 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4391-0-0-n-fdb14ef6d8.novalocal?timeout=10s\": dial tcp 172.24.4.91:6443: connect: connection refused" interval="800ms"
Jul 10 07:53:31.742316 containerd[1564]: time="2025-07-10T07:53:31.742246223Z" level=info msg="connecting to shim 8a08639bfddb1722aa6d28fc677b7bc830af0f96cda258bfd54f23b49545ca25" address="unix:///run/containerd/s/ebb30202d8383d416ce7df21006771cd3125ae2d5e6d025ad5206a36194a32c7" namespace=k8s.io protocol=ttrpc version=3
Jul 10 07:53:31.770088 containerd[1564]: time="2025-07-10T07:53:31.764923595Z" level=info msg="connecting to shim dc9eb76669a8493b370669d932529dc5be0762b018d76e8188bc04725ee7dd51" address="unix:///run/containerd/s/32943014320683b844bcb0e917d1bf843b89344e2973122fb873f8bfaebeb227" namespace=k8s.io protocol=ttrpc version=3
Jul 10 07:53:31.801415 containerd[1564]: time="2025-07-10T07:53:31.801331291Z" level=info msg="connecting to shim b6497c403f245f4b7ff91b81583b4f0356759f58ba00c6fb8c5b3a38acd5bb20" address="unix:///run/containerd/s/6d10c40bde76f11649503de9249fab43e1404541a91c44a32f66a721dc86e089" namespace=k8s.io protocol=ttrpc version=3
Jul 10 07:53:31.807309 systemd[1]: Started cri-containerd-8a08639bfddb1722aa6d28fc677b7bc830af0f96cda258bfd54f23b49545ca25.scope - libcontainer container 8a08639bfddb1722aa6d28fc677b7bc830af0f96cda258bfd54f23b49545ca25.
Jul 10 07:53:31.818635 systemd[1]: Started cri-containerd-dc9eb76669a8493b370669d932529dc5be0762b018d76e8188bc04725ee7dd51.scope - libcontainer container dc9eb76669a8493b370669d932529dc5be0762b018d76e8188bc04725ee7dd51.
Jul 10 07:53:31.850168 systemd[1]: Started cri-containerd-b6497c403f245f4b7ff91b81583b4f0356759f58ba00c6fb8c5b3a38acd5bb20.scope - libcontainer container b6497c403f245f4b7ff91b81583b4f0356759f58ba00c6fb8c5b3a38acd5bb20.
Jul 10 07:53:31.908235 containerd[1564]: time="2025-07-10T07:53:31.908109381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4391-0-0-n-fdb14ef6d8.novalocal,Uid:4f2126965e94e01c570d47260dd27edb,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a08639bfddb1722aa6d28fc677b7bc830af0f96cda258bfd54f23b49545ca25\""
Jul 10 07:53:31.915254 containerd[1564]: time="2025-07-10T07:53:31.914191144Z" level=info msg="CreateContainer within sandbox \"8a08639bfddb1722aa6d28fc677b7bc830af0f96cda258bfd54f23b49545ca25\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 10 07:53:31.925848 kubelet[2451]: I0710 07:53:31.925803 2451 kubelet_node_status.go:72] "Attempting to register node" node="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:31.926619 kubelet[2451]: E0710 07:53:31.926582 2451 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.91:6443/api/v1/nodes\": dial tcp 172.24.4.91:6443: connect: connection refused" node="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:31.935687 containerd[1564]: time="2025-07-10T07:53:31.935629327Z" level=info msg="Container 7783921bf1bb6d6a0a977a4b0ba7de21665abe446507d48a0abf3267601b56e2: CDI devices from CRI Config.CDIDevices: []"
Jul 10 07:53:31.937328 containerd[1564]: time="2025-07-10T07:53:31.937267186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4391-0-0-n-fdb14ef6d8.novalocal,Uid:3c4cd7c110ee20037243a111b3cbe07f,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc9eb76669a8493b370669d932529dc5be0762b018d76e8188bc04725ee7dd51\""
Jul 10 07:53:31.939077 containerd[1564]: time="2025-07-10T07:53:31.939000315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4391-0-0-n-fdb14ef6d8.novalocal,Uid:cc6936fdb591d50d46c2c5c4d5c57ff4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6497c403f245f4b7ff91b81583b4f0356759f58ba00c6fb8c5b3a38acd5bb20\""
Jul 10 07:53:31.943468 containerd[1564]: time="2025-07-10T07:53:31.943435232Z" level=info msg="CreateContainer within sandbox \"dc9eb76669a8493b370669d932529dc5be0762b018d76e8188bc04725ee7dd51\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 10 07:53:31.944528 containerd[1564]: time="2025-07-10T07:53:31.944319545Z" level=info msg="CreateContainer within sandbox \"b6497c403f245f4b7ff91b81583b4f0356759f58ba00c6fb8c5b3a38acd5bb20\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 10 07:53:31.952558 containerd[1564]: time="2025-07-10T07:53:31.952438027Z" level=info msg="CreateContainer within sandbox \"8a08639bfddb1722aa6d28fc677b7bc830af0f96cda258bfd54f23b49545ca25\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7783921bf1bb6d6a0a977a4b0ba7de21665abe446507d48a0abf3267601b56e2\""
Jul 10 07:53:31.957425 containerd[1564]: time="2025-07-10T07:53:31.953329142Z" level=info msg="StartContainer for \"7783921bf1bb6d6a0a977a4b0ba7de21665abe446507d48a0abf3267601b56e2\""
Jul 10 07:53:31.958775 containerd[1564]: time="2025-07-10T07:53:31.958741859Z" level=info msg="connecting to shim 7783921bf1bb6d6a0a977a4b0ba7de21665abe446507d48a0abf3267601b56e2" address="unix:///run/containerd/s/ebb30202d8383d416ce7df21006771cd3125ae2d5e6d025ad5206a36194a32c7" protocol=ttrpc version=3
Jul 10 07:53:31.972133 containerd[1564]: time="2025-07-10T07:53:31.972073080Z" level=info msg="Container 64f2b088c78e42be00750715893942276066c053dcf294df3f3682756781ddcd: CDI devices from CRI Config.CDIDevices: []"
Jul 10 07:53:31.985339 containerd[1564]: time="2025-07-10T07:53:31.984982370Z" level=info msg="Container 8beae7f9d1920251cb76ccee138a7630e517b1fe9f19f7ec193abade5565c16d: CDI devices from CRI Config.CDIDevices: []"
Jul 10 07:53:31.989392 systemd[1]: Started cri-containerd-7783921bf1bb6d6a0a977a4b0ba7de21665abe446507d48a0abf3267601b56e2.scope - libcontainer container 7783921bf1bb6d6a0a977a4b0ba7de21665abe446507d48a0abf3267601b56e2.
Jul 10 07:53:31.995730 containerd[1564]: time="2025-07-10T07:53:31.995677146Z" level=info msg="CreateContainer within sandbox \"dc9eb76669a8493b370669d932529dc5be0762b018d76e8188bc04725ee7dd51\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"64f2b088c78e42be00750715893942276066c053dcf294df3f3682756781ddcd\""
Jul 10 07:53:31.998692 containerd[1564]: time="2025-07-10T07:53:31.998649163Z" level=info msg="StartContainer for \"64f2b088c78e42be00750715893942276066c053dcf294df3f3682756781ddcd\""
Jul 10 07:53:32.001344 containerd[1564]: time="2025-07-10T07:53:32.001281942Z" level=info msg="connecting to shim 64f2b088c78e42be00750715893942276066c053dcf294df3f3682756781ddcd" address="unix:///run/containerd/s/32943014320683b844bcb0e917d1bf843b89344e2973122fb873f8bfaebeb227" protocol=ttrpc version=3
Jul 10 07:53:32.003953 containerd[1564]: time="2025-07-10T07:53:32.003889684Z" level=info msg="CreateContainer within sandbox \"b6497c403f245f4b7ff91b81583b4f0356759f58ba00c6fb8c5b3a38acd5bb20\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8beae7f9d1920251cb76ccee138a7630e517b1fe9f19f7ec193abade5565c16d\""
Jul 10 07:53:32.007336 containerd[1564]: time="2025-07-10T07:53:32.007138752Z" level=info msg="StartContainer for \"8beae7f9d1920251cb76ccee138a7630e517b1fe9f19f7ec193abade5565c16d\""
Jul 10 07:53:32.017397 containerd[1564]: time="2025-07-10T07:53:32.014363943Z" level=info msg="connecting to shim 8beae7f9d1920251cb76ccee138a7630e517b1fe9f19f7ec193abade5565c16d" address="unix:///run/containerd/s/6d10c40bde76f11649503de9249fab43e1404541a91c44a32f66a721dc86e089" protocol=ttrpc version=3
Jul 10 07:53:32.037647 systemd[1]: Started cri-containerd-64f2b088c78e42be00750715893942276066c053dcf294df3f3682756781ddcd.scope - libcontainer container 64f2b088c78e42be00750715893942276066c053dcf294df3f3682756781ddcd.
Jul 10 07:53:32.053183 systemd[1]: Started cri-containerd-8beae7f9d1920251cb76ccee138a7630e517b1fe9f19f7ec193abade5565c16d.scope - libcontainer container 8beae7f9d1920251cb76ccee138a7630e517b1fe9f19f7ec193abade5565c16d.
Jul 10 07:53:32.071347 kubelet[2451]: W0710 07:53:32.071221 2451 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4391-0-0-n-fdb14ef6d8.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.91:6443: connect: connection refused
Jul 10 07:53:32.071906 kubelet[2451]: E0710 07:53:32.071845 2451 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4391-0-0-n-fdb14ef6d8.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.91:6443: connect: connection refused" logger="UnhandledError"
Jul 10 07:53:32.110051 containerd[1564]: time="2025-07-10T07:53:32.108931009Z" level=info msg="StartContainer for \"7783921bf1bb6d6a0a977a4b0ba7de21665abe446507d48a0abf3267601b56e2\" returns successfully"
Jul 10 07:53:32.159282 kubelet[2451]: W0710 07:53:32.158671 2451 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.91:6443: connect: connection refused
Jul 10 07:53:32.159282 kubelet[2451]: E0710 07:53:32.159077 2451 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.91:6443: connect: connection refused" logger="UnhandledError"
Jul 10 07:53:32.160365 containerd[1564]: time="2025-07-10T07:53:32.160322209Z" level=info msg="StartContainer for \"64f2b088c78e42be00750715893942276066c053dcf294df3f3682756781ddcd\" returns successfully"
Jul 10 07:53:32.172574 containerd[1564]: time="2025-07-10T07:53:32.172506061Z" level=info msg="StartContainer for \"8beae7f9d1920251cb76ccee138a7630e517b1fe9f19f7ec193abade5565c16d\" returns successfully"
Jul 10 07:53:32.730643 kubelet[2451]: I0710 07:53:32.730600 2451 kubelet_node_status.go:72] "Attempting to register node" node="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:34.430999 kubelet[2451]: I0710 07:53:34.430879 2451 kubelet_node_status.go:75] "Successfully registered node" node="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:34.430999 kubelet[2451]: E0710 07:53:34.430949 2451 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4391-0-0-n-fdb14ef6d8.novalocal\": node \"ci-4391-0-0-n-fdb14ef6d8.novalocal\" not found"
Jul 10 07:53:34.491289 kubelet[2451]: E0710 07:53:34.491214 2451 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4391-0-0-n-fdb14ef6d8.novalocal\" not found"
Jul 10 07:53:34.591939 kubelet[2451]: E0710 07:53:34.591885 2451 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4391-0-0-n-fdb14ef6d8.novalocal\" not found"
Jul 10 07:53:35.084611 kubelet[2451]: I0710 07:53:35.084461 2451 apiserver.go:52] "Watching apiserver"
Jul 10 07:53:35.127870 kubelet[2451]: I0710 07:53:35.127748 2451 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 10 07:53:35.484150 kubelet[2451]: W0710 07:53:35.483220 2451 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 10 07:53:37.096012 systemd[1]: Reload requested from client PID 2724 ('systemctl') (unit session-11.scope)...
Jul 10 07:53:37.098172 systemd[1]: Reloading...
Jul 10 07:53:37.236088 zram_generator::config[2769]: No configuration found.
Jul 10 07:53:37.436679 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 07:53:37.611750 systemd[1]: Reloading finished in 512 ms.
Jul 10 07:53:37.649385 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 07:53:37.654084 kubelet[2451]: I0710 07:53:37.651092 2451 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 10 07:53:37.675025 systemd[1]: kubelet.service: Deactivated successfully.
Jul 10 07:53:37.675882 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 07:53:37.676044 systemd[1]: kubelet.service: Consumed 1.594s CPU time, 128.6M memory peak.
Jul 10 07:53:37.680528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 07:53:38.243898 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 07:53:38.266737 (kubelet)[2833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 10 07:53:38.372649 kubelet[2833]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 07:53:38.373690 kubelet[2833]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 10 07:53:38.374007 kubelet[2833]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 07:53:38.374007 kubelet[2833]: I0710 07:53:38.373859 2833 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 10 07:53:38.383716 kubelet[2833]: I0710 07:53:38.383670 2833 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 10 07:53:38.383994 kubelet[2833]: I0710 07:53:38.383904 2833 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 10 07:53:38.384403 kubelet[2833]: I0710 07:53:38.384385 2833 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 10 07:53:38.386479 kubelet[2833]: I0710 07:53:38.386460 2833 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 10 07:53:38.390686 kubelet[2833]: I0710 07:53:38.390639 2833 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 10 07:53:38.417844 kubelet[2833]: I0710 07:53:38.417453 2833 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 10 07:53:38.435990 kubelet[2833]: I0710 07:53:38.435450 2833 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 10 07:53:38.435990 kubelet[2833]: I0710 07:53:38.435644 2833 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 10 07:53:38.435990 kubelet[2833]: I0710 07:53:38.435793 2833 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 10 07:53:38.438321 kubelet[2833]: I0710 07:53:38.435832 2833 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4391-0-0-n-fdb14ef6d8.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 10 07:53:38.438321 kubelet[2833]: I0710 07:53:38.438169 2833 topology_manager.go:138] "Creating topology manager with none policy"
Jul 10 07:53:38.438321 kubelet[2833]: I0710 07:53:38.438193 2833 container_manager_linux.go:300] "Creating device plugin manager"
Jul 10 07:53:38.439523 kubelet[2833]: I0710 07:53:38.438779 2833 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 07:53:38.439523 kubelet[2833]: I0710 07:53:38.439408 2833 kubelet.go:408] "Attempting to sync node with API server"
Jul 10 07:53:38.442633 kubelet[2833]: I0710 07:53:38.441568 2833 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 10 07:53:38.442633 kubelet[2833]: I0710 07:53:38.441657 2833 kubelet.go:314] "Adding apiserver pod source"
Jul 10 07:53:38.442633 kubelet[2833]: I0710 07:53:38.441682 2833 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 10 07:53:38.449703 kubelet[2833]: I0710 07:53:38.449533 2833 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Jul 10 07:53:38.452366 kubelet[2833]: I0710 07:53:38.452349 2833 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 10 07:53:38.454723 kubelet[2833]: I0710 07:53:38.454704 2833 server.go:1274] "Started kubelet"
Jul 10 07:53:38.465804 kubelet[2833]: I0710 07:53:38.465711 2833 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 10 07:53:38.470430 kubelet[2833]: I0710 07:53:38.470122 2833 server.go:449] "Adding debug handlers to kubelet server"
Jul 10 07:53:38.472683 kubelet[2833]: I0710 07:53:38.472620 2833 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 10 07:53:38.476654 kubelet[2833]: I0710 07:53:38.476633 2833 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 10 07:53:38.497919 kubelet[2833]: I0710 07:53:38.495876 2833 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 10 07:53:38.497919 kubelet[2833]: I0710 07:53:38.477451 2833 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 10 07:53:38.497919 kubelet[2833]: I0710 07:53:38.496904 2833 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 10 07:53:38.497919 kubelet[2833]: I0710 07:53:38.487897 2833 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 10 07:53:38.497919 kubelet[2833]: I0710 07:53:38.497250 2833 reconciler.go:26] "Reconciler: start to sync state"
Jul 10 07:53:38.507474 kubelet[2833]: I0710 07:53:38.507393 2833 factory.go:221] Registration of the systemd container factory successfully
Jul 10 07:53:38.508186 kubelet[2833]: I0710 07:53:38.508058 2833 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 10 07:53:38.514063 kubelet[2833]: E0710 07:53:38.513539 2833 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 10 07:53:38.520748 kubelet[2833]: I0710 07:53:38.520603 2833 factory.go:221] Registration of the containerd container factory successfully
Jul 10 07:53:38.544103 kubelet[2833]: I0710 07:53:38.543945 2833 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 10 07:53:38.552458 kubelet[2833]: I0710 07:53:38.552317 2833 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 10 07:53:38.552663 kubelet[2833]: I0710 07:53:38.552478 2833 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 10 07:53:38.552663 kubelet[2833]: I0710 07:53:38.552518 2833 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 10 07:53:38.552663 kubelet[2833]: E0710 07:53:38.552612 2833 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 10 07:53:38.624576 kubelet[2833]: I0710 07:53:38.624329 2833 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 10 07:53:38.624576 kubelet[2833]: I0710 07:53:38.624352 2833 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 10 07:53:38.624576 kubelet[2833]: I0710 07:53:38.624382 2833 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 07:53:38.625085 kubelet[2833]: I0710 07:53:38.625000 2833 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 10 07:53:38.625085 kubelet[2833]: I0710 07:53:38.625018 2833 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 10 07:53:38.625085 kubelet[2833]: I0710 07:53:38.625048 2833 policy_none.go:49] "None policy: Start"
Jul 10 07:53:38.626606 kubelet[2833]: I0710 07:53:38.626043 2833 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 10 07:53:38.626606 kubelet[2833]: I0710 07:53:38.626096 2833 state_mem.go:35] "Initializing new in-memory state store"
Jul 10 07:53:38.626606 kubelet[2833]: I0710 07:53:38.626245 2833 state_mem.go:75] "Updated machine memory state"
Jul 10 07:53:38.639668 kubelet[2833]: I0710 07:53:38.639628 2833 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 10 07:53:38.644302 kubelet[2833]: I0710 07:53:38.644260 2833 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 10 07:53:38.644697 kubelet[2833]: I0710 07:53:38.644561 2833 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 10 07:53:38.649336 kubelet[2833]: I0710 07:53:38.649288 2833 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 10 07:53:38.690738 kubelet[2833]: W0710 07:53:38.689426 2833 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 10 07:53:38.692259 kubelet[2833]: W0710 07:53:38.689722 2833 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 10 07:53:38.693637 kubelet[2833]: E0710 07:53:38.692586 2833 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4391-0-0-n-fdb14ef6d8.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:38.693637 kubelet[2833]: W0710 07:53:38.691221 2833 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 10 07:53:38.702836 kubelet[2833]: I0710 07:53:38.701848 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f2126965e94e01c570d47260dd27edb-ca-certs\") pod \"kube-apiserver-ci-4391-0-0-n-fdb14ef6d8.novalocal\" (UID: \"4f2126965e94e01c570d47260dd27edb\") " pod="kube-system/kube-apiserver-ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:38.702836 kubelet[2833]: I0710 07:53:38.702497 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c4cd7c110ee20037243a111b3cbe07f-k8s-certs\") pod \"kube-controller-manager-ci-4391-0-0-n-fdb14ef6d8.novalocal\" (UID: \"3c4cd7c110ee20037243a111b3cbe07f\") " pod="kube-system/kube-controller-manager-ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:38.703414 kubelet[2833]: I0710 07:53:38.703236 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c4cd7c110ee20037243a111b3cbe07f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4391-0-0-n-fdb14ef6d8.novalocal\" (UID: \"3c4cd7c110ee20037243a111b3cbe07f\") " pod="kube-system/kube-controller-manager-ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:38.703519 kubelet[2833]: I0710 07:53:38.703408 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cc6936fdb591d50d46c2c5c4d5c57ff4-kubeconfig\") pod \"kube-scheduler-ci-4391-0-0-n-fdb14ef6d8.novalocal\" (UID: \"cc6936fdb591d50d46c2c5c4d5c57ff4\") " pod="kube-system/kube-scheduler-ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:38.703759 kubelet[2833]: I0710 07:53:38.703688 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f2126965e94e01c570d47260dd27edb-k8s-certs\") pod \"kube-apiserver-ci-4391-0-0-n-fdb14ef6d8.novalocal\" (UID: \"4f2126965e94e01c570d47260dd27edb\") " pod="kube-system/kube-apiserver-ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:38.704001 kubelet[2833]: I0710 07:53:38.703745 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f2126965e94e01c570d47260dd27edb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4391-0-0-n-fdb14ef6d8.novalocal\" (UID: \"4f2126965e94e01c570d47260dd27edb\") " pod="kube-system/kube-apiserver-ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:38.704001 kubelet[2833]: I0710 07:53:38.703841 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c4cd7c110ee20037243a111b3cbe07f-ca-certs\") pod \"kube-controller-manager-ci-4391-0-0-n-fdb14ef6d8.novalocal\" (UID: \"3c4cd7c110ee20037243a111b3cbe07f\") " pod="kube-system/kube-controller-manager-ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:38.704001 kubelet[2833]: I0710 07:53:38.703912 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3c4cd7c110ee20037243a111b3cbe07f-flexvolume-dir\") pod \"kube-controller-manager-ci-4391-0-0-n-fdb14ef6d8.novalocal\" (UID: \"3c4cd7c110ee20037243a111b3cbe07f\") " pod="kube-system/kube-controller-manager-ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:38.704001 kubelet[2833]: I0710 07:53:38.703948 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3c4cd7c110ee20037243a111b3cbe07f-kubeconfig\") pod \"kube-controller-manager-ci-4391-0-0-n-fdb14ef6d8.novalocal\" (UID: \"3c4cd7c110ee20037243a111b3cbe07f\") " pod="kube-system/kube-controller-manager-ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:38.763847 kubelet[2833]: I0710 07:53:38.763703 2833 kubelet_node_status.go:72] "Attempting to register node" node="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:38.782297 kubelet[2833]: I0710 07:53:38.782241 2833 kubelet_node_status.go:111] "Node was previously registered" node="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:38.782504 kubelet[2833]: I0710 07:53:38.782479 2833 kubelet_node_status.go:75] "Successfully registered node" node="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:39.447350 kubelet[2833]: I0710 07:53:39.447231 2833 apiserver.go:52] "Watching apiserver"
Jul 10 07:53:39.497212 kubelet[2833]: I0710 07:53:39.497096 2833 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 10 07:53:39.612611 kubelet[2833]: W0710 07:53:39.612249 2833 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 10 07:53:39.612611 kubelet[2833]: E0710 07:53:39.612334 2833 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4391-0-0-n-fdb14ef6d8.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:53:39.693471 kubelet[2833]: I0710 07:53:39.693140 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4391-0-0-n-fdb14ef6d8.novalocal" podStartSLOduration=1.693090718 podStartE2EDuration="1.693090718s" podCreationTimestamp="2025-07-10 07:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 07:53:39.661005207 +0000 UTC m=+1.372027295" watchObservedRunningTime="2025-07-10 07:53:39.693090718 +0000 UTC m=+1.404112797"
Jul 10 07:53:39.695515 kubelet[2833]: I0710 07:53:39.694782 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4391-0-0-n-fdb14ef6d8.novalocal" podStartSLOduration=4.694764673 podStartE2EDuration="4.694764673s" podCreationTimestamp="2025-07-10 07:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 07:53:39.694596206 +0000 UTC m=+1.405618274" watchObservedRunningTime="2025-07-10 07:53:39.694764673 +0000 UTC m=+1.405786751"
Jul 10 07:53:39.709771 kubelet[2833]: I0710 07:53:39.709606 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4391-0-0-n-fdb14ef6d8.novalocal" podStartSLOduration=1.7095853189999999 podStartE2EDuration="1.709585319s" podCreationTimestamp="2025-07-10 07:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 07:53:39.707474594 +0000 UTC m=+1.418496662" watchObservedRunningTime="2025-07-10 07:53:39.709585319 +0000 UTC m=+1.420607387"
Jul 10 07:53:41.920334 kubelet[2833]: I0710 07:53:41.920098 2833 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 10 07:53:41.924270 containerd[1564]: time="2025-07-10T07:53:41.923392857Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 10 07:53:41.925298 kubelet[2833]: I0710 07:53:41.924094 2833 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 10 07:53:42.923706 systemd[1]: Created slice kubepods-besteffort-pod7e6cb88b_d035_4b6e_9cf7_8a9c1a8aae14.slice - libcontainer container kubepods-besteffort-pod7e6cb88b_d035_4b6e_9cf7_8a9c1a8aae14.slice.
Jul 10 07:53:42.938207 kubelet[2833]: I0710 07:53:42.938127 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e6cb88b-d035-4b6e-9cf7-8a9c1a8aae14-lib-modules\") pod \"kube-proxy-tq7cn\" (UID: \"7e6cb88b-d035-4b6e-9cf7-8a9c1a8aae14\") " pod="kube-system/kube-proxy-tq7cn" Jul 10 07:53:42.941632 kubelet[2833]: I0710 07:53:42.941563 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s925x\" (UniqueName: \"kubernetes.io/projected/7e6cb88b-d035-4b6e-9cf7-8a9c1a8aae14-kube-api-access-s925x\") pod \"kube-proxy-tq7cn\" (UID: \"7e6cb88b-d035-4b6e-9cf7-8a9c1a8aae14\") " pod="kube-system/kube-proxy-tq7cn" Jul 10 07:53:42.941845 kubelet[2833]: I0710 07:53:42.941785 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7e6cb88b-d035-4b6e-9cf7-8a9c1a8aae14-kube-proxy\") pod \"kube-proxy-tq7cn\" (UID: \"7e6cb88b-d035-4b6e-9cf7-8a9c1a8aae14\") " pod="kube-system/kube-proxy-tq7cn" Jul 10 07:53:42.941845 kubelet[2833]: I0710 07:53:42.941820 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e6cb88b-d035-4b6e-9cf7-8a9c1a8aae14-xtables-lock\") pod \"kube-proxy-tq7cn\" (UID: \"7e6cb88b-d035-4b6e-9cf7-8a9c1a8aae14\") " pod="kube-system/kube-proxy-tq7cn" Jul 10 07:53:43.009219 systemd[1]: Created slice kubepods-besteffort-pod5a789180_b392_462b_99c0_7e066ecfa04c.slice - libcontainer container kubepods-besteffort-pod5a789180_b392_462b_99c0_7e066ecfa04c.slice. 
Jul 10 07:53:43.044143 kubelet[2833]: I0710 07:53:43.043002 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5a789180-b392-462b-99c0-7e066ecfa04c-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-l75jh\" (UID: \"5a789180-b392-462b-99c0-7e066ecfa04c\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-l75jh" Jul 10 07:53:43.044143 kubelet[2833]: I0710 07:53:43.043072 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxw5t\" (UniqueName: \"kubernetes.io/projected/5a789180-b392-462b-99c0-7e066ecfa04c-kube-api-access-sxw5t\") pod \"tigera-operator-5bf8dfcb4-l75jh\" (UID: \"5a789180-b392-462b-99c0-7e066ecfa04c\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-l75jh" Jul 10 07:53:43.241646 containerd[1564]: time="2025-07-10T07:53:43.240112428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tq7cn,Uid:7e6cb88b-d035-4b6e-9cf7-8a9c1a8aae14,Namespace:kube-system,Attempt:0,}" Jul 10 07:53:43.313929 containerd[1564]: time="2025-07-10T07:53:43.313811402Z" level=info msg="connecting to shim 1c00e43a501106da01d86d5788e7552ad7fc41708ceb7d2e2d05cab546393418" address="unix:///run/containerd/s/3d0cbe0973eb8155d365624f9074f68b8b4de3dd087b0312fcdc509ad401ebb0" namespace=k8s.io protocol=ttrpc version=3 Jul 10 07:53:43.318690 containerd[1564]: time="2025-07-10T07:53:43.318550621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-l75jh,Uid:5a789180-b392-462b-99c0-7e066ecfa04c,Namespace:tigera-operator,Attempt:0,}" Jul 10 07:53:43.369001 containerd[1564]: time="2025-07-10T07:53:43.368858256Z" level=info msg="connecting to shim 480dc8764692207d09161ea5dd3c34450b3f13d8c7a82df253e815003e9038cc" address="unix:///run/containerd/s/4d7b2c185875ff3f9039d397b526939d4357d9aff5b3fb11d95396ab82f94747" namespace=k8s.io protocol=ttrpc version=3 Jul 10 07:53:43.378331 
systemd[1]: Started cri-containerd-1c00e43a501106da01d86d5788e7552ad7fc41708ceb7d2e2d05cab546393418.scope - libcontainer container 1c00e43a501106da01d86d5788e7552ad7fc41708ceb7d2e2d05cab546393418. Jul 10 07:53:43.410220 systemd[1]: Started cri-containerd-480dc8764692207d09161ea5dd3c34450b3f13d8c7a82df253e815003e9038cc.scope - libcontainer container 480dc8764692207d09161ea5dd3c34450b3f13d8c7a82df253e815003e9038cc. Jul 10 07:53:43.432159 containerd[1564]: time="2025-07-10T07:53:43.432101930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tq7cn,Uid:7e6cb88b-d035-4b6e-9cf7-8a9c1a8aae14,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c00e43a501106da01d86d5788e7552ad7fc41708ceb7d2e2d05cab546393418\"" Jul 10 07:53:43.446178 containerd[1564]: time="2025-07-10T07:53:43.445895809Z" level=info msg="CreateContainer within sandbox \"1c00e43a501106da01d86d5788e7552ad7fc41708ceb7d2e2d05cab546393418\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 07:53:43.476978 containerd[1564]: time="2025-07-10T07:53:43.476885183Z" level=info msg="Container 94edc374832a95248264e898008b3c7e798a60fb1dc7e3fe0d799c567f46ded5: CDI devices from CRI Config.CDIDevices: []" Jul 10 07:53:43.496140 containerd[1564]: time="2025-07-10T07:53:43.495561389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-l75jh,Uid:5a789180-b392-462b-99c0-7e066ecfa04c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"480dc8764692207d09161ea5dd3c34450b3f13d8c7a82df253e815003e9038cc\"" Jul 10 07:53:43.496804 containerd[1564]: time="2025-07-10T07:53:43.496746795Z" level=info msg="CreateContainer within sandbox \"1c00e43a501106da01d86d5788e7552ad7fc41708ceb7d2e2d05cab546393418\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"94edc374832a95248264e898008b3c7e798a60fb1dc7e3fe0d799c567f46ded5\"" Jul 10 07:53:43.498190 containerd[1564]: time="2025-07-10T07:53:43.498142035Z" level=info msg="StartContainer for 
\"94edc374832a95248264e898008b3c7e798a60fb1dc7e3fe0d799c567f46ded5\"" Jul 10 07:53:43.500349 containerd[1564]: time="2025-07-10T07:53:43.500250573Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 10 07:53:43.511609 containerd[1564]: time="2025-07-10T07:53:43.511216142Z" level=info msg="connecting to shim 94edc374832a95248264e898008b3c7e798a60fb1dc7e3fe0d799c567f46ded5" address="unix:///run/containerd/s/3d0cbe0973eb8155d365624f9074f68b8b4de3dd087b0312fcdc509ad401ebb0" protocol=ttrpc version=3 Jul 10 07:53:43.542133 systemd[1]: Started cri-containerd-94edc374832a95248264e898008b3c7e798a60fb1dc7e3fe0d799c567f46ded5.scope - libcontainer container 94edc374832a95248264e898008b3c7e798a60fb1dc7e3fe0d799c567f46ded5. Jul 10 07:53:43.600926 containerd[1564]: time="2025-07-10T07:53:43.600837712Z" level=info msg="StartContainer for \"94edc374832a95248264e898008b3c7e798a60fb1dc7e3fe0d799c567f46ded5\" returns successfully" Jul 10 07:53:43.633414 kubelet[2833]: I0710 07:53:43.633277 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tq7cn" podStartSLOduration=1.632942029 podStartE2EDuration="1.632942029s" podCreationTimestamp="2025-07-10 07:53:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 07:53:43.63238827 +0000 UTC m=+5.343410368" watchObservedRunningTime="2025-07-10 07:53:43.632942029 +0000 UTC m=+5.343964097" Jul 10 07:53:45.571466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2429803354.mount: Deactivated successfully. 
Jul 10 07:53:46.354834 containerd[1564]: time="2025-07-10T07:53:46.354635599Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:53:46.357325 containerd[1564]: time="2025-07-10T07:53:46.357250248Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 10 07:53:46.358651 containerd[1564]: time="2025-07-10T07:53:46.358586206Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:53:46.362299 containerd[1564]: time="2025-07-10T07:53:46.362223273Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:53:46.363229 containerd[1564]: time="2025-07-10T07:53:46.362935210Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.862574429s" Jul 10 07:53:46.363229 containerd[1564]: time="2025-07-10T07:53:46.363072357Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 10 07:53:46.368151 containerd[1564]: time="2025-07-10T07:53:46.368102780Z" level=info msg="CreateContainer within sandbox \"480dc8764692207d09161ea5dd3c34450b3f13d8c7a82df253e815003e9038cc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 10 07:53:46.387873 containerd[1564]: time="2025-07-10T07:53:46.387812720Z" level=info msg="Container 
42175d9e1b9a83def05fc12ef2fbf8edd080809cd9021f03c77ca90f61c278a6: CDI devices from CRI Config.CDIDevices: []" Jul 10 07:53:46.391457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1285491413.mount: Deactivated successfully. Jul 10 07:53:46.409007 containerd[1564]: time="2025-07-10T07:53:46.407817553Z" level=info msg="CreateContainer within sandbox \"480dc8764692207d09161ea5dd3c34450b3f13d8c7a82df253e815003e9038cc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"42175d9e1b9a83def05fc12ef2fbf8edd080809cd9021f03c77ca90f61c278a6\"" Jul 10 07:53:46.414777 containerd[1564]: time="2025-07-10T07:53:46.414711354Z" level=info msg="StartContainer for \"42175d9e1b9a83def05fc12ef2fbf8edd080809cd9021f03c77ca90f61c278a6\"" Jul 10 07:53:46.416586 containerd[1564]: time="2025-07-10T07:53:46.416536620Z" level=info msg="connecting to shim 42175d9e1b9a83def05fc12ef2fbf8edd080809cd9021f03c77ca90f61c278a6" address="unix:///run/containerd/s/4d7b2c185875ff3f9039d397b526939d4357d9aff5b3fb11d95396ab82f94747" protocol=ttrpc version=3 Jul 10 07:53:46.458136 systemd[1]: Started cri-containerd-42175d9e1b9a83def05fc12ef2fbf8edd080809cd9021f03c77ca90f61c278a6.scope - libcontainer container 42175d9e1b9a83def05fc12ef2fbf8edd080809cd9021f03c77ca90f61c278a6. 
Jul 10 07:53:46.498068 containerd[1564]: time="2025-07-10T07:53:46.498017225Z" level=info msg="StartContainer for \"42175d9e1b9a83def05fc12ef2fbf8edd080809cd9021f03c77ca90f61c278a6\" returns successfully" Jul 10 07:53:46.704578 kubelet[2833]: I0710 07:53:46.702501 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-l75jh" podStartSLOduration=1.8362994929999998 podStartE2EDuration="4.7023349s" podCreationTimestamp="2025-07-10 07:53:42 +0000 UTC" firstStartedPulling="2025-07-10 07:53:43.498948288 +0000 UTC m=+5.209970356" lastFinishedPulling="2025-07-10 07:53:46.364983695 +0000 UTC m=+8.076005763" observedRunningTime="2025-07-10 07:53:46.664556683 +0000 UTC m=+8.375578801" watchObservedRunningTime="2025-07-10 07:53:46.7023349 +0000 UTC m=+8.413357058" Jul 10 07:53:51.625378 sudo[1853]: pam_unix(sudo:session): session closed for user root Jul 10 07:53:51.838799 sshd[1852]: Connection closed by 172.24.4.1 port 58592 Jul 10 07:53:51.842067 sshd-session[1849]: pam_unix(sshd:session): session closed for user core Jul 10 07:53:51.856365 systemd[1]: sshd@8-172.24.4.91:22-172.24.4.1:58592.service: Deactivated successfully. Jul 10 07:53:51.864873 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 07:53:51.866300 systemd[1]: session-11.scope: Consumed 8.099s CPU time, 224.7M memory peak. Jul 10 07:53:51.870034 systemd-logind[1534]: Session 11 logged out. Waiting for processes to exit. Jul 10 07:53:51.872865 systemd-logind[1534]: Removed session 11. Jul 10 07:53:56.458352 systemd[1]: Created slice kubepods-besteffort-pod57bde409_5f77_4231_bb25_9ef5b04bf5b2.slice - libcontainer container kubepods-besteffort-pod57bde409_5f77_4231_bb25_9ef5b04bf5b2.slice. 
Jul 10 07:53:56.555383 kubelet[2833]: I0710 07:53:56.555281 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57bde409-5f77-4231-bb25-9ef5b04bf5b2-tigera-ca-bundle\") pod \"calico-typha-544986ff9c-gmzcz\" (UID: \"57bde409-5f77-4231-bb25-9ef5b04bf5b2\") " pod="calico-system/calico-typha-544986ff9c-gmzcz" Jul 10 07:53:56.556283 kubelet[2833]: I0710 07:53:56.556094 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/57bde409-5f77-4231-bb25-9ef5b04bf5b2-typha-certs\") pod \"calico-typha-544986ff9c-gmzcz\" (UID: \"57bde409-5f77-4231-bb25-9ef5b04bf5b2\") " pod="calico-system/calico-typha-544986ff9c-gmzcz" Jul 10 07:53:56.556283 kubelet[2833]: I0710 07:53:56.556212 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnmd9\" (UniqueName: \"kubernetes.io/projected/57bde409-5f77-4231-bb25-9ef5b04bf5b2-kube-api-access-xnmd9\") pod \"calico-typha-544986ff9c-gmzcz\" (UID: \"57bde409-5f77-4231-bb25-9ef5b04bf5b2\") " pod="calico-system/calico-typha-544986ff9c-gmzcz" Jul 10 07:53:56.635046 systemd[1]: Created slice kubepods-besteffort-podaeb32af3_5fca_4e68_971a_632210d0687c.slice - libcontainer container kubepods-besteffort-podaeb32af3_5fca_4e68_971a_632210d0687c.slice. 
Jul 10 07:53:56.657669 kubelet[2833]: I0710 07:53:56.657153 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aeb32af3-5fca-4e68-971a-632210d0687c-xtables-lock\") pod \"calico-node-gnx7g\" (UID: \"aeb32af3-5fca-4e68-971a-632210d0687c\") " pod="calico-system/calico-node-gnx7g" Jul 10 07:53:56.657669 kubelet[2833]: I0710 07:53:56.657208 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/aeb32af3-5fca-4e68-971a-632210d0687c-cni-log-dir\") pod \"calico-node-gnx7g\" (UID: \"aeb32af3-5fca-4e68-971a-632210d0687c\") " pod="calico-system/calico-node-gnx7g" Jul 10 07:53:56.657669 kubelet[2833]: I0710 07:53:56.657262 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/aeb32af3-5fca-4e68-971a-632210d0687c-flexvol-driver-host\") pod \"calico-node-gnx7g\" (UID: \"aeb32af3-5fca-4e68-971a-632210d0687c\") " pod="calico-system/calico-node-gnx7g" Jul 10 07:53:56.657669 kubelet[2833]: I0710 07:53:56.657292 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aeb32af3-5fca-4e68-971a-632210d0687c-lib-modules\") pod \"calico-node-gnx7g\" (UID: \"aeb32af3-5fca-4e68-971a-632210d0687c\") " pod="calico-system/calico-node-gnx7g" Jul 10 07:53:56.657669 kubelet[2833]: I0710 07:53:56.657318 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gx4v\" (UniqueName: \"kubernetes.io/projected/aeb32af3-5fca-4e68-971a-632210d0687c-kube-api-access-7gx4v\") pod \"calico-node-gnx7g\" (UID: \"aeb32af3-5fca-4e68-971a-632210d0687c\") " pod="calico-system/calico-node-gnx7g" Jul 10 07:53:56.657996 kubelet[2833]: I0710 
07:53:56.657449 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/aeb32af3-5fca-4e68-971a-632210d0687c-var-run-calico\") pod \"calico-node-gnx7g\" (UID: \"aeb32af3-5fca-4e68-971a-632210d0687c\") " pod="calico-system/calico-node-gnx7g" Jul 10 07:53:56.657996 kubelet[2833]: I0710 07:53:56.657500 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/aeb32af3-5fca-4e68-971a-632210d0687c-cni-bin-dir\") pod \"calico-node-gnx7g\" (UID: \"aeb32af3-5fca-4e68-971a-632210d0687c\") " pod="calico-system/calico-node-gnx7g" Jul 10 07:53:56.657996 kubelet[2833]: I0710 07:53:56.657790 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/aeb32af3-5fca-4e68-971a-632210d0687c-node-certs\") pod \"calico-node-gnx7g\" (UID: \"aeb32af3-5fca-4e68-971a-632210d0687c\") " pod="calico-system/calico-node-gnx7g" Jul 10 07:53:56.657996 kubelet[2833]: I0710 07:53:56.657824 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/aeb32af3-5fca-4e68-971a-632210d0687c-policysync\") pod \"calico-node-gnx7g\" (UID: \"aeb32af3-5fca-4e68-971a-632210d0687c\") " pod="calico-system/calico-node-gnx7g" Jul 10 07:53:56.657996 kubelet[2833]: I0710 07:53:56.657865 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aeb32af3-5fca-4e68-971a-632210d0687c-tigera-ca-bundle\") pod \"calico-node-gnx7g\" (UID: \"aeb32af3-5fca-4e68-971a-632210d0687c\") " pod="calico-system/calico-node-gnx7g" Jul 10 07:53:56.658167 kubelet[2833]: I0710 07:53:56.657906 2833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/aeb32af3-5fca-4e68-971a-632210d0687c-cni-net-dir\") pod \"calico-node-gnx7g\" (UID: \"aeb32af3-5fca-4e68-971a-632210d0687c\") " pod="calico-system/calico-node-gnx7g" Jul 10 07:53:56.658167 kubelet[2833]: I0710 07:53:56.657935 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aeb32af3-5fca-4e68-971a-632210d0687c-var-lib-calico\") pod \"calico-node-gnx7g\" (UID: \"aeb32af3-5fca-4e68-971a-632210d0687c\") " pod="calico-system/calico-node-gnx7g" Jul 10 07:53:56.765444 kubelet[2833]: E0710 07:53:56.764421 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.765444 kubelet[2833]: W0710 07:53:56.765012 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.765444 kubelet[2833]: E0710 07:53:56.765097 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:56.770430 containerd[1564]: time="2025-07-10T07:53:56.770351922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-544986ff9c-gmzcz,Uid:57bde409-5f77-4231-bb25-9ef5b04bf5b2,Namespace:calico-system,Attempt:0,}" Jul 10 07:53:56.775601 kubelet[2833]: E0710 07:53:56.775495 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.775601 kubelet[2833]: W0710 07:53:56.775539 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.776265 kubelet[2833]: E0710 07:53:56.775564 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:56.793079 kubelet[2833]: E0710 07:53:56.792982 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.793079 kubelet[2833]: W0710 07:53:56.793018 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.793079 kubelet[2833]: E0710 07:53:56.793044 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:56.828771 containerd[1564]: time="2025-07-10T07:53:56.828693063Z" level=info msg="connecting to shim a96dc83f34c459ee7d0d316c768b5bb164abf695e4a217a247a8c6ee416f0197" address="unix:///run/containerd/s/7449dac75553d18e57d602e1c190edf991539f2a0a86bfd990b85948cac7922a" namespace=k8s.io protocol=ttrpc version=3 Jul 10 07:53:56.896257 systemd[1]: Started cri-containerd-a96dc83f34c459ee7d0d316c768b5bb164abf695e4a217a247a8c6ee416f0197.scope - libcontainer container a96dc83f34c459ee7d0d316c768b5bb164abf695e4a217a247a8c6ee416f0197. Jul 10 07:53:56.904306 kubelet[2833]: E0710 07:53:56.904222 2833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pwddw" podUID="6b536d42-7d30-4147-a6e2-348f9b0f4c7a" Jul 10 07:53:56.941168 containerd[1564]: time="2025-07-10T07:53:56.941045139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gnx7g,Uid:aeb32af3-5fca-4e68-971a-632210d0687c,Namespace:calico-system,Attempt:0,}" Jul 10 07:53:56.944943 kubelet[2833]: E0710 07:53:56.944898 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.945183 kubelet[2833]: W0710 07:53:56.944944 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.945183 kubelet[2833]: E0710 07:53:56.945171 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:56.946266 kubelet[2833]: E0710 07:53:56.946240 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.946266 kubelet[2833]: W0710 07:53:56.946263 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.946432 kubelet[2833]: E0710 07:53:56.946282 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:56.947785 kubelet[2833]: E0710 07:53:56.947341 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.947785 kubelet[2833]: W0710 07:53:56.947361 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.947785 kubelet[2833]: E0710 07:53:56.947377 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:56.950018 kubelet[2833]: E0710 07:53:56.949982 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.950018 kubelet[2833]: W0710 07:53:56.950007 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.950178 kubelet[2833]: E0710 07:53:56.950027 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:56.951172 kubelet[2833]: E0710 07:53:56.950361 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.951172 kubelet[2833]: W0710 07:53:56.950378 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.951172 kubelet[2833]: E0710 07:53:56.950390 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:56.951172 kubelet[2833]: E0710 07:53:56.950539 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.951172 kubelet[2833]: W0710 07:53:56.950548 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.951172 kubelet[2833]: E0710 07:53:56.950558 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:56.951172 kubelet[2833]: E0710 07:53:56.950692 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.951172 kubelet[2833]: W0710 07:53:56.950702 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.951172 kubelet[2833]: E0710 07:53:56.950711 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:56.951172 kubelet[2833]: E0710 07:53:56.950893 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.952681 kubelet[2833]: W0710 07:53:56.950904 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.952681 kubelet[2833]: E0710 07:53:56.950914 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:56.952681 kubelet[2833]: E0710 07:53:56.951229 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.952681 kubelet[2833]: W0710 07:53:56.951241 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.952681 kubelet[2833]: E0710 07:53:56.951253 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:56.952681 kubelet[2833]: E0710 07:53:56.952169 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.952681 kubelet[2833]: W0710 07:53:56.952180 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.952681 kubelet[2833]: E0710 07:53:56.952191 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:56.953784 kubelet[2833]: E0710 07:53:56.952988 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.953784 kubelet[2833]: W0710 07:53:56.953000 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.953784 kubelet[2833]: E0710 07:53:56.953011 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:56.953784 kubelet[2833]: E0710 07:53:56.953159 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.953784 kubelet[2833]: W0710 07:53:56.953169 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.953784 kubelet[2833]: E0710 07:53:56.953178 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:56.953784 kubelet[2833]: E0710 07:53:56.953739 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.953784 kubelet[2833]: W0710 07:53:56.953751 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.953784 kubelet[2833]: E0710 07:53:56.953762 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:56.955362 kubelet[2833]: E0710 07:53:56.953923 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.955362 kubelet[2833]: W0710 07:53:56.953935 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.955362 kubelet[2833]: E0710 07:53:56.953945 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:56.955362 kubelet[2833]: E0710 07:53:56.954584 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.955362 kubelet[2833]: W0710 07:53:56.954596 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.955362 kubelet[2833]: E0710 07:53:56.954608 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:56.955362 kubelet[2833]: E0710 07:53:56.954758 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.955362 kubelet[2833]: W0710 07:53:56.954771 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.955362 kubelet[2833]: E0710 07:53:56.954781 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:56.955362 kubelet[2833]: E0710 07:53:56.954935 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.957808 kubelet[2833]: W0710 07:53:56.954945 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.957808 kubelet[2833]: E0710 07:53:56.954993 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:56.957808 kubelet[2833]: E0710 07:53:56.955853 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.957808 kubelet[2833]: W0710 07:53:56.955865 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.957808 kubelet[2833]: E0710 07:53:56.955877 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:56.957808 kubelet[2833]: E0710 07:53:56.956652 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.957808 kubelet[2833]: W0710 07:53:56.956663 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.957808 kubelet[2833]: E0710 07:53:56.956676 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:56.957808 kubelet[2833]: E0710 07:53:56.956852 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.957808 kubelet[2833]: W0710 07:53:56.956863 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.959621 kubelet[2833]: E0710 07:53:56.956872 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:56.963375 kubelet[2833]: E0710 07:53:56.963321 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.963375 kubelet[2833]: W0710 07:53:56.963353 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.963553 kubelet[2833]: E0710 07:53:56.963398 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:56.963553 kubelet[2833]: I0710 07:53:56.963445 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b536d42-7d30-4147-a6e2-348f9b0f4c7a-kubelet-dir\") pod \"csi-node-driver-pwddw\" (UID: \"6b536d42-7d30-4147-a6e2-348f9b0f4c7a\") " pod="calico-system/csi-node-driver-pwddw" Jul 10 07:53:56.964005 kubelet[2833]: E0710 07:53:56.963709 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.964005 kubelet[2833]: W0710 07:53:56.963730 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.964866 kubelet[2833]: E0710 07:53:56.964548 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:56.964866 kubelet[2833]: I0710 07:53:56.964591 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6b536d42-7d30-4147-a6e2-348f9b0f4c7a-varrun\") pod \"csi-node-driver-pwddw\" (UID: \"6b536d42-7d30-4147-a6e2-348f9b0f4c7a\") " pod="calico-system/csi-node-driver-pwddw" Jul 10 07:53:56.965120 kubelet[2833]: E0710 07:53:56.965093 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.965207 kubelet[2833]: W0710 07:53:56.965189 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.965530 kubelet[2833]: E0710 07:53:56.965472 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:56.965875 kubelet[2833]: E0710 07:53:56.965833 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.965875 kubelet[2833]: W0710 07:53:56.965870 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.966799 kubelet[2833]: E0710 07:53:56.965911 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:56.966799 kubelet[2833]: I0710 07:53:56.965945 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6b536d42-7d30-4147-a6e2-348f9b0f4c7a-registration-dir\") pod \"csi-node-driver-pwddw\" (UID: \"6b536d42-7d30-4147-a6e2-348f9b0f4c7a\") " pod="calico-system/csi-node-driver-pwddw" Jul 10 07:53:56.966799 kubelet[2833]: E0710 07:53:56.966265 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.966799 kubelet[2833]: W0710 07:53:56.966279 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.966799 kubelet[2833]: E0710 07:53:56.966304 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:56.966799 kubelet[2833]: E0710 07:53:56.966510 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.966799 kubelet[2833]: W0710 07:53:56.966524 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.966799 kubelet[2833]: E0710 07:53:56.966539 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:56.966799 kubelet[2833]: E0710 07:53:56.966772 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.967529 kubelet[2833]: W0710 07:53:56.966783 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.967529 kubelet[2833]: E0710 07:53:56.966804 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:56.968927 kubelet[2833]: E0710 07:53:56.968892 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.968927 kubelet[2833]: W0710 07:53:56.968924 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.969122 kubelet[2833]: E0710 07:53:56.968950 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:56.970493 kubelet[2833]: E0710 07:53:56.970459 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.970493 kubelet[2833]: W0710 07:53:56.970485 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.971027 kubelet[2833]: E0710 07:53:56.970509 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:56.971027 kubelet[2833]: E0710 07:53:56.970918 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.971027 kubelet[2833]: W0710 07:53:56.970930 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.971027 kubelet[2833]: E0710 07:53:56.970944 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:56.971027 kubelet[2833]: I0710 07:53:56.971011 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q85zh\" (UniqueName: \"kubernetes.io/projected/6b536d42-7d30-4147-a6e2-348f9b0f4c7a-kube-api-access-q85zh\") pod \"csi-node-driver-pwddw\" (UID: \"6b536d42-7d30-4147-a6e2-348f9b0f4c7a\") " pod="calico-system/csi-node-driver-pwddw" Jul 10 07:53:56.972297 kubelet[2833]: E0710 07:53:56.972271 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.972297 kubelet[2833]: W0710 07:53:56.972291 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.972401 kubelet[2833]: E0710 07:53:56.972326 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:56.972401 kubelet[2833]: I0710 07:53:56.972352 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6b536d42-7d30-4147-a6e2-348f9b0f4c7a-socket-dir\") pod \"csi-node-driver-pwddw\" (UID: \"6b536d42-7d30-4147-a6e2-348f9b0f4c7a\") " pod="calico-system/csi-node-driver-pwddw" Jul 10 07:53:56.972845 kubelet[2833]: E0710 07:53:56.972816 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.972845 kubelet[2833]: W0710 07:53:56.972843 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.973179 kubelet[2833]: E0710 07:53:56.972872 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:56.973343 kubelet[2833]: E0710 07:53:56.973230 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.973343 kubelet[2833]: W0710 07:53:56.973253 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.973343 kubelet[2833]: E0710 07:53:56.973309 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:56.974476 kubelet[2833]: E0710 07:53:56.973828 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.974476 kubelet[2833]: W0710 07:53:56.973850 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.974476 kubelet[2833]: E0710 07:53:56.974106 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:56.974850 kubelet[2833]: E0710 07:53:56.974825 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:56.974850 kubelet[2833]: W0710 07:53:56.974846 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:56.974945 kubelet[2833]: E0710 07:53:56.974861 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:56.990758 containerd[1564]: time="2025-07-10T07:53:56.990569957Z" level=info msg="connecting to shim 8ab10e788247828b6799c14bfc58611bff4888dc355282475d0554991619e1ff" address="unix:///run/containerd/s/3557534d67c0a5a7a5d5043935b5cdf3b9869598e5562430d8d53915dfdb63dc" namespace=k8s.io protocol=ttrpc version=3 Jul 10 07:53:57.040441 systemd[1]: Started cri-containerd-8ab10e788247828b6799c14bfc58611bff4888dc355282475d0554991619e1ff.scope - libcontainer container 8ab10e788247828b6799c14bfc58611bff4888dc355282475d0554991619e1ff. 
Jul 10 07:53:57.075564 kubelet[2833]: E0710 07:53:57.074353 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.075859 kubelet[2833]: W0710 07:53:57.075833 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.076012 kubelet[2833]: E0710 07:53:57.075994 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:57.076518 kubelet[2833]: E0710 07:53:57.076474 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.076518 kubelet[2833]: W0710 07:53:57.076490 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.076708 kubelet[2833]: E0710 07:53:57.076639 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:57.078550 kubelet[2833]: E0710 07:53:57.078335 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.078550 kubelet[2833]: W0710 07:53:57.078373 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.078550 kubelet[2833]: E0710 07:53:57.078407 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:57.080120 kubelet[2833]: E0710 07:53:57.080084 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.080120 kubelet[2833]: W0710 07:53:57.080110 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.080369 kubelet[2833]: E0710 07:53:57.080132 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:57.081300 kubelet[2833]: E0710 07:53:57.081136 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.081300 kubelet[2833]: W0710 07:53:57.081156 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.081300 kubelet[2833]: E0710 07:53:57.081169 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:57.082212 kubelet[2833]: E0710 07:53:57.082151 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.082212 kubelet[2833]: W0710 07:53:57.082171 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.085984 kubelet[2833]: E0710 07:53:57.085673 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.085984 kubelet[2833]: W0710 07:53:57.085704 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.085984 kubelet[2833]: E0710 07:53:57.085926 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.085984 kubelet[2833]: W0710 07:53:57.085936 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.085984 kubelet[2833]: E0710 07:53:57.085969 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:57.087328 kubelet[2833]: E0710 07:53:57.087290 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.087328 kubelet[2833]: W0710 07:53:57.087309 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.087328 kubelet[2833]: E0710 07:53:57.087331 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:57.087998 kubelet[2833]: E0710 07:53:57.087717 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.087998 kubelet[2833]: W0710 07:53:57.087735 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.087998 kubelet[2833]: E0710 07:53:57.087814 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:57.088761 kubelet[2833]: E0710 07:53:57.088544 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.088761 kubelet[2833]: W0710 07:53:57.088568 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.088761 kubelet[2833]: E0710 07:53:57.088580 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:57.090605 kubelet[2833]: E0710 07:53:57.090569 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:57.090683 kubelet[2833]: E0710 07:53:57.090638 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:57.091315 kubelet[2833]: E0710 07:53:57.091053 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.091315 kubelet[2833]: W0710 07:53:57.091099 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.091315 kubelet[2833]: E0710 07:53:57.091114 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:57.092099 kubelet[2833]: E0710 07:53:57.091715 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.092152 kubelet[2833]: W0710 07:53:57.091950 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.092152 kubelet[2833]: E0710 07:53:57.092143 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:57.093720 kubelet[2833]: E0710 07:53:57.093158 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.093720 kubelet[2833]: W0710 07:53:57.093176 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.093720 kubelet[2833]: E0710 07:53:57.093195 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:57.093927 kubelet[2833]: E0710 07:53:57.093815 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.094777 kubelet[2833]: W0710 07:53:57.094742 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.094849 kubelet[2833]: E0710 07:53:57.094780 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:57.095302 kubelet[2833]: E0710 07:53:57.095270 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.095302 kubelet[2833]: W0710 07:53:57.095289 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.095379 kubelet[2833]: E0710 07:53:57.095308 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:57.096019 containerd[1564]: time="2025-07-10T07:53:57.095582006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-544986ff9c-gmzcz,Uid:57bde409-5f77-4231-bb25-9ef5b04bf5b2,Namespace:calico-system,Attempt:0,} returns sandbox id \"a96dc83f34c459ee7d0d316c768b5bb164abf695e4a217a247a8c6ee416f0197\"" Jul 10 07:53:57.096438 kubelet[2833]: E0710 07:53:57.096264 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.096438 kubelet[2833]: W0710 07:53:57.096287 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.096438 kubelet[2833]: E0710 07:53:57.096300 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:57.098180 kubelet[2833]: E0710 07:53:57.098067 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.098180 kubelet[2833]: W0710 07:53:57.098115 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.098180 kubelet[2833]: E0710 07:53:57.098148 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:57.099032 kubelet[2833]: E0710 07:53:57.098872 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.099032 kubelet[2833]: W0710 07:53:57.098886 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.099032 kubelet[2833]: E0710 07:53:57.098905 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:57.099981 kubelet[2833]: E0710 07:53:57.099505 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.099981 kubelet[2833]: W0710 07:53:57.099527 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.099981 kubelet[2833]: E0710 07:53:57.099542 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:57.100223 kubelet[2833]: E0710 07:53:57.100208 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.100466 kubelet[2833]: W0710 07:53:57.100448 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.100698 kubelet[2833]: E0710 07:53:57.100573 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:57.101920 kubelet[2833]: E0710 07:53:57.101857 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.101920 kubelet[2833]: W0710 07:53:57.101873 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.101920 kubelet[2833]: E0710 07:53:57.101892 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:57.102461 kubelet[2833]: E0710 07:53:57.102267 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.102461 kubelet[2833]: W0710 07:53:57.102281 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.102461 kubelet[2833]: E0710 07:53:57.102298 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:57.102848 kubelet[2833]: E0710 07:53:57.102739 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.103106 kubelet[2833]: W0710 07:53:57.102923 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.103106 kubelet[2833]: E0710 07:53:57.102948 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:57.103677 containerd[1564]: time="2025-07-10T07:53:57.103523476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 10 07:53:57.104389 kubelet[2833]: E0710 07:53:57.104359 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.105999 kubelet[2833]: W0710 07:53:57.104557 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.105999 kubelet[2833]: E0710 07:53:57.104578 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:53:57.123381 kubelet[2833]: E0710 07:53:57.123348 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:53:57.123815 kubelet[2833]: W0710 07:53:57.123795 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:53:57.124023 kubelet[2833]: E0710 07:53:57.123893 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:53:57.194708 containerd[1564]: time="2025-07-10T07:53:57.194196278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gnx7g,Uid:aeb32af3-5fca-4e68-971a-632210d0687c,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ab10e788247828b6799c14bfc58611bff4888dc355282475d0554991619e1ff\"" Jul 10 07:53:58.556379 kubelet[2833]: E0710 07:53:58.555296 2833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pwddw" podUID="6b536d42-7d30-4147-a6e2-348f9b0f4c7a" Jul 10 07:53:59.515364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1216534967.mount: Deactivated successfully. Jul 10 07:54:00.553135 kubelet[2833]: E0710 07:54:00.553030 2833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pwddw" podUID="6b536d42-7d30-4147-a6e2-348f9b0f4c7a" Jul 10 07:54:00.780793 containerd[1564]: time="2025-07-10T07:54:00.780710943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:54:00.782409 containerd[1564]: time="2025-07-10T07:54:00.782288512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 10 07:54:00.783440 containerd[1564]: time="2025-07-10T07:54:00.783401651Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:54:00.790396 containerd[1564]: time="2025-07-10T07:54:00.790340950Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:54:00.791151 containerd[1564]: time="2025-07-10T07:54:00.791105915Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 3.687529821s" Jul 10 07:54:00.791151 containerd[1564]: time="2025-07-10T07:54:00.791141652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 10 07:54:00.794239 containerd[1564]: time="2025-07-10T07:54:00.793948789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 10 07:54:00.815014 containerd[1564]: time="2025-07-10T07:54:00.814514705Z" level=info msg="CreateContainer within sandbox \"a96dc83f34c459ee7d0d316c768b5bb164abf695e4a217a247a8c6ee416f0197\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 10 07:54:00.828309 containerd[1564]: time="2025-07-10T07:54:00.828262608Z" level=info msg="Container c96f19f61e03d61ebc20a6bfaac5871828052b00d3281c7637f9da57e0c35f20: CDI devices from CRI Config.CDIDevices: []" Jul 10 07:54:00.831830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount506173308.mount: Deactivated successfully. 
Jul 10 07:54:00.845694 containerd[1564]: time="2025-07-10T07:54:00.845633810Z" level=info msg="CreateContainer within sandbox \"a96dc83f34c459ee7d0d316c768b5bb164abf695e4a217a247a8c6ee416f0197\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c96f19f61e03d61ebc20a6bfaac5871828052b00d3281c7637f9da57e0c35f20\"" Jul 10 07:54:00.846334 containerd[1564]: time="2025-07-10T07:54:00.846304478Z" level=info msg="StartContainer for \"c96f19f61e03d61ebc20a6bfaac5871828052b00d3281c7637f9da57e0c35f20\"" Jul 10 07:54:00.847650 containerd[1564]: time="2025-07-10T07:54:00.847605229Z" level=info msg="connecting to shim c96f19f61e03d61ebc20a6bfaac5871828052b00d3281c7637f9da57e0c35f20" address="unix:///run/containerd/s/7449dac75553d18e57d602e1c190edf991539f2a0a86bfd990b85948cac7922a" protocol=ttrpc version=3 Jul 10 07:54:00.891657 systemd[1]: Started cri-containerd-c96f19f61e03d61ebc20a6bfaac5871828052b00d3281c7637f9da57e0c35f20.scope - libcontainer container c96f19f61e03d61ebc20a6bfaac5871828052b00d3281c7637f9da57e0c35f20. 
Jul 10 07:54:01.033402 containerd[1564]: time="2025-07-10T07:54:01.033281662Z" level=info msg="StartContainer for \"c96f19f61e03d61ebc20a6bfaac5871828052b00d3281c7637f9da57e0c35f20\" returns successfully" Jul 10 07:54:01.733826 kubelet[2833]: I0710 07:54:01.732984 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-544986ff9c-gmzcz" podStartSLOduration=2.041846274 podStartE2EDuration="5.732937517s" podCreationTimestamp="2025-07-10 07:53:56 +0000 UTC" firstStartedPulling="2025-07-10 07:53:57.102242382 +0000 UTC m=+18.813264460" lastFinishedPulling="2025-07-10 07:54:00.793333635 +0000 UTC m=+22.504355703" observedRunningTime="2025-07-10 07:54:01.731607041 +0000 UTC m=+23.442629109" watchObservedRunningTime="2025-07-10 07:54:01.732937517 +0000 UTC m=+23.443959585" Jul 10 07:54:01.805090 kubelet[2833]: E0710 07:54:01.804643 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.805090 kubelet[2833]: W0710 07:54:01.804700 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.805090 kubelet[2833]: E0710 07:54:01.804789 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:54:01.805698 kubelet[2833]: E0710 07:54:01.805406 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.805698 kubelet[2833]: W0710 07:54:01.805431 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.805698 kubelet[2833]: E0710 07:54:01.805455 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:54:01.806115 kubelet[2833]: E0710 07:54:01.805759 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.806115 kubelet[2833]: W0710 07:54:01.805783 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.806115 kubelet[2833]: E0710 07:54:01.805806 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:54:01.806448 kubelet[2833]: E0710 07:54:01.806183 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.806448 kubelet[2833]: W0710 07:54:01.806208 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.806448 kubelet[2833]: E0710 07:54:01.806231 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:54:01.806913 kubelet[2833]: E0710 07:54:01.806581 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.806913 kubelet[2833]: W0710 07:54:01.806605 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.806913 kubelet[2833]: E0710 07:54:01.806632 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:54:01.807839 kubelet[2833]: E0710 07:54:01.807212 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.807839 kubelet[2833]: W0710 07:54:01.807238 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.807839 kubelet[2833]: E0710 07:54:01.807264 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:54:01.807839 kubelet[2833]: E0710 07:54:01.807565 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.807839 kubelet[2833]: W0710 07:54:01.807588 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.807839 kubelet[2833]: E0710 07:54:01.807610 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:54:01.808931 kubelet[2833]: E0710 07:54:01.808119 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.808931 kubelet[2833]: W0710 07:54:01.808145 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.808931 kubelet[2833]: E0710 07:54:01.808170 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:54:01.808931 kubelet[2833]: E0710 07:54:01.808818 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.808931 kubelet[2833]: W0710 07:54:01.808842 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.808931 kubelet[2833]: E0710 07:54:01.808866 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:54:01.810142 kubelet[2833]: E0710 07:54:01.809649 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.810142 kubelet[2833]: W0710 07:54:01.809676 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.810142 kubelet[2833]: E0710 07:54:01.809702 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:54:01.811329 kubelet[2833]: E0710 07:54:01.810658 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.811329 kubelet[2833]: W0710 07:54:01.810696 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.811329 kubelet[2833]: E0710 07:54:01.810722 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:54:01.811329 kubelet[2833]: E0710 07:54:01.811124 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.811329 kubelet[2833]: W0710 07:54:01.811148 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.811329 kubelet[2833]: E0710 07:54:01.811206 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:54:01.813665 kubelet[2833]: E0710 07:54:01.812219 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.813665 kubelet[2833]: W0710 07:54:01.812246 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.813665 kubelet[2833]: E0710 07:54:01.812271 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:54:01.813665 kubelet[2833]: E0710 07:54:01.812577 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.813665 kubelet[2833]: W0710 07:54:01.812599 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.813665 kubelet[2833]: E0710 07:54:01.812621 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:54:01.813665 kubelet[2833]: E0710 07:54:01.813082 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.813665 kubelet[2833]: W0710 07:54:01.813135 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.813665 kubelet[2833]: E0710 07:54:01.813160 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:54:01.837255 kubelet[2833]: E0710 07:54:01.837188 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.837255 kubelet[2833]: W0710 07:54:01.837235 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.837483 kubelet[2833]: E0710 07:54:01.837269 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:54:01.838000 kubelet[2833]: E0710 07:54:01.837931 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.838072 kubelet[2833]: W0710 07:54:01.838047 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.838103 kubelet[2833]: E0710 07:54:01.838085 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:54:01.838542 kubelet[2833]: E0710 07:54:01.838512 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.838605 kubelet[2833]: W0710 07:54:01.838545 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.838643 kubelet[2833]: E0710 07:54:01.838610 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:54:01.839040 kubelet[2833]: E0710 07:54:01.839005 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.839095 kubelet[2833]: W0710 07:54:01.839043 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.839095 kubelet[2833]: E0710 07:54:01.839070 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:54:01.839553 kubelet[2833]: E0710 07:54:01.839508 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.839553 kubelet[2833]: W0710 07:54:01.839545 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.839638 kubelet[2833]: E0710 07:54:01.839582 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:54:01.839938 kubelet[2833]: E0710 07:54:01.839909 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.840010 kubelet[2833]: W0710 07:54:01.839940 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.840103 kubelet[2833]: E0710 07:54:01.840070 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:54:01.840453 kubelet[2833]: E0710 07:54:01.840424 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.840554 kubelet[2833]: W0710 07:54:01.840457 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.840554 kubelet[2833]: E0710 07:54:01.840491 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:54:01.840886 kubelet[2833]: E0710 07:54:01.840857 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.840941 kubelet[2833]: W0710 07:54:01.840888 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.841073 kubelet[2833]: E0710 07:54:01.841042 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:54:01.841380 kubelet[2833]: E0710 07:54:01.841353 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.841440 kubelet[2833]: W0710 07:54:01.841382 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.841695 kubelet[2833]: E0710 07:54:01.841652 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:54:01.841833 kubelet[2833]: E0710 07:54:01.841805 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.841881 kubelet[2833]: W0710 07:54:01.841836 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.842009 kubelet[2833]: E0710 07:54:01.841950 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:54:01.842276 kubelet[2833]: E0710 07:54:01.842237 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.842276 kubelet[2833]: W0710 07:54:01.842270 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.842357 kubelet[2833]: E0710 07:54:01.842296 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:54:01.842874 kubelet[2833]: E0710 07:54:01.842835 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.842874 kubelet[2833]: W0710 07:54:01.842867 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.843031 kubelet[2833]: E0710 07:54:01.842921 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:54:01.844114 kubelet[2833]: E0710 07:54:01.844065 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.844114 kubelet[2833]: W0710 07:54:01.844111 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.844239 kubelet[2833]: E0710 07:54:01.844150 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:54:01.844561 kubelet[2833]: E0710 07:54:01.844519 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.844625 kubelet[2833]: W0710 07:54:01.844559 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.844625 kubelet[2833]: E0710 07:54:01.844595 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:54:01.844939 kubelet[2833]: E0710 07:54:01.844911 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.845009 kubelet[2833]: W0710 07:54:01.844941 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.845096 kubelet[2833]: E0710 07:54:01.845065 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:54:01.845423 kubelet[2833]: E0710 07:54:01.845392 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.845480 kubelet[2833]: W0710 07:54:01.845427 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.845480 kubelet[2833]: E0710 07:54:01.845452 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 07:54:01.845917 kubelet[2833]: E0710 07:54:01.845874 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.845917 kubelet[2833]: W0710 07:54:01.845912 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.846036 kubelet[2833]: E0710 07:54:01.845939 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 07:54:01.846749 kubelet[2833]: E0710 07:54:01.846706 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 07:54:01.846821 kubelet[2833]: W0710 07:54:01.846747 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 07:54:01.846821 kubelet[2833]: E0710 07:54:01.846772 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 10 07:54:02.557481 kubelet[2833]: E0710 07:54:02.556519 2833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pwddw" podUID="6b536d42-7d30-4147-a6e2-348f9b0f4c7a"
Jul 10 07:54:02.824385 kubelet[2833]: E0710 07:54:02.824216 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.825479 kubelet[2833]: W0710 07:54:02.825196 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.825479 kubelet[2833]: E0710 07:54:02.825297 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.825753 kubelet[2833]: E0710 07:54:02.825722 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.825855 kubelet[2833]: W0710 07:54:02.825841 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.825975 kubelet[2833]: E0710 07:54:02.825930 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.826302 kubelet[2833]: E0710 07:54:02.826290 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.826484 kubelet[2833]: W0710 07:54:02.826391 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.826484 kubelet[2833]: E0710 07:54:02.826407 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.826800 kubelet[2833]: E0710 07:54:02.826731 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.826800 kubelet[2833]: W0710 07:54:02.826743 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.826800 kubelet[2833]: E0710 07:54:02.826754 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.827219 kubelet[2833]: E0710 07:54:02.827160 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.827219 kubelet[2833]: W0710 07:54:02.827173 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.827219 kubelet[2833]: E0710 07:54:02.827183 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.827678 kubelet[2833]: E0710 07:54:02.827612 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.827678 kubelet[2833]: W0710 07:54:02.827626 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.827678 kubelet[2833]: E0710 07:54:02.827636 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.828007 kubelet[2833]: E0710 07:54:02.827924 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.828007 kubelet[2833]: W0710 07:54:02.827936 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.828007 kubelet[2833]: E0710 07:54:02.827946 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.828421 kubelet[2833]: E0710 07:54:02.828361 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.828582 kubelet[2833]: W0710 07:54:02.828373 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.828582 kubelet[2833]: E0710 07:54:02.828491 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.828836 kubelet[2833]: E0710 07:54:02.828779 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.828836 kubelet[2833]: W0710 07:54:02.828791 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.828836 kubelet[2833]: E0710 07:54:02.828801 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.829355 kubelet[2833]: E0710 07:54:02.829218 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.829355 kubelet[2833]: W0710 07:54:02.829230 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.829355 kubelet[2833]: E0710 07:54:02.829240 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.829700 kubelet[2833]: E0710 07:54:02.829602 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.829700 kubelet[2833]: W0710 07:54:02.829614 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.829700 kubelet[2833]: E0710 07:54:02.829656 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.829999 kubelet[2833]: E0710 07:54:02.829974 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.830141 kubelet[2833]: W0710 07:54:02.830076 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.830141 kubelet[2833]: E0710 07:54:02.830092 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.830448 kubelet[2833]: E0710 07:54:02.830383 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.830448 kubelet[2833]: W0710 07:54:02.830395 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.830448 kubelet[2833]: E0710 07:54:02.830406 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.830840 kubelet[2833]: E0710 07:54:02.830771 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.830840 kubelet[2833]: W0710 07:54:02.830783 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.830840 kubelet[2833]: E0710 07:54:02.830793 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.831211 kubelet[2833]: E0710 07:54:02.831198 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.831354 kubelet[2833]: W0710 07:54:02.831285 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.831354 kubelet[2833]: E0710 07:54:02.831302 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.846977 kubelet[2833]: E0710 07:54:02.846925 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.848074 kubelet[2833]: W0710 07:54:02.847084 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.848573 kubelet[2833]: E0710 07:54:02.848468 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.849329 kubelet[2833]: E0710 07:54:02.849284 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.849329 kubelet[2833]: W0710 07:54:02.849303 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.849329 kubelet[2833]: E0710 07:54:02.849319 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.849927 kubelet[2833]: E0710 07:54:02.849474 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.849927 kubelet[2833]: W0710 07:54:02.849483 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.849927 kubelet[2833]: E0710 07:54:02.849493 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.849927 kubelet[2833]: E0710 07:54:02.849724 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.849927 kubelet[2833]: W0710 07:54:02.849759 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.849927 kubelet[2833]: E0710 07:54:02.849772 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.850834 kubelet[2833]: E0710 07:54:02.850111 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.850834 kubelet[2833]: W0710 07:54:02.850123 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.850834 kubelet[2833]: E0710 07:54:02.850150 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.850834 kubelet[2833]: E0710 07:54:02.850327 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.850834 kubelet[2833]: W0710 07:54:02.850338 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.850834 kubelet[2833]: E0710 07:54:02.850348 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.851675 kubelet[2833]: E0710 07:54:02.851631 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.851675 kubelet[2833]: W0710 07:54:02.851661 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.851675 kubelet[2833]: E0710 07:54:02.851675 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.852991 kubelet[2833]: E0710 07:54:02.852892 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.852991 kubelet[2833]: W0710 07:54:02.852909 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.852991 kubelet[2833]: E0710 07:54:02.852920 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.853263 kubelet[2833]: E0710 07:54:02.853093 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.853263 kubelet[2833]: W0710 07:54:02.853103 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.853263 kubelet[2833]: E0710 07:54:02.853115 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.853853 kubelet[2833]: E0710 07:54:02.853283 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.853853 kubelet[2833]: W0710 07:54:02.853293 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.853853 kubelet[2833]: E0710 07:54:02.853310 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.853853 kubelet[2833]: E0710 07:54:02.853493 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.853853 kubelet[2833]: W0710 07:54:02.853502 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.853853 kubelet[2833]: E0710 07:54:02.853528 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.854824 kubelet[2833]: E0710 07:54:02.854038 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.854824 kubelet[2833]: W0710 07:54:02.854050 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.854824 kubelet[2833]: E0710 07:54:02.854071 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.854824 kubelet[2833]: E0710 07:54:02.854215 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.854824 kubelet[2833]: W0710 07:54:02.854224 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.854824 kubelet[2833]: E0710 07:54:02.854233 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.855703 kubelet[2833]: E0710 07:54:02.855305 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.856089 kubelet[2833]: W0710 07:54:02.855337 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.856089 kubelet[2833]: E0710 07:54:02.855902 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.856089 kubelet[2833]: E0710 07:54:02.856085 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.856089 kubelet[2833]: W0710 07:54:02.856097 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.856506 kubelet[2833]: E0710 07:54:02.856473 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.856728 kubelet[2833]: E0710 07:54:02.856641 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.856728 kubelet[2833]: W0710 07:54:02.856651 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.856728 kubelet[2833]: E0710 07:54:02.856664 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.857285 kubelet[2833]: E0710 07:54:02.857091 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.857285 kubelet[2833]: W0710 07:54:02.857102 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.857285 kubelet[2833]: E0710 07:54:02.857114 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:02.859095 kubelet[2833]: E0710 07:54:02.859044 2833 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 07:54:02.859095 kubelet[2833]: W0710 07:54:02.859060 2833 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 07:54:02.859095 kubelet[2833]: E0710 07:54:02.859072 2833 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 07:54:03.079361 containerd[1564]: time="2025-07-10T07:54:03.079175589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:54:03.083041 containerd[1564]: time="2025-07-10T07:54:03.082986238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956"
Jul 10 07:54:03.084865 containerd[1564]: time="2025-07-10T07:54:03.084792066Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:54:03.088244 containerd[1564]: time="2025-07-10T07:54:03.088171686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:54:03.090512 containerd[1564]: time="2025-07-10T07:54:03.090051302Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 2.295943605s"
Jul 10 07:54:03.090512 containerd[1564]: time="2025-07-10T07:54:03.090102759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\""
Jul 10 07:54:03.094994 containerd[1564]: time="2025-07-10T07:54:03.094917171Z" level=info msg="CreateContainer within sandbox \"8ab10e788247828b6799c14bfc58611bff4888dc355282475d0554991619e1ff\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 10 07:54:03.117010 containerd[1564]: time="2025-07-10T07:54:03.115313455Z" level=info msg="Container 2caf1609d43cc7f1a753bb29a25761165d735ac1df03e7a00900d0bff567dc4f: CDI devices from CRI Config.CDIDevices: []"
Jul 10 07:54:03.134850 containerd[1564]: time="2025-07-10T07:54:03.134783753Z" level=info msg="CreateContainer within sandbox \"8ab10e788247828b6799c14bfc58611bff4888dc355282475d0554991619e1ff\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2caf1609d43cc7f1a753bb29a25761165d735ac1df03e7a00900d0bff567dc4f\""
Jul 10 07:54:03.135632 containerd[1564]: time="2025-07-10T07:54:03.135365265Z" level=info msg="StartContainer for \"2caf1609d43cc7f1a753bb29a25761165d735ac1df03e7a00900d0bff567dc4f\""
Jul 10 07:54:03.137984 containerd[1564]: time="2025-07-10T07:54:03.137905129Z" level=info msg="connecting to shim 2caf1609d43cc7f1a753bb29a25761165d735ac1df03e7a00900d0bff567dc4f" address="unix:///run/containerd/s/3557534d67c0a5a7a5d5043935b5cdf3b9869598e5562430d8d53915dfdb63dc" protocol=ttrpc version=3
Jul 10 07:54:03.170140 systemd[1]: Started cri-containerd-2caf1609d43cc7f1a753bb29a25761165d735ac1df03e7a00900d0bff567dc4f.scope - libcontainer container 2caf1609d43cc7f1a753bb29a25761165d735ac1df03e7a00900d0bff567dc4f.
Jul 10 07:54:03.248123 systemd[1]: cri-containerd-2caf1609d43cc7f1a753bb29a25761165d735ac1df03e7a00900d0bff567dc4f.scope: Deactivated successfully.
Jul 10 07:54:03.255400 containerd[1564]: time="2025-07-10T07:54:03.255316341Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2caf1609d43cc7f1a753bb29a25761165d735ac1df03e7a00900d0bff567dc4f\" id:\"2caf1609d43cc7f1a753bb29a25761165d735ac1df03e7a00900d0bff567dc4f\" pid:3541 exited_at:{seconds:1752134043 nanos:252871724}"
Jul 10 07:54:03.332724 containerd[1564]: time="2025-07-10T07:54:03.332320085Z" level=info msg="received exit event container_id:\"2caf1609d43cc7f1a753bb29a25761165d735ac1df03e7a00900d0bff567dc4f\" id:\"2caf1609d43cc7f1a753bb29a25761165d735ac1df03e7a00900d0bff567dc4f\" pid:3541 exited_at:{seconds:1752134043 nanos:252871724}"
Jul 10 07:54:03.343370 containerd[1564]: time="2025-07-10T07:54:03.343279305Z" level=info msg="StartContainer for \"2caf1609d43cc7f1a753bb29a25761165d735ac1df03e7a00900d0bff567dc4f\" returns successfully"
Jul 10 07:54:03.411910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2caf1609d43cc7f1a753bb29a25761165d735ac1df03e7a00900d0bff567dc4f-rootfs.mount: Deactivated successfully.
Jul 10 07:54:04.554103 kubelet[2833]: E0710 07:54:04.553808 2833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pwddw" podUID="6b536d42-7d30-4147-a6e2-348f9b0f4c7a"
Jul 10 07:54:04.761872 containerd[1564]: time="2025-07-10T07:54:04.761710890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 10 07:54:06.554634 kubelet[2833]: E0710 07:54:06.553701 2833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pwddw" podUID="6b536d42-7d30-4147-a6e2-348f9b0f4c7a"
Jul 10 07:54:08.556018 kubelet[2833]: E0710 07:54:08.555495 2833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pwddw" podUID="6b536d42-7d30-4147-a6e2-348f9b0f4c7a"
Jul 10 07:54:10.553924 kubelet[2833]: E0710 07:54:10.553841 2833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pwddw" podUID="6b536d42-7d30-4147-a6e2-348f9b0f4c7a"
Jul 10 07:54:11.182143 containerd[1564]: time="2025-07-10T07:54:11.181896166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:54:11.189900 containerd[1564]: time="2025-07-10T07:54:11.189821681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Jul 10 07:54:11.203900 containerd[1564]: time="2025-07-10T07:54:11.203771484Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:54:11.218422 containerd[1564]: time="2025-07-10T07:54:11.218309467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:54:11.222440 containerd[1564]: time="2025-07-10T07:54:11.221493586Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 6.457839841s"
Jul 10 07:54:11.222440 containerd[1564]: time="2025-07-10T07:54:11.221628961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Jul 10 07:54:11.239954 containerd[1564]: time="2025-07-10T07:54:11.239845596Z" level=info msg="CreateContainer within sandbox \"8ab10e788247828b6799c14bfc58611bff4888dc355282475d0554991619e1ff\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 10 07:54:11.327077 containerd[1564]: time="2025-07-10T07:54:11.326848821Z" level=info msg="Container 3c286b98716ec2ad54b73fcb0a0f48dca9b83c15fc52213b60e11919972f9e3a: CDI devices from CRI Config.CDIDevices: []"
Jul 10 07:54:11.391720 containerd[1564]: time="2025-07-10T07:54:11.391615586Z" level=info msg="CreateContainer within sandbox \"8ab10e788247828b6799c14bfc58611bff4888dc355282475d0554991619e1ff\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3c286b98716ec2ad54b73fcb0a0f48dca9b83c15fc52213b60e11919972f9e3a\""
Jul 10 07:54:11.395390 containerd[1564]: time="2025-07-10T07:54:11.395284549Z" level=info msg="StartContainer for \"3c286b98716ec2ad54b73fcb0a0f48dca9b83c15fc52213b60e11919972f9e3a\""
Jul 10 07:54:11.403092 containerd[1564]: time="2025-07-10T07:54:11.403017471Z" level=info msg="connecting to shim 3c286b98716ec2ad54b73fcb0a0f48dca9b83c15fc52213b60e11919972f9e3a" address="unix:///run/containerd/s/3557534d67c0a5a7a5d5043935b5cdf3b9869598e5562430d8d53915dfdb63dc" protocol=ttrpc version=3
Jul 10 07:54:11.486554 systemd[1]: Started cri-containerd-3c286b98716ec2ad54b73fcb0a0f48dca9b83c15fc52213b60e11919972f9e3a.scope - libcontainer container 3c286b98716ec2ad54b73fcb0a0f48dca9b83c15fc52213b60e11919972f9e3a.
Jul 10 07:54:11.599448 containerd[1564]: time="2025-07-10T07:54:11.596553914Z" level=info msg="StartContainer for \"3c286b98716ec2ad54b73fcb0a0f48dca9b83c15fc52213b60e11919972f9e3a\" returns successfully"
Jul 10 07:54:12.554586 kubelet[2833]: E0710 07:54:12.554474 2833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pwddw" podUID="6b536d42-7d30-4147-a6e2-348f9b0f4c7a"
Jul 10 07:54:14.554848 kubelet[2833]: E0710 07:54:14.554691 2833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pwddw" podUID="6b536d42-7d30-4147-a6e2-348f9b0f4c7a"
Jul 10 07:54:14.599143 containerd[1564]: time="2025-07-10T07:54:14.599010967Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 10 07:54:14.607175 systemd[1]: cri-containerd-3c286b98716ec2ad54b73fcb0a0f48dca9b83c15fc52213b60e11919972f9e3a.scope: Deactivated successfully.
Jul 10 07:54:14.608047 systemd[1]: cri-containerd-3c286b98716ec2ad54b73fcb0a0f48dca9b83c15fc52213b60e11919972f9e3a.scope: Consumed 1.634s CPU time, 190.9M memory peak, 171.2M written to disk.
Jul 10 07:54:14.610632 containerd[1564]: time="2025-07-10T07:54:14.610531927Z" level=info msg="received exit event container_id:\"3c286b98716ec2ad54b73fcb0a0f48dca9b83c15fc52213b60e11919972f9e3a\" id:\"3c286b98716ec2ad54b73fcb0a0f48dca9b83c15fc52213b60e11919972f9e3a\" pid:3598 exited_at:{seconds:1752134054 nanos:610000324}"
Jul 10 07:54:14.611851 containerd[1564]: time="2025-07-10T07:54:14.611738121Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3c286b98716ec2ad54b73fcb0a0f48dca9b83c15fc52213b60e11919972f9e3a\" id:\"3c286b98716ec2ad54b73fcb0a0f48dca9b83c15fc52213b60e11919972f9e3a\" pid:3598 exited_at:{seconds:1752134054 nanos:610000324}"
Jul 10 07:54:14.660917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c286b98716ec2ad54b73fcb0a0f48dca9b83c15fc52213b60e11919972f9e3a-rootfs.mount: Deactivated successfully.
Jul 10 07:54:14.706364 kubelet[2833]: I0710 07:54:14.706309 2833 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 10 07:54:15.062838 systemd[1]: Created slice kubepods-burstable-poddff6e30e_e51d_4033_b909_dfa260eb3714.slice - libcontainer container kubepods-burstable-poddff6e30e_e51d_4033_b909_dfa260eb3714.slice.
Jul 10 07:54:15.070998 kubelet[2833]: I0710 07:54:15.070262 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp4lt\" (UniqueName: \"kubernetes.io/projected/a2d28237-8db2-489c-8a44-4db46a0ad1fd-kube-api-access-wp4lt\") pod \"calico-kube-controllers-7b9974968-qxv8n\" (UID: \"a2d28237-8db2-489c-8a44-4db46a0ad1fd\") " pod="calico-system/calico-kube-controllers-7b9974968-qxv8n"
Jul 10 07:54:15.071350 kubelet[2833]: I0710 07:54:15.071221 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c7329202-feb1-456c-aeb4-f8cc1e636843-calico-apiserver-certs\") pod \"calico-apiserver-5cc4585dd9-wzfv4\" (UID: \"c7329202-feb1-456c-aeb4-f8cc1e636843\") " pod="calico-apiserver/calico-apiserver-5cc4585dd9-wzfv4"
Jul 10 07:54:15.071424 kubelet[2833]: I0710 07:54:15.071321 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dff6e30e-e51d-4033-b909-dfa260eb3714-config-volume\") pod \"coredns-7c65d6cfc9-29tdv\" (UID: \"dff6e30e-e51d-4033-b909-dfa260eb3714\") " pod="kube-system/coredns-7c65d6cfc9-29tdv"
Jul 10 07:54:15.074424 kubelet[2833]: I0710 07:54:15.074369 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf79b\" (UniqueName: \"kubernetes.io/projected/dff6e30e-e51d-4033-b909-dfa260eb3714-kube-api-access-xf79b\") pod \"coredns-7c65d6cfc9-29tdv\" (UID: \"dff6e30e-e51d-4033-b909-dfa260eb3714\") " pod="kube-system/coredns-7c65d6cfc9-29tdv"
Jul 10 07:54:15.074550 kubelet[2833]: I0710 07:54:15.074436 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2d28237-8db2-489c-8a44-4db46a0ad1fd-tigera-ca-bundle\") pod \"calico-kube-controllers-7b9974968-qxv8n\" (UID: \"a2d28237-8db2-489c-8a44-4db46a0ad1fd\") " pod="calico-system/calico-kube-controllers-7b9974968-qxv8n"
Jul 10 07:54:15.074550 kubelet[2833]: I0710 07:54:15.074460 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84nwd\" (UniqueName: \"kubernetes.io/projected/c7329202-feb1-456c-aeb4-f8cc1e636843-kube-api-access-84nwd\") pod \"calico-apiserver-5cc4585dd9-wzfv4\" (UID: \"c7329202-feb1-456c-aeb4-f8cc1e636843\") " pod="calico-apiserver/calico-apiserver-5cc4585dd9-wzfv4"
Jul 10 07:54:15.096312 systemd[1]: Created slice kubepods-besteffort-poda2d28237_8db2_489c_8a44_4db46a0ad1fd.slice - libcontainer container kubepods-besteffort-poda2d28237_8db2_489c_8a44_4db46a0ad1fd.slice.
Jul 10 07:54:15.110557 kubelet[2833]: W0710 07:54:15.110069 2833 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4391-0-0-n-fdb14ef6d8.novalocal" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4391-0-0-n-fdb14ef6d8.novalocal' and this object
Jul 10 07:54:15.110557 kubelet[2833]: E0710 07:54:15.110226 2833 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4391-0-0-n-fdb14ef6d8.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4391-0-0-n-fdb14ef6d8.novalocal' and this object" logger="UnhandledError"
Jul 10 07:54:15.110557 kubelet[2833]: W0710 07:54:15.110293 2833 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4391-0-0-n-fdb14ef6d8.novalocal" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4391-0-0-n-fdb14ef6d8.novalocal' and this object
Jul 10 07:54:15.110557 kubelet[2833]: E0710 07:54:15.110360 2833 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4391-0-0-n-fdb14ef6d8.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4391-0-0-n-fdb14ef6d8.novalocal' and this object" logger="UnhandledError"
Jul 10 07:54:15.111130 kubelet[2833]: W0710 07:54:15.110452 2833 reflector.go:561] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-4391-0-0-n-fdb14ef6d8.novalocal" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4391-0-0-n-fdb14ef6d8.novalocal' and this object
Jul 10 07:54:15.111130 kubelet[2833]: W0710 07:54:15.110480 2833 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4391-0-0-n-fdb14ef6d8.novalocal" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4391-0-0-n-fdb14ef6d8.novalocal' and this object
Jul 10 07:54:15.111130 kubelet[2833]: E0710 07:54:15.110482 2833 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4391-0-0-n-fdb14ef6d8.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace
\"calico-system\": no relationship found between node 'ci-4391-0-0-n-fdb14ef6d8.novalocal' and this object" logger="UnhandledError" Jul 10 07:54:15.111130 kubelet[2833]: E0710 07:54:15.110513 2833 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4391-0-0-n-fdb14ef6d8.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4391-0-0-n-fdb14ef6d8.novalocal' and this object" logger="UnhandledError" Jul 10 07:54:15.117898 systemd[1]: Created slice kubepods-besteffort-podc7329202_feb1_456c_aeb4_f8cc1e636843.slice - libcontainer container kubepods-besteffort-podc7329202_feb1_456c_aeb4_f8cc1e636843.slice. Jul 10 07:54:15.132867 systemd[1]: Created slice kubepods-besteffort-pod26a4a6c2_2cf2_4935_b48d_ff922d2b77ea.slice - libcontainer container kubepods-besteffort-pod26a4a6c2_2cf2_4935_b48d_ff922d2b77ea.slice. Jul 10 07:54:15.149040 systemd[1]: Created slice kubepods-besteffort-pod1d8d082c_2971_44ff_b422_ada7355b9814.slice - libcontainer container kubepods-besteffort-pod1d8d082c_2971_44ff_b422_ada7355b9814.slice. Jul 10 07:54:15.161146 systemd[1]: Created slice kubepods-besteffort-pod59d7f791_108d_48e9_b7b6_8c13bd0cf25b.slice - libcontainer container kubepods-besteffort-pod59d7f791_108d_48e9_b7b6_8c13bd0cf25b.slice. Jul 10 07:54:15.171551 systemd[1]: Created slice kubepods-burstable-pod81234764_a63b_4c8c_8451_6099c0eb34d1.slice - libcontainer container kubepods-burstable-pod81234764_a63b_4c8c_8451_6099c0eb34d1.slice. 
Jul 10 07:54:15.175913 kubelet[2833]: I0710 07:54:15.175628 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5649\" (UniqueName: \"kubernetes.io/projected/26a4a6c2-2cf2-4935-b48d-ff922d2b77ea-kube-api-access-k5649\") pod \"calico-apiserver-5cc4585dd9-xhp97\" (UID: \"26a4a6c2-2cf2-4935-b48d-ff922d2b77ea\") " pod="calico-apiserver/calico-apiserver-5cc4585dd9-xhp97" Jul 10 07:54:15.176150 kubelet[2833]: I0710 07:54:15.175929 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/59d7f791-108d-48e9-b7b6-8c13bd0cf25b-goldmane-key-pair\") pod \"goldmane-58fd7646b9-c7jh9\" (UID: \"59d7f791-108d-48e9-b7b6-8c13bd0cf25b\") " pod="calico-system/goldmane-58fd7646b9-c7jh9" Jul 10 07:54:15.176924 kubelet[2833]: I0710 07:54:15.176177 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4x2p\" (UniqueName: \"kubernetes.io/projected/1d8d082c-2971-44ff-b422-ada7355b9814-kube-api-access-d4x2p\") pod \"whisker-569ccbc76c-w6shp\" (UID: \"1d8d082c-2971-44ff-b422-ada7355b9814\") " pod="calico-system/whisker-569ccbc76c-w6shp" Jul 10 07:54:15.176924 kubelet[2833]: I0710 07:54:15.176531 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl5js\" (UniqueName: \"kubernetes.io/projected/59d7f791-108d-48e9-b7b6-8c13bd0cf25b-kube-api-access-cl5js\") pod \"goldmane-58fd7646b9-c7jh9\" (UID: \"59d7f791-108d-48e9-b7b6-8c13bd0cf25b\") " pod="calico-system/goldmane-58fd7646b9-c7jh9" Jul 10 07:54:15.179016 kubelet[2833]: I0710 07:54:15.178109 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59d7f791-108d-48e9-b7b6-8c13bd0cf25b-config\") pod \"goldmane-58fd7646b9-c7jh9\" (UID: 
\"59d7f791-108d-48e9-b7b6-8c13bd0cf25b\") " pod="calico-system/goldmane-58fd7646b9-c7jh9" Jul 10 07:54:15.179016 kubelet[2833]: I0710 07:54:15.178234 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59d7f791-108d-48e9-b7b6-8c13bd0cf25b-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-c7jh9\" (UID: \"59d7f791-108d-48e9-b7b6-8c13bd0cf25b\") " pod="calico-system/goldmane-58fd7646b9-c7jh9" Jul 10 07:54:15.179016 kubelet[2833]: I0710 07:54:15.178284 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5tb7\" (UniqueName: \"kubernetes.io/projected/81234764-a63b-4c8c-8451-6099c0eb34d1-kube-api-access-q5tb7\") pod \"coredns-7c65d6cfc9-w9d5t\" (UID: \"81234764-a63b-4c8c-8451-6099c0eb34d1\") " pod="kube-system/coredns-7c65d6cfc9-w9d5t" Jul 10 07:54:15.179016 kubelet[2833]: I0710 07:54:15.178347 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1d8d082c-2971-44ff-b422-ada7355b9814-whisker-backend-key-pair\") pod \"whisker-569ccbc76c-w6shp\" (UID: \"1d8d082c-2971-44ff-b422-ada7355b9814\") " pod="calico-system/whisker-569ccbc76c-w6shp" Jul 10 07:54:15.179016 kubelet[2833]: I0710 07:54:15.178425 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d8d082c-2971-44ff-b422-ada7355b9814-whisker-ca-bundle\") pod \"whisker-569ccbc76c-w6shp\" (UID: \"1d8d082c-2971-44ff-b422-ada7355b9814\") " pod="calico-system/whisker-569ccbc76c-w6shp" Jul 10 07:54:15.179548 kubelet[2833]: I0710 07:54:15.178476 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/26a4a6c2-2cf2-4935-b48d-ff922d2b77ea-calico-apiserver-certs\") pod \"calico-apiserver-5cc4585dd9-xhp97\" (UID: \"26a4a6c2-2cf2-4935-b48d-ff922d2b77ea\") " pod="calico-apiserver/calico-apiserver-5cc4585dd9-xhp97" Jul 10 07:54:15.179548 kubelet[2833]: I0710 07:54:15.178501 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81234764-a63b-4c8c-8451-6099c0eb34d1-config-volume\") pod \"coredns-7c65d6cfc9-w9d5t\" (UID: \"81234764-a63b-4c8c-8451-6099c0eb34d1\") " pod="kube-system/coredns-7c65d6cfc9-w9d5t" Jul 10 07:54:15.385183 containerd[1564]: time="2025-07-10T07:54:15.385050228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-29tdv,Uid:dff6e30e-e51d-4033-b909-dfa260eb3714,Namespace:kube-system,Attempt:0,}" Jul 10 07:54:15.407323 containerd[1564]: time="2025-07-10T07:54:15.407263938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b9974968-qxv8n,Uid:a2d28237-8db2-489c-8a44-4db46a0ad1fd,Namespace:calico-system,Attempt:0,}" Jul 10 07:54:15.470253 containerd[1564]: time="2025-07-10T07:54:15.470178493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-c7jh9,Uid:59d7f791-108d-48e9-b7b6-8c13bd0cf25b,Namespace:calico-system,Attempt:0,}" Jul 10 07:54:15.480890 containerd[1564]: time="2025-07-10T07:54:15.480843164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w9d5t,Uid:81234764-a63b-4c8c-8451-6099c0eb34d1,Namespace:kube-system,Attempt:0,}" Jul 10 07:54:15.528692 containerd[1564]: time="2025-07-10T07:54:15.528614642Z" level=error msg="Failed to destroy network for sandbox \"b5c8106f89502987dd692481ee11d5764f212266cace532af1d4a2838c3e9964\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jul 10 07:54:15.533409 containerd[1564]: time="2025-07-10T07:54:15.533331958Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b9974968-qxv8n,Uid:a2d28237-8db2-489c-8a44-4db46a0ad1fd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5c8106f89502987dd692481ee11d5764f212266cace532af1d4a2838c3e9964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:15.534083 kubelet[2833]: E0710 07:54:15.534032 2833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5c8106f89502987dd692481ee11d5764f212266cace532af1d4a2838c3e9964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:15.534332 kubelet[2833]: E0710 07:54:15.534305 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5c8106f89502987dd692481ee11d5764f212266cace532af1d4a2838c3e9964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b9974968-qxv8n" Jul 10 07:54:15.534477 kubelet[2833]: E0710 07:54:15.534428 2833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5c8106f89502987dd692481ee11d5764f212266cace532af1d4a2838c3e9964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7b9974968-qxv8n" Jul 10 07:54:15.534726 containerd[1564]: time="2025-07-10T07:54:15.534603215Z" level=error msg="Failed to destroy network for sandbox \"6a262fd0a8b97c3d9364051d723fdb3e0c829228bdce94b2d99fae0af48a85a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:15.534827 kubelet[2833]: E0710 07:54:15.534625 2833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b9974968-qxv8n_calico-system(a2d28237-8db2-489c-8a44-4db46a0ad1fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b9974968-qxv8n_calico-system(a2d28237-8db2-489c-8a44-4db46a0ad1fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5c8106f89502987dd692481ee11d5764f212266cace532af1d4a2838c3e9964\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b9974968-qxv8n" podUID="a2d28237-8db2-489c-8a44-4db46a0ad1fd" Jul 10 07:54:15.537810 containerd[1564]: time="2025-07-10T07:54:15.537732186Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-29tdv,Uid:dff6e30e-e51d-4033-b909-dfa260eb3714,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a262fd0a8b97c3d9364051d723fdb3e0c829228bdce94b2d99fae0af48a85a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:15.538624 kubelet[2833]: E0710 07:54:15.538190 2833 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a262fd0a8b97c3d9364051d723fdb3e0c829228bdce94b2d99fae0af48a85a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:15.539073 kubelet[2833]: E0710 07:54:15.538809 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a262fd0a8b97c3d9364051d723fdb3e0c829228bdce94b2d99fae0af48a85a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-29tdv" Jul 10 07:54:15.539073 kubelet[2833]: E0710 07:54:15.538851 2833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a262fd0a8b97c3d9364051d723fdb3e0c829228bdce94b2d99fae0af48a85a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-29tdv" Jul 10 07:54:15.539073 kubelet[2833]: E0710 07:54:15.538912 2833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-29tdv_kube-system(dff6e30e-e51d-4033-b909-dfa260eb3714)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-29tdv_kube-system(dff6e30e-e51d-4033-b909-dfa260eb3714)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a262fd0a8b97c3d9364051d723fdb3e0c829228bdce94b2d99fae0af48a85a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-29tdv" podUID="dff6e30e-e51d-4033-b909-dfa260eb3714" Jul 10 07:54:15.587487 containerd[1564]: time="2025-07-10T07:54:15.587424205Z" level=error msg="Failed to destroy network for sandbox \"8f03b1a14c6879ca9663b654ef381db531f951a2861db128e6740e3871e04981\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:15.589656 containerd[1564]: time="2025-07-10T07:54:15.589601381Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w9d5t,Uid:81234764-a63b-4c8c-8451-6099c0eb34d1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f03b1a14c6879ca9663b654ef381db531f951a2861db128e6740e3871e04981\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:15.590505 kubelet[2833]: E0710 07:54:15.589907 2833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f03b1a14c6879ca9663b654ef381db531f951a2861db128e6740e3871e04981\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:15.590505 kubelet[2833]: E0710 07:54:15.590006 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f03b1a14c6879ca9663b654ef381db531f951a2861db128e6740e3871e04981\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-w9d5t" Jul 10 07:54:15.590505 kubelet[2833]: E0710 07:54:15.590032 2833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f03b1a14c6879ca9663b654ef381db531f951a2861db128e6740e3871e04981\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-w9d5t" Jul 10 07:54:15.592067 kubelet[2833]: E0710 07:54:15.590103 2833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-w9d5t_kube-system(81234764-a63b-4c8c-8451-6099c0eb34d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-w9d5t_kube-system(81234764-a63b-4c8c-8451-6099c0eb34d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f03b1a14c6879ca9663b654ef381db531f951a2861db128e6740e3871e04981\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-w9d5t" podUID="81234764-a63b-4c8c-8451-6099c0eb34d1" Jul 10 07:54:15.592252 containerd[1564]: time="2025-07-10T07:54:15.589755131Z" level=error msg="Failed to destroy network for sandbox \"57baa31252fb641fdd3f4c6414cd01c5c2f06b28ebd142f508fedac2cdff39ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:15.593421 containerd[1564]: time="2025-07-10T07:54:15.593154361Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-c7jh9,Uid:59d7f791-108d-48e9-b7b6-8c13bd0cf25b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"57baa31252fb641fdd3f4c6414cd01c5c2f06b28ebd142f508fedac2cdff39ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:15.593581 kubelet[2833]: E0710 07:54:15.593516 2833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57baa31252fb641fdd3f4c6414cd01c5c2f06b28ebd142f508fedac2cdff39ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:15.593641 kubelet[2833]: E0710 07:54:15.593598 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57baa31252fb641fdd3f4c6414cd01c5c2f06b28ebd142f508fedac2cdff39ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-c7jh9" Jul 10 07:54:15.593641 kubelet[2833]: E0710 07:54:15.593624 2833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57baa31252fb641fdd3f4c6414cd01c5c2f06b28ebd142f508fedac2cdff39ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-c7jh9" Jul 10 07:54:15.593722 kubelet[2833]: E0710 07:54:15.593684 2833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-c7jh9_calico-system(59d7f791-108d-48e9-b7b6-8c13bd0cf25b)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-c7jh9_calico-system(59d7f791-108d-48e9-b7b6-8c13bd0cf25b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57baa31252fb641fdd3f4c6414cd01c5c2f06b28ebd142f508fedac2cdff39ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-c7jh9" podUID="59d7f791-108d-48e9-b7b6-8c13bd0cf25b" Jul 10 07:54:15.854197 containerd[1564]: time="2025-07-10T07:54:15.854080457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 10 07:54:16.184946 kubelet[2833]: E0710 07:54:16.184718 2833 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 07:54:16.185925 kubelet[2833]: E0710 07:54:16.185709 2833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7329202-feb1-456c-aeb4-f8cc1e636843-calico-apiserver-certs podName:c7329202-feb1-456c-aeb4-f8cc1e636843 nodeName:}" failed. No retries permitted until 2025-07-10 07:54:16.685526057 +0000 UTC m=+38.396548176 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/c7329202-feb1-456c-aeb4-f8cc1e636843-calico-apiserver-certs") pod "calico-apiserver-5cc4585dd9-wzfv4" (UID: "c7329202-feb1-456c-aeb4-f8cc1e636843") : failed to sync secret cache: timed out waiting for the condition Jul 10 07:54:16.281045 kubelet[2833]: E0710 07:54:16.280824 2833 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 07:54:16.281397 kubelet[2833]: E0710 07:54:16.281365 2833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26a4a6c2-2cf2-4935-b48d-ff922d2b77ea-calico-apiserver-certs podName:26a4a6c2-2cf2-4935-b48d-ff922d2b77ea nodeName:}" failed. No retries permitted until 2025-07-10 07:54:16.780930526 +0000 UTC m=+38.491952644 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/26a4a6c2-2cf2-4935-b48d-ff922d2b77ea-calico-apiserver-certs") pod "calico-apiserver-5cc4585dd9-xhp97" (UID: "26a4a6c2-2cf2-4935-b48d-ff922d2b77ea") : failed to sync secret cache: timed out waiting for the condition Jul 10 07:54:16.287460 kubelet[2833]: E0710 07:54:16.287173 2833 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 07:54:16.287460 kubelet[2833]: E0710 07:54:16.287352 2833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d8d082c-2971-44ff-b422-ada7355b9814-whisker-ca-bundle podName:1d8d082c-2971-44ff-b422-ada7355b9814 nodeName:}" failed. No retries permitted until 2025-07-10 07:54:16.787316797 +0000 UTC m=+38.498338915 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/1d8d082c-2971-44ff-b422-ada7355b9814-whisker-ca-bundle") pod "whisker-569ccbc76c-w6shp" (UID: "1d8d082c-2971-44ff-b422-ada7355b9814") : failed to sync configmap cache: timed out waiting for the condition Jul 10 07:54:16.585068 systemd[1]: Created slice kubepods-besteffort-pod6b536d42_7d30_4147_a6e2_348f9b0f4c7a.slice - libcontainer container kubepods-besteffort-pod6b536d42_7d30_4147_a6e2_348f9b0f4c7a.slice. Jul 10 07:54:16.592328 containerd[1564]: time="2025-07-10T07:54:16.592228437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pwddw,Uid:6b536d42-7d30-4147-a6e2-348f9b0f4c7a,Namespace:calico-system,Attempt:0,}" Jul 10 07:54:16.699805 containerd[1564]: time="2025-07-10T07:54:16.699745543Z" level=error msg="Failed to destroy network for sandbox \"b8215d023a85da4ac91267bc0443a4b985b06005236e723b45985ece0cf51cbf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:16.704879 containerd[1564]: time="2025-07-10T07:54:16.704831413Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pwddw,Uid:6b536d42-7d30-4147-a6e2-348f9b0f4c7a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8215d023a85da4ac91267bc0443a4b985b06005236e723b45985ece0cf51cbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:16.707060 systemd[1]: run-netns-cni\x2d4273b36f\x2d7902\x2df60c\x2def51\x2df8f3761ab77a.mount: Deactivated successfully. 
Jul 10 07:54:16.708006 kubelet[2833]: E0710 07:54:16.707401 2833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8215d023a85da4ac91267bc0443a4b985b06005236e723b45985ece0cf51cbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:16.708006 kubelet[2833]: E0710 07:54:16.707485 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8215d023a85da4ac91267bc0443a4b985b06005236e723b45985ece0cf51cbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pwddw" Jul 10 07:54:16.708006 kubelet[2833]: E0710 07:54:16.707510 2833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8215d023a85da4ac91267bc0443a4b985b06005236e723b45985ece0cf51cbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pwddw" Jul 10 07:54:16.708381 kubelet[2833]: E0710 07:54:16.707565 2833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pwddw_calico-system(6b536d42-7d30-4147-a6e2-348f9b0f4c7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pwddw_calico-system(6b536d42-7d30-4147-a6e2-348f9b0f4c7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8215d023a85da4ac91267bc0443a4b985b06005236e723b45985ece0cf51cbf\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pwddw" podUID="6b536d42-7d30-4147-a6e2-348f9b0f4c7a" Jul 10 07:54:16.931985 containerd[1564]: time="2025-07-10T07:54:16.931689711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc4585dd9-wzfv4,Uid:c7329202-feb1-456c-aeb4-f8cc1e636843,Namespace:calico-apiserver,Attempt:0,}" Jul 10 07:54:16.951944 containerd[1564]: time="2025-07-10T07:54:16.951809467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc4585dd9-xhp97,Uid:26a4a6c2-2cf2-4935-b48d-ff922d2b77ea,Namespace:calico-apiserver,Attempt:0,}" Jul 10 07:54:16.963023 containerd[1564]: time="2025-07-10T07:54:16.962522437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-569ccbc76c-w6shp,Uid:1d8d082c-2971-44ff-b422-ada7355b9814,Namespace:calico-system,Attempt:0,}" Jul 10 07:54:17.081848 containerd[1564]: time="2025-07-10T07:54:17.081768297Z" level=error msg="Failed to destroy network for sandbox \"9b242aacf404b4b895532daab2b0f91822eee9fdc29326906b0fb50c609668c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:17.087577 containerd[1564]: time="2025-07-10T07:54:17.087515873Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-569ccbc76c-w6shp,Uid:1d8d082c-2971-44ff-b422-ada7355b9814,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b242aacf404b4b895532daab2b0f91822eee9fdc29326906b0fb50c609668c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:17.088669 kubelet[2833]: E0710 
07:54:17.088384 2833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b242aacf404b4b895532daab2b0f91822eee9fdc29326906b0fb50c609668c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:17.088669 kubelet[2833]: E0710 07:54:17.088488 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b242aacf404b4b895532daab2b0f91822eee9fdc29326906b0fb50c609668c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-569ccbc76c-w6shp" Jul 10 07:54:17.088669 kubelet[2833]: E0710 07:54:17.088514 2833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b242aacf404b4b895532daab2b0f91822eee9fdc29326906b0fb50c609668c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-569ccbc76c-w6shp" Jul 10 07:54:17.089269 kubelet[2833]: E0710 07:54:17.089211 2833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-569ccbc76c-w6shp_calico-system(1d8d082c-2971-44ff-b422-ada7355b9814)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-569ccbc76c-w6shp_calico-system(1d8d082c-2971-44ff-b422-ada7355b9814)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b242aacf404b4b895532daab2b0f91822eee9fdc29326906b0fb50c609668c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-569ccbc76c-w6shp" podUID="1d8d082c-2971-44ff-b422-ada7355b9814" Jul 10 07:54:17.119271 containerd[1564]: time="2025-07-10T07:54:17.119175349Z" level=error msg="Failed to destroy network for sandbox \"a20342af97a6e461b5399908408a21fd571fa7b10abf2060f63aff86c6dad96a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:17.122361 containerd[1564]: time="2025-07-10T07:54:17.122306933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc4585dd9-xhp97,Uid:26a4a6c2-2cf2-4935-b48d-ff922d2b77ea,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a20342af97a6e461b5399908408a21fd571fa7b10abf2060f63aff86c6dad96a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:17.122907 kubelet[2833]: E0710 07:54:17.122774 2833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a20342af97a6e461b5399908408a21fd571fa7b10abf2060f63aff86c6dad96a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:17.123117 kubelet[2833]: E0710 07:54:17.123088 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a20342af97a6e461b5399908408a21fd571fa7b10abf2060f63aff86c6dad96a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cc4585dd9-xhp97" Jul 10 07:54:17.123195 kubelet[2833]: E0710 07:54:17.123123 2833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a20342af97a6e461b5399908408a21fd571fa7b10abf2060f63aff86c6dad96a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cc4585dd9-xhp97" Jul 10 07:54:17.123483 kubelet[2833]: E0710 07:54:17.123241 2833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cc4585dd9-xhp97_calico-apiserver(26a4a6c2-2cf2-4935-b48d-ff922d2b77ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cc4585dd9-xhp97_calico-apiserver(26a4a6c2-2cf2-4935-b48d-ff922d2b77ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a20342af97a6e461b5399908408a21fd571fa7b10abf2060f63aff86c6dad96a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cc4585dd9-xhp97" podUID="26a4a6c2-2cf2-4935-b48d-ff922d2b77ea" Jul 10 07:54:17.132344 containerd[1564]: time="2025-07-10T07:54:17.132279043Z" level=error msg="Failed to destroy network for sandbox \"cca1732f228f4c4a6220c30c72e13ec4ca9eaf9b70e1e247b5792c896c9c972e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:17.134000 containerd[1564]: time="2025-07-10T07:54:17.133926849Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5cc4585dd9-wzfv4,Uid:c7329202-feb1-456c-aeb4-f8cc1e636843,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cca1732f228f4c4a6220c30c72e13ec4ca9eaf9b70e1e247b5792c896c9c972e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:17.134429 kubelet[2833]: E0710 07:54:17.134383 2833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cca1732f228f4c4a6220c30c72e13ec4ca9eaf9b70e1e247b5792c896c9c972e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:17.134492 kubelet[2833]: E0710 07:54:17.134450 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cca1732f228f4c4a6220c30c72e13ec4ca9eaf9b70e1e247b5792c896c9c972e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cc4585dd9-wzfv4" Jul 10 07:54:17.134492 kubelet[2833]: E0710 07:54:17.134474 2833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cca1732f228f4c4a6220c30c72e13ec4ca9eaf9b70e1e247b5792c896c9c972e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cc4585dd9-wzfv4" Jul 10 07:54:17.134570 kubelet[2833]: E0710 07:54:17.134533 2833 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cc4585dd9-wzfv4_calico-apiserver(c7329202-feb1-456c-aeb4-f8cc1e636843)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cc4585dd9-wzfv4_calico-apiserver(c7329202-feb1-456c-aeb4-f8cc1e636843)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cca1732f228f4c4a6220c30c72e13ec4ca9eaf9b70e1e247b5792c896c9c972e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cc4585dd9-wzfv4" podUID="c7329202-feb1-456c-aeb4-f8cc1e636843" Jul 10 07:54:17.709247 systemd[1]: run-netns-cni\x2d13d2fc14\x2d8b5a\x2d0b18\x2dff28\x2d42671a022c9e.mount: Deactivated successfully. Jul 10 07:54:17.709506 systemd[1]: run-netns-cni\x2d0a8b9fb2\x2dbdea\x2dfd45\x2d5c15\x2dd58432a62f3c.mount: Deactivated successfully. Jul 10 07:54:27.555715 containerd[1564]: time="2025-07-10T07:54:27.555130758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-29tdv,Uid:dff6e30e-e51d-4033-b909-dfa260eb3714,Namespace:kube-system,Attempt:0,}" Jul 10 07:54:27.733122 containerd[1564]: time="2025-07-10T07:54:27.733051263Z" level=error msg="Failed to destroy network for sandbox \"e5498ef7693a8c2d355672c7fed932d0429dee71ee89a31184f8a852240119c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:27.738469 systemd[1]: run-netns-cni\x2d78025648\x2dfef5\x2df3ed\x2d598f\x2d32bd8706ea1b.mount: Deactivated successfully. 
Jul 10 07:54:27.741053 containerd[1564]: time="2025-07-10T07:54:27.740749830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-29tdv,Uid:dff6e30e-e51d-4033-b909-dfa260eb3714,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5498ef7693a8c2d355672c7fed932d0429dee71ee89a31184f8a852240119c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:27.741993 kubelet[2833]: E0710 07:54:27.741204 2833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5498ef7693a8c2d355672c7fed932d0429dee71ee89a31184f8a852240119c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:27.741993 kubelet[2833]: E0710 07:54:27.741387 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5498ef7693a8c2d355672c7fed932d0429dee71ee89a31184f8a852240119c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-29tdv" Jul 10 07:54:27.741993 kubelet[2833]: E0710 07:54:27.741449 2833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5498ef7693a8c2d355672c7fed932d0429dee71ee89a31184f8a852240119c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-29tdv" Jul 10 07:54:27.745254 kubelet[2833]: E0710 07:54:27.741882 2833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-29tdv_kube-system(dff6e30e-e51d-4033-b909-dfa260eb3714)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-29tdv_kube-system(dff6e30e-e51d-4033-b909-dfa260eb3714)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5498ef7693a8c2d355672c7fed932d0429dee71ee89a31184f8a852240119c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-29tdv" podUID="dff6e30e-e51d-4033-b909-dfa260eb3714" Jul 10 07:54:28.576521 containerd[1564]: time="2025-07-10T07:54:28.576422909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc4585dd9-xhp97,Uid:26a4a6c2-2cf2-4935-b48d-ff922d2b77ea,Namespace:calico-apiserver,Attempt:0,}" Jul 10 07:54:28.821086 containerd[1564]: time="2025-07-10T07:54:28.820634143Z" level=error msg="Failed to destroy network for sandbox \"bc3b35247f36d0f3c05d40b0969a7cff9101c7c51da88d84ee3edecbc7d7aaab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:28.824016 containerd[1564]: time="2025-07-10T07:54:28.822664516Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc4585dd9-xhp97,Uid:26a4a6c2-2cf2-4935-b48d-ff922d2b77ea,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc3b35247f36d0f3c05d40b0969a7cff9101c7c51da88d84ee3edecbc7d7aaab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:28.824130 kubelet[2833]: E0710 07:54:28.822942 2833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc3b35247f36d0f3c05d40b0969a7cff9101c7c51da88d84ee3edecbc7d7aaab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:28.824130 kubelet[2833]: E0710 07:54:28.823044 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc3b35247f36d0f3c05d40b0969a7cff9101c7c51da88d84ee3edecbc7d7aaab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cc4585dd9-xhp97" Jul 10 07:54:28.824130 kubelet[2833]: E0710 07:54:28.823079 2833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc3b35247f36d0f3c05d40b0969a7cff9101c7c51da88d84ee3edecbc7d7aaab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cc4585dd9-xhp97" Jul 10 07:54:28.824530 kubelet[2833]: E0710 07:54:28.823131 2833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cc4585dd9-xhp97_calico-apiserver(26a4a6c2-2cf2-4935-b48d-ff922d2b77ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cc4585dd9-xhp97_calico-apiserver(26a4a6c2-2cf2-4935-b48d-ff922d2b77ea)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"bc3b35247f36d0f3c05d40b0969a7cff9101c7c51da88d84ee3edecbc7d7aaab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cc4585dd9-xhp97" podUID="26a4a6c2-2cf2-4935-b48d-ff922d2b77ea" Jul 10 07:54:28.828224 systemd[1]: run-netns-cni\x2d2c840829\x2d6cc9\x2d5d49\x2df57b\x2d6cc0d1d93f90.mount: Deactivated successfully. Jul 10 07:54:29.539709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount325078718.mount: Deactivated successfully. Jul 10 07:54:29.554297 containerd[1564]: time="2025-07-10T07:54:29.554227574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc4585dd9-wzfv4,Uid:c7329202-feb1-456c-aeb4-f8cc1e636843,Namespace:calico-apiserver,Attempt:0,}" Jul 10 07:54:29.554506 containerd[1564]: time="2025-07-10T07:54:29.554450414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b9974968-qxv8n,Uid:a2d28237-8db2-489c-8a44-4db46a0ad1fd,Namespace:calico-system,Attempt:0,}" Jul 10 07:54:29.555167 containerd[1564]: time="2025-07-10T07:54:29.555127859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w9d5t,Uid:81234764-a63b-4c8c-8451-6099c0eb34d1,Namespace:kube-system,Attempt:0,}" Jul 10 07:54:29.833995 containerd[1564]: time="2025-07-10T07:54:29.832666553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 10 07:54:29.833995 containerd[1564]: time="2025-07-10T07:54:29.833685822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:54:29.837488 containerd[1564]: time="2025-07-10T07:54:29.837440821Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:54:29.838451 containerd[1564]: time="2025-07-10T07:54:29.838401660Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 13.98424574s" Jul 10 07:54:29.838578 containerd[1564]: time="2025-07-10T07:54:29.838560599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 10 07:54:29.839210 containerd[1564]: time="2025-07-10T07:54:29.839183933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:54:29.876008 containerd[1564]: time="2025-07-10T07:54:29.875931425Z" level=info msg="CreateContainer within sandbox \"8ab10e788247828b6799c14bfc58611bff4888dc355282475d0554991619e1ff\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 10 07:54:29.910179 containerd[1564]: time="2025-07-10T07:54:29.910108017Z" level=info msg="Container 7bf39a6acdbaf5ee4727e57ad26763ce402aa84f84e0a8db905ad9e439bb8af5: CDI devices from CRI Config.CDIDevices: []" Jul 10 07:54:29.958937 containerd[1564]: time="2025-07-10T07:54:29.958877612Z" level=info msg="CreateContainer within sandbox \"8ab10e788247828b6799c14bfc58611bff4888dc355282475d0554991619e1ff\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7bf39a6acdbaf5ee4727e57ad26763ce402aa84f84e0a8db905ad9e439bb8af5\"" Jul 10 07:54:29.961338 containerd[1564]: time="2025-07-10T07:54:29.961304311Z" level=info msg="StartContainer for 
\"7bf39a6acdbaf5ee4727e57ad26763ce402aa84f84e0a8db905ad9e439bb8af5\"" Jul 10 07:54:29.968301 containerd[1564]: time="2025-07-10T07:54:29.968244467Z" level=info msg="connecting to shim 7bf39a6acdbaf5ee4727e57ad26763ce402aa84f84e0a8db905ad9e439bb8af5" address="unix:///run/containerd/s/3557534d67c0a5a7a5d5043935b5cdf3b9869598e5562430d8d53915dfdb63dc" protocol=ttrpc version=3 Jul 10 07:54:30.027140 containerd[1564]: time="2025-07-10T07:54:30.026977916Z" level=error msg="Failed to destroy network for sandbox \"216b4b141422334d00a32da3bdffa96facbfa7b0a456ecc4e0ac2e30a0206730\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:30.029797 containerd[1564]: time="2025-07-10T07:54:30.029674572Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w9d5t,Uid:81234764-a63b-4c8c-8451-6099c0eb34d1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"216b4b141422334d00a32da3bdffa96facbfa7b0a456ecc4e0ac2e30a0206730\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:30.030754 kubelet[2833]: E0710 07:54:30.030621 2833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"216b4b141422334d00a32da3bdffa96facbfa7b0a456ecc4e0ac2e30a0206730\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:30.032034 kubelet[2833]: E0710 07:54:30.030752 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"216b4b141422334d00a32da3bdffa96facbfa7b0a456ecc4e0ac2e30a0206730\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-w9d5t" Jul 10 07:54:30.032034 kubelet[2833]: E0710 07:54:30.030780 2833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"216b4b141422334d00a32da3bdffa96facbfa7b0a456ecc4e0ac2e30a0206730\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-w9d5t" Jul 10 07:54:30.032034 kubelet[2833]: E0710 07:54:30.030833 2833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-w9d5t_kube-system(81234764-a63b-4c8c-8451-6099c0eb34d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-w9d5t_kube-system(81234764-a63b-4c8c-8451-6099c0eb34d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"216b4b141422334d00a32da3bdffa96facbfa7b0a456ecc4e0ac2e30a0206730\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-w9d5t" podUID="81234764-a63b-4c8c-8451-6099c0eb34d1" Jul 10 07:54:30.039488 containerd[1564]: time="2025-07-10T07:54:30.039410650Z" level=error msg="Failed to destroy network for sandbox \"f6d355c3c097b4b8f51e1305a2380717bfc176ab86959912df9ddf17788ce337\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:30.042732 
containerd[1564]: time="2025-07-10T07:54:30.042615312Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b9974968-qxv8n,Uid:a2d28237-8db2-489c-8a44-4db46a0ad1fd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6d355c3c097b4b8f51e1305a2380717bfc176ab86959912df9ddf17788ce337\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:30.044827 kubelet[2833]: E0710 07:54:30.044534 2833 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6d355c3c097b4b8f51e1305a2380717bfc176ab86959912df9ddf17788ce337\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:30.044827 kubelet[2833]: E0710 07:54:30.044619 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6d355c3c097b4b8f51e1305a2380717bfc176ab86959912df9ddf17788ce337\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b9974968-qxv8n" Jul 10 07:54:30.044827 kubelet[2833]: E0710 07:54:30.044695 2833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6d355c3c097b4b8f51e1305a2380717bfc176ab86959912df9ddf17788ce337\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7b9974968-qxv8n" Jul 10 07:54:30.047081 kubelet[2833]: E0710 07:54:30.044769 2833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b9974968-qxv8n_calico-system(a2d28237-8db2-489c-8a44-4db46a0ad1fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b9974968-qxv8n_calico-system(a2d28237-8db2-489c-8a44-4db46a0ad1fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6d355c3c097b4b8f51e1305a2380717bfc176ab86959912df9ddf17788ce337\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b9974968-qxv8n" podUID="a2d28237-8db2-489c-8a44-4db46a0ad1fd" Jul 10 07:54:30.069981 containerd[1564]: time="2025-07-10T07:54:30.069887381Z" level=error msg="Failed to destroy network for sandbox \"b8b11b3054d71e36eac86ca019d8268ce5030ec1a77b153c4cb8eab2fbd392d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:30.072530 containerd[1564]: time="2025-07-10T07:54:30.072441349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc4585dd9-wzfv4,Uid:c7329202-feb1-456c-aeb4-f8cc1e636843,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8b11b3054d71e36eac86ca019d8268ce5030ec1a77b153c4cb8eab2fbd392d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:30.073824 kubelet[2833]: E0710 07:54:30.072977 2833 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8b11b3054d71e36eac86ca019d8268ce5030ec1a77b153c4cb8eab2fbd392d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 07:54:30.073914 kubelet[2833]: E0710 07:54:30.073844 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8b11b3054d71e36eac86ca019d8268ce5030ec1a77b153c4cb8eab2fbd392d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cc4585dd9-wzfv4" Jul 10 07:54:30.073914 kubelet[2833]: E0710 07:54:30.073870 2833 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8b11b3054d71e36eac86ca019d8268ce5030ec1a77b153c4cb8eab2fbd392d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cc4585dd9-wzfv4" Jul 10 07:54:30.074854 kubelet[2833]: E0710 07:54:30.073931 2833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cc4585dd9-wzfv4_calico-apiserver(c7329202-feb1-456c-aeb4-f8cc1e636843)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cc4585dd9-wzfv4_calico-apiserver(c7329202-feb1-456c-aeb4-f8cc1e636843)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8b11b3054d71e36eac86ca019d8268ce5030ec1a77b153c4cb8eab2fbd392d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cc4585dd9-wzfv4" podUID="c7329202-feb1-456c-aeb4-f8cc1e636843" Jul 10 07:54:30.080184 systemd[1]: Started cri-containerd-7bf39a6acdbaf5ee4727e57ad26763ce402aa84f84e0a8db905ad9e439bb8af5.scope - libcontainer container 7bf39a6acdbaf5ee4727e57ad26763ce402aa84f84e0a8db905ad9e439bb8af5. Jul 10 07:54:30.142907 containerd[1564]: time="2025-07-10T07:54:30.141492424Z" level=info msg="StartContainer for \"7bf39a6acdbaf5ee4727e57ad26763ce402aa84f84e0a8db905ad9e439bb8af5\" returns successfully" Jul 10 07:54:30.306312 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 10 07:54:30.306523 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 10 07:54:30.556075 containerd[1564]: time="2025-07-10T07:54:30.555227643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pwddw,Uid:6b536d42-7d30-4147-a6e2-348f9b0f4c7a,Namespace:calico-system,Attempt:0,}" Jul 10 07:54:30.556454 containerd[1564]: time="2025-07-10T07:54:30.556406903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-c7jh9,Uid:59d7f791-108d-48e9-b7b6-8c13bd0cf25b,Namespace:calico-system,Attempt:0,}" Jul 10 07:54:30.633943 kubelet[2833]: I0710 07:54:30.633885 2833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1d8d082c-2971-44ff-b422-ada7355b9814-whisker-backend-key-pair\") pod \"1d8d082c-2971-44ff-b422-ada7355b9814\" (UID: \"1d8d082c-2971-44ff-b422-ada7355b9814\") " Jul 10 07:54:30.633943 kubelet[2833]: I0710 07:54:30.633939 2833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4x2p\" (UniqueName: \"kubernetes.io/projected/1d8d082c-2971-44ff-b422-ada7355b9814-kube-api-access-d4x2p\") pod \"1d8d082c-2971-44ff-b422-ada7355b9814\" (UID: 
\"1d8d082c-2971-44ff-b422-ada7355b9814\") " Jul 10 07:54:30.636533 kubelet[2833]: I0710 07:54:30.635178 2833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d8d082c-2971-44ff-b422-ada7355b9814-whisker-ca-bundle\") pod \"1d8d082c-2971-44ff-b422-ada7355b9814\" (UID: \"1d8d082c-2971-44ff-b422-ada7355b9814\") " Jul 10 07:54:30.636533 kubelet[2833]: I0710 07:54:30.635806 2833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d8d082c-2971-44ff-b422-ada7355b9814-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1d8d082c-2971-44ff-b422-ada7355b9814" (UID: "1d8d082c-2971-44ff-b422-ada7355b9814"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 07:54:30.654382 kubelet[2833]: I0710 07:54:30.653275 2833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8d082c-2971-44ff-b422-ada7355b9814-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1d8d082c-2971-44ff-b422-ada7355b9814" (UID: "1d8d082c-2971-44ff-b422-ada7355b9814"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 10 07:54:30.654382 kubelet[2833]: I0710 07:54:30.653821 2833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d8d082c-2971-44ff-b422-ada7355b9814-kube-api-access-d4x2p" (OuterVolumeSpecName: "kube-api-access-d4x2p") pod "1d8d082c-2971-44ff-b422-ada7355b9814" (UID: "1d8d082c-2971-44ff-b422-ada7355b9814"). InnerVolumeSpecName "kube-api-access-d4x2p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 07:54:30.737874 kubelet[2833]: I0710 07:54:30.737083 2833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4x2p\" (UniqueName: \"kubernetes.io/projected/1d8d082c-2971-44ff-b422-ada7355b9814-kube-api-access-d4x2p\") on node \"ci-4391-0-0-n-fdb14ef6d8.novalocal\" DevicePath \"\"" Jul 10 07:54:30.737874 kubelet[2833]: I0710 07:54:30.737148 2833 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1d8d082c-2971-44ff-b422-ada7355b9814-whisker-backend-key-pair\") on node \"ci-4391-0-0-n-fdb14ef6d8.novalocal\" DevicePath \"\"" Jul 10 07:54:30.737874 kubelet[2833]: I0710 07:54:30.737164 2833 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d8d082c-2971-44ff-b422-ada7355b9814-whisker-ca-bundle\") on node \"ci-4391-0-0-n-fdb14ef6d8.novalocal\" DevicePath \"\"" Jul 10 07:54:30.747924 systemd[1]: run-netns-cni\x2d7c78d05a\x2d5d6b\x2d0614\x2d1b47\x2da7b04209e0f5.mount: Deactivated successfully. Jul 10 07:54:30.749104 systemd[1]: run-netns-cni\x2df6db2111\x2d2858\x2dfa39\x2d1216\x2d52e7f3b80f4a.mount: Deactivated successfully. Jul 10 07:54:30.749181 systemd[1]: run-netns-cni\x2d4ce4cd74\x2d2904\x2d4dec\x2d4be5\x2d0509613cd921.mount: Deactivated successfully. Jul 10 07:54:30.749274 systemd[1]: var-lib-kubelet-pods-1d8d082c\x2d2971\x2d44ff\x2db422\x2dada7355b9814-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 10 07:54:30.749366 systemd[1]: var-lib-kubelet-pods-1d8d082c\x2d2971\x2d44ff\x2db422\x2dada7355b9814-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd4x2p.mount: Deactivated successfully. Jul 10 07:54:30.978727 systemd[1]: Removed slice kubepods-besteffort-pod1d8d082c_2971_44ff_b422_ada7355b9814.slice - libcontainer container kubepods-besteffort-pod1d8d082c_2971_44ff_b422_ada7355b9814.slice. 
Jul 10 07:54:30.992929 kubelet[2833]: I0710 07:54:30.992825 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gnx7g" podStartSLOduration=2.348889679 podStartE2EDuration="34.99276594s" podCreationTimestamp="2025-07-10 07:53:56 +0000 UTC" firstStartedPulling="2025-07-10 07:53:57.197220744 +0000 UTC m=+18.908242812" lastFinishedPulling="2025-07-10 07:54:29.841097004 +0000 UTC m=+51.552119073" observedRunningTime="2025-07-10 07:54:30.992186189 +0000 UTC m=+52.703208257" watchObservedRunningTime="2025-07-10 07:54:30.99276594 +0000 UTC m=+52.703788008" Jul 10 07:54:31.080515 systemd-networkd[1452]: cali3cf2d4d343c: Link UP Jul 10 07:54:31.080786 systemd-networkd[1452]: cali3cf2d4d343c: Gained carrier Jul 10 07:54:31.119566 containerd[1564]: 2025-07-10 07:54:30.610 [INFO][4042] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 07:54:31.119566 containerd[1564]: 2025-07-10 07:54:30.718 [INFO][4042] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-goldmane--58fd7646b9--c7jh9-eth0 goldmane-58fd7646b9- calico-system 59d7f791-108d-48e9-b7b6-8c13bd0cf25b 855 0 2025-07-10 07:53:56 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4391-0-0-n-fdb14ef6d8.novalocal goldmane-58fd7646b9-c7jh9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3cf2d4d343c [] [] }} ContainerID="dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538" Namespace="calico-system" Pod="goldmane-58fd7646b9-c7jh9" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-goldmane--58fd7646b9--c7jh9-" Jul 10 07:54:31.119566 containerd[1564]: 2025-07-10 07:54:30.719 [INFO][4042] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538" Namespace="calico-system" Pod="goldmane-58fd7646b9-c7jh9" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-goldmane--58fd7646b9--c7jh9-eth0" Jul 10 07:54:31.119566 containerd[1564]: 2025-07-10 07:54:30.867 [INFO][4067] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538" HandleID="k8s-pod-network.dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538" Workload="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-goldmane--58fd7646b9--c7jh9-eth0" Jul 10 07:54:31.121324 containerd[1564]: 2025-07-10 07:54:30.868 [INFO][4067] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538" HandleID="k8s-pod-network.dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538" Workload="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-goldmane--58fd7646b9--c7jh9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5a20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4391-0-0-n-fdb14ef6d8.novalocal", "pod":"goldmane-58fd7646b9-c7jh9", "timestamp":"2025-07-10 07:54:30.867377852 +0000 UTC"}, Hostname:"ci-4391-0-0-n-fdb14ef6d8.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 07:54:31.121324 containerd[1564]: 2025-07-10 07:54:30.868 [INFO][4067] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 07:54:31.121324 containerd[1564]: 2025-07-10 07:54:30.868 [INFO][4067] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 07:54:31.121324 containerd[1564]: 2025-07-10 07:54:30.869 [INFO][4067] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4391-0-0-n-fdb14ef6d8.novalocal' Jul 10 07:54:31.121324 containerd[1564]: 2025-07-10 07:54:30.885 [INFO][4067] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.121324 containerd[1564]: 2025-07-10 07:54:30.894 [INFO][4067] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.121324 containerd[1564]: 2025-07-10 07:54:30.907 [INFO][4067] ipam/ipam.go 511: Trying affinity for 192.168.83.0/26 host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.121324 containerd[1564]: 2025-07-10 07:54:30.911 [INFO][4067] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.0/26 host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.121324 containerd[1564]: 2025-07-10 07:54:30.914 [INFO][4067] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.0/26 host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.121627 containerd[1564]: 2025-07-10 07:54:30.914 [INFO][4067] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.83.0/26 handle="k8s-pod-network.dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.121627 containerd[1564]: 2025-07-10 07:54:30.916 [INFO][4067] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538 Jul 10 07:54:31.121627 containerd[1564]: 2025-07-10 07:54:30.930 [INFO][4067] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.83.0/26 handle="k8s-pod-network.dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.121709 
containerd[1564]: 2025-07-10 07:54:30.944 [ERROR][4067] ipam/customresource.go 184: Error updating resource Key=IPAMBlock(192-168-83-0-26) Name="192-168-83-0-26" Resource="IPAMBlocks" Value=&v3.IPAMBlock{TypeMeta:v1.TypeMeta{Kind:"IPAMBlock", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"192-168-83-0-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.IPAMBlockSpec{CIDR:"192.168.83.0/26", Affinity:(*string)(0xc000332b40), Allocations:[]*int{(*int)(0xc0005fd338), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil)}, Unallocated:[]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63}, 
Attributes:[]v3.AllocationAttribute{v3.AllocationAttribute{AttrPrimary:(*string)(0xc0002d5a20), AttrSecondary:map[string]string{"namespace":"calico-system", "node":"ci-4391-0-0-n-fdb14ef6d8.novalocal", "pod":"goldmane-58fd7646b9-c7jh9", "timestamp":"2025-07-10 07:54:30.867377852 +0000 UTC"}}}, SequenceNumber:0x1850d4a77c1f8e17, SequenceNumberForAllocation:map[string]uint64{"0":0x1850d4a77c1f8e16}, Deleted:false, DeprecatedStrictAffinity:false}} error=Operation cannot be fulfilled on ipamblocks.crd.projectcalico.org "192-168-83-0-26": the object has been modified; please apply your changes to the latest version and try again Jul 10 07:54:31.121709 containerd[1564]: 2025-07-10 07:54:30.944 [INFO][4067] ipam/ipam.go 1247: Failed to update block block=192.168.83.0/26 error=update conflict: IPAMBlock(192-168-83-0-26) handle="k8s-pod-network.dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.121709 containerd[1564]: 2025-07-10 07:54:31.020 [INFO][4067] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.83.0/26 handle="k8s-pod-network.dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.121709 containerd[1564]: 2025-07-10 07:54:31.025 [INFO][4067] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538 Jul 10 07:54:31.121709 containerd[1564]: 2025-07-10 07:54:31.040 [INFO][4067] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.83.0/26 handle="k8s-pod-network.dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.121709 containerd[1564]: 2025-07-10 07:54:31.053 [INFO][4067] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.83.1/26] block=192.168.83.0/26 handle="k8s-pod-network.dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538" 
host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.121709 containerd[1564]: 2025-07-10 07:54:31.053 [INFO][4067] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.1/26] handle="k8s-pod-network.dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.121709 containerd[1564]: 2025-07-10 07:54:31.053 [INFO][4067] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 07:54:31.121709 containerd[1564]: 2025-07-10 07:54:31.053 [INFO][4067] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.1/26] IPv6=[] ContainerID="dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538" HandleID="k8s-pod-network.dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538" Workload="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-goldmane--58fd7646b9--c7jh9-eth0" Jul 10 07:54:31.123845 containerd[1564]: 2025-07-10 07:54:31.060 [INFO][4042] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538" Namespace="calico-system" Pod="goldmane-58fd7646b9-c7jh9" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-goldmane--58fd7646b9--c7jh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-goldmane--58fd7646b9--c7jh9-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"59d7f791-108d-48e9-b7b6-8c13bd0cf25b", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 7, 53, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-fdb14ef6d8.novalocal", ContainerID:"", Pod:"goldmane-58fd7646b9-c7jh9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.83.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3cf2d4d343c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 07:54:31.123845 containerd[1564]: 2025-07-10 07:54:31.060 [INFO][4042] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.1/32] ContainerID="dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538" Namespace="calico-system" Pod="goldmane-58fd7646b9-c7jh9" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-goldmane--58fd7646b9--c7jh9-eth0" Jul 10 07:54:31.123845 containerd[1564]: 2025-07-10 07:54:31.060 [INFO][4042] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3cf2d4d343c ContainerID="dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538" Namespace="calico-system" Pod="goldmane-58fd7646b9-c7jh9" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-goldmane--58fd7646b9--c7jh9-eth0" Jul 10 07:54:31.123845 containerd[1564]: 2025-07-10 07:54:31.084 [INFO][4042] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538" Namespace="calico-system" Pod="goldmane-58fd7646b9-c7jh9" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-goldmane--58fd7646b9--c7jh9-eth0" Jul 10 07:54:31.123845 containerd[1564]: 2025-07-10 07:54:31.087 [INFO][4042] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538" Namespace="calico-system" Pod="goldmane-58fd7646b9-c7jh9" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-goldmane--58fd7646b9--c7jh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-goldmane--58fd7646b9--c7jh9-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"59d7f791-108d-48e9-b7b6-8c13bd0cf25b", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 7, 53, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-fdb14ef6d8.novalocal", ContainerID:"dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538", Pod:"goldmane-58fd7646b9-c7jh9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.83.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3cf2d4d343c", MAC:"1e:b8:c8:47:7d:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 07:54:31.123845 containerd[1564]: 2025-07-10 07:54:31.115 [INFO][4042] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538" Namespace="calico-system" Pod="goldmane-58fd7646b9-c7jh9" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-goldmane--58fd7646b9--c7jh9-eth0" Jul 10 07:54:31.143449 systemd[1]: Created slice kubepods-besteffort-pod1b34bed9_c30c_47e6_b816_66772f55d778.slice - libcontainer container kubepods-besteffort-pod1b34bed9_c30c_47e6_b816_66772f55d778.slice. Jul 10 07:54:31.208669 containerd[1564]: time="2025-07-10T07:54:31.208581257Z" level=info msg="connecting to shim dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538" address="unix:///run/containerd/s/6fc4ced6031340f85a39760fff7963e0a27a92d2214b95fb366042bd61f8c510" namespace=k8s.io protocol=ttrpc version=3 Jul 10 07:54:31.242268 systemd-networkd[1452]: cali591694f5faf: Link UP Jul 10 07:54:31.246373 kubelet[2833]: I0710 07:54:31.243136 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1b34bed9-c30c-47e6-b816-66772f55d778-whisker-backend-key-pair\") pod \"whisker-6c7cdc94b5-gnt27\" (UID: \"1b34bed9-c30c-47e6-b816-66772f55d778\") " pod="calico-system/whisker-6c7cdc94b5-gnt27" Jul 10 07:54:31.246373 kubelet[2833]: I0710 07:54:31.243198 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b34bed9-c30c-47e6-b816-66772f55d778-whisker-ca-bundle\") pod \"whisker-6c7cdc94b5-gnt27\" (UID: \"1b34bed9-c30c-47e6-b816-66772f55d778\") " pod="calico-system/whisker-6c7cdc94b5-gnt27" Jul 10 07:54:31.246373 kubelet[2833]: I0710 07:54:31.243226 2833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xrhr\" (UniqueName: \"kubernetes.io/projected/1b34bed9-c30c-47e6-b816-66772f55d778-kube-api-access-9xrhr\") pod \"whisker-6c7cdc94b5-gnt27\" (UID: 
\"1b34bed9-c30c-47e6-b816-66772f55d778\") " pod="calico-system/whisker-6c7cdc94b5-gnt27" Jul 10 07:54:31.248402 systemd-networkd[1452]: cali591694f5faf: Gained carrier Jul 10 07:54:31.284544 systemd[1]: Started cri-containerd-dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538.scope - libcontainer container dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538. Jul 10 07:54:31.287680 containerd[1564]: 2025-07-10 07:54:30.669 [INFO][4046] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 07:54:31.287680 containerd[1564]: 2025-07-10 07:54:30.719 [INFO][4046] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-csi--node--driver--pwddw-eth0 csi-node-driver- calico-system 6b536d42-7d30-4147-a6e2-348f9b0f4c7a 697 0 2025-07-10 07:53:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4391-0-0-n-fdb14ef6d8.novalocal csi-node-driver-pwddw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali591694f5faf [] [] }} ContainerID="46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491" Namespace="calico-system" Pod="csi-node-driver-pwddw" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-csi--node--driver--pwddw-" Jul 10 07:54:31.287680 containerd[1564]: 2025-07-10 07:54:30.719 [INFO][4046] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491" Namespace="calico-system" Pod="csi-node-driver-pwddw" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-csi--node--driver--pwddw-eth0" Jul 10 07:54:31.287680 containerd[1564]: 2025-07-10 
07:54:30.869 [INFO][4069] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491" HandleID="k8s-pod-network.46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491" Workload="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-csi--node--driver--pwddw-eth0" Jul 10 07:54:31.287680 containerd[1564]: 2025-07-10 07:54:30.869 [INFO][4069] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491" HandleID="k8s-pod-network.46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491" Workload="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-csi--node--driver--pwddw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000379cb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4391-0-0-n-fdb14ef6d8.novalocal", "pod":"csi-node-driver-pwddw", "timestamp":"2025-07-10 07:54:30.869471744 +0000 UTC"}, Hostname:"ci-4391-0-0-n-fdb14ef6d8.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 07:54:31.287680 containerd[1564]: 2025-07-10 07:54:30.869 [INFO][4069] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 07:54:31.287680 containerd[1564]: 2025-07-10 07:54:31.054 [INFO][4069] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 07:54:31.287680 containerd[1564]: 2025-07-10 07:54:31.054 [INFO][4069] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4391-0-0-n-fdb14ef6d8.novalocal' Jul 10 07:54:31.287680 containerd[1564]: 2025-07-10 07:54:31.091 [INFO][4069] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.287680 containerd[1564]: 2025-07-10 07:54:31.126 [INFO][4069] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.287680 containerd[1564]: 2025-07-10 07:54:31.174 [INFO][4069] ipam/ipam.go 511: Trying affinity for 192.168.83.0/26 host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.287680 containerd[1564]: 2025-07-10 07:54:31.181 [INFO][4069] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.0/26 host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.287680 containerd[1564]: 2025-07-10 07:54:31.187 [INFO][4069] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.0/26 host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.287680 containerd[1564]: 2025-07-10 07:54:31.187 [INFO][4069] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.83.0/26 handle="k8s-pod-network.46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.287680 containerd[1564]: 2025-07-10 07:54:31.194 [INFO][4069] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491 Jul 10 07:54:31.287680 containerd[1564]: 2025-07-10 07:54:31.204 [INFO][4069] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.83.0/26 handle="k8s-pod-network.46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.287680 
containerd[1564]: 2025-07-10 07:54:31.215 [INFO][4069] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.83.2/26] block=192.168.83.0/26 handle="k8s-pod-network.46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.287680 containerd[1564]: 2025-07-10 07:54:31.217 [INFO][4069] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.2/26] handle="k8s-pod-network.46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.287680 containerd[1564]: 2025-07-10 07:54:31.218 [INFO][4069] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 07:54:31.287680 containerd[1564]: 2025-07-10 07:54:31.220 [INFO][4069] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.2/26] IPv6=[] ContainerID="46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491" HandleID="k8s-pod-network.46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491" Workload="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-csi--node--driver--pwddw-eth0" Jul 10 07:54:31.288831 containerd[1564]: 2025-07-10 07:54:31.229 [INFO][4046] cni-plugin/k8s.go 418: Populated endpoint ContainerID="46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491" Namespace="calico-system" Pod="csi-node-driver-pwddw" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-csi--node--driver--pwddw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-csi--node--driver--pwddw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6b536d42-7d30-4147-a6e2-348f9b0f4c7a", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 7, 53, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-fdb14ef6d8.novalocal", ContainerID:"", Pod:"csi-node-driver-pwddw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali591694f5faf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 07:54:31.288831 containerd[1564]: 2025-07-10 07:54:31.232 [INFO][4046] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.2/32] ContainerID="46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491" Namespace="calico-system" Pod="csi-node-driver-pwddw" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-csi--node--driver--pwddw-eth0" Jul 10 07:54:31.288831 containerd[1564]: 2025-07-10 07:54:31.232 [INFO][4046] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali591694f5faf ContainerID="46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491" Namespace="calico-system" Pod="csi-node-driver-pwddw" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-csi--node--driver--pwddw-eth0" Jul 10 07:54:31.288831 containerd[1564]: 2025-07-10 07:54:31.250 [INFO][4046] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491" 
Namespace="calico-system" Pod="csi-node-driver-pwddw" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-csi--node--driver--pwddw-eth0" Jul 10 07:54:31.288831 containerd[1564]: 2025-07-10 07:54:31.251 [INFO][4046] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491" Namespace="calico-system" Pod="csi-node-driver-pwddw" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-csi--node--driver--pwddw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-csi--node--driver--pwddw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6b536d42-7d30-4147-a6e2-348f9b0f4c7a", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 7, 53, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-fdb14ef6d8.novalocal", ContainerID:"46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491", Pod:"csi-node-driver-pwddw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, 
InterfaceName:"cali591694f5faf", MAC:"d2:b9:37:06:11:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 07:54:31.288831 containerd[1564]: 2025-07-10 07:54:31.280 [INFO][4046] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491" Namespace="calico-system" Pod="csi-node-driver-pwddw" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-csi--node--driver--pwddw-eth0" Jul 10 07:54:31.337568 containerd[1564]: time="2025-07-10T07:54:31.337501011Z" level=info msg="connecting to shim 46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491" address="unix:///run/containerd/s/b82374878b18353f2caec4c1af2a8e0bc89b1da1408146a03a73559b69f1aef6" namespace=k8s.io protocol=ttrpc version=3 Jul 10 07:54:31.386607 containerd[1564]: time="2025-07-10T07:54:31.386538043Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7bf39a6acdbaf5ee4727e57ad26763ce402aa84f84e0a8db905ad9e439bb8af5\" id:\"06c454a7235fa3fc054496f3aba9edd0325e4f865ab09110db25572fb289d32d\" pid:4093 exit_status:1 exited_at:{seconds:1752134071 nanos:386047039}" Jul 10 07:54:31.412008 systemd[1]: Started cri-containerd-46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491.scope - libcontainer container 46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491. 
Jul 10 07:54:31.418950 containerd[1564]: time="2025-07-10T07:54:31.418902502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-c7jh9,Uid:59d7f791-108d-48e9-b7b6-8c13bd0cf25b,Namespace:calico-system,Attempt:0,} returns sandbox id \"dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538\"" Jul 10 07:54:31.421300 containerd[1564]: time="2025-07-10T07:54:31.421186942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 10 07:54:31.452749 containerd[1564]: time="2025-07-10T07:54:31.452697644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c7cdc94b5-gnt27,Uid:1b34bed9-c30c-47e6-b816-66772f55d778,Namespace:calico-system,Attempt:0,}" Jul 10 07:54:31.453426 containerd[1564]: time="2025-07-10T07:54:31.452916826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pwddw,Uid:6b536d42-7d30-4147-a6e2-348f9b0f4c7a,Namespace:calico-system,Attempt:0,} returns sandbox id \"46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491\"" Jul 10 07:54:31.617348 systemd-networkd[1452]: cali6c737ff91e4: Link UP Jul 10 07:54:31.617670 systemd-networkd[1452]: cali6c737ff91e4: Gained carrier Jul 10 07:54:31.643989 containerd[1564]: 2025-07-10 07:54:31.486 [INFO][4226] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 07:54:31.643989 containerd[1564]: 2025-07-10 07:54:31.502 [INFO][4226] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-whisker--6c7cdc94b5--gnt27-eth0 whisker-6c7cdc94b5- calico-system 1b34bed9-c30c-47e6-b816-66772f55d778 951 0 2025-07-10 07:54:31 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c7cdc94b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4391-0-0-n-fdb14ef6d8.novalocal whisker-6c7cdc94b5-gnt27 eth0 whisker 
[] [] [kns.calico-system ksa.calico-system.whisker] cali6c737ff91e4 [] [] }} ContainerID="8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957" Namespace="calico-system" Pod="whisker-6c7cdc94b5-gnt27" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-whisker--6c7cdc94b5--gnt27-" Jul 10 07:54:31.643989 containerd[1564]: 2025-07-10 07:54:31.502 [INFO][4226] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957" Namespace="calico-system" Pod="whisker-6c7cdc94b5-gnt27" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-whisker--6c7cdc94b5--gnt27-eth0" Jul 10 07:54:31.643989 containerd[1564]: 2025-07-10 07:54:31.533 [INFO][4237] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957" HandleID="k8s-pod-network.8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957" Workload="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-whisker--6c7cdc94b5--gnt27-eth0" Jul 10 07:54:31.643989 containerd[1564]: 2025-07-10 07:54:31.533 [INFO][4237] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957" HandleID="k8s-pod-network.8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957" Workload="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-whisker--6c7cdc94b5--gnt27-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4391-0-0-n-fdb14ef6d8.novalocal", "pod":"whisker-6c7cdc94b5-gnt27", "timestamp":"2025-07-10 07:54:31.533493575 +0000 UTC"}, Hostname:"ci-4391-0-0-n-fdb14ef6d8.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 07:54:31.643989 
containerd[1564]: 2025-07-10 07:54:31.533 [INFO][4237] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 07:54:31.643989 containerd[1564]: 2025-07-10 07:54:31.533 [INFO][4237] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 07:54:31.643989 containerd[1564]: 2025-07-10 07:54:31.533 [INFO][4237] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4391-0-0-n-fdb14ef6d8.novalocal' Jul 10 07:54:31.643989 containerd[1564]: 2025-07-10 07:54:31.547 [INFO][4237] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.643989 containerd[1564]: 2025-07-10 07:54:31.553 [INFO][4237] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.643989 containerd[1564]: 2025-07-10 07:54:31.573 [INFO][4237] ipam/ipam.go 511: Trying affinity for 192.168.83.0/26 host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.643989 containerd[1564]: 2025-07-10 07:54:31.578 [INFO][4237] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.0/26 host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.643989 containerd[1564]: 2025-07-10 07:54:31.581 [INFO][4237] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.0/26 host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.643989 containerd[1564]: 2025-07-10 07:54:31.581 [INFO][4237] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.83.0/26 handle="k8s-pod-network.8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.643989 containerd[1564]: 2025-07-10 07:54:31.583 [INFO][4237] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957 Jul 10 07:54:31.643989 containerd[1564]: 2025-07-10 07:54:31.589 
[INFO][4237] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.83.0/26 handle="k8s-pod-network.8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.643989 containerd[1564]: 2025-07-10 07:54:31.600 [INFO][4237] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.83.3/26] block=192.168.83.0/26 handle="k8s-pod-network.8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.643989 containerd[1564]: 2025-07-10 07:54:31.600 [INFO][4237] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.3/26] handle="k8s-pod-network.8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:31.643989 containerd[1564]: 2025-07-10 07:54:31.600 [INFO][4237] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 07:54:31.643989 containerd[1564]: 2025-07-10 07:54:31.600 [INFO][4237] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.3/26] IPv6=[] ContainerID="8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957" HandleID="k8s-pod-network.8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957" Workload="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-whisker--6c7cdc94b5--gnt27-eth0" Jul 10 07:54:31.645020 containerd[1564]: 2025-07-10 07:54:31.605 [INFO][4226] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957" Namespace="calico-system" Pod="whisker-6c7cdc94b5-gnt27" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-whisker--6c7cdc94b5--gnt27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-whisker--6c7cdc94b5--gnt27-eth0", GenerateName:"whisker-6c7cdc94b5-", Namespace:"calico-system", 
SelfLink:"", UID:"1b34bed9-c30c-47e6-b816-66772f55d778", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 7, 54, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c7cdc94b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-fdb14ef6d8.novalocal", ContainerID:"", Pod:"whisker-6c7cdc94b5-gnt27", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.83.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6c737ff91e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 07:54:31.645020 containerd[1564]: 2025-07-10 07:54:31.606 [INFO][4226] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.3/32] ContainerID="8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957" Namespace="calico-system" Pod="whisker-6c7cdc94b5-gnt27" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-whisker--6c7cdc94b5--gnt27-eth0" Jul 10 07:54:31.645020 containerd[1564]: 2025-07-10 07:54:31.606 [INFO][4226] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c737ff91e4 ContainerID="8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957" Namespace="calico-system" Pod="whisker-6c7cdc94b5-gnt27" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-whisker--6c7cdc94b5--gnt27-eth0" Jul 10 07:54:31.645020 containerd[1564]: 2025-07-10 07:54:31.618 [INFO][4226] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957" Namespace="calico-system" Pod="whisker-6c7cdc94b5-gnt27" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-whisker--6c7cdc94b5--gnt27-eth0" Jul 10 07:54:31.645020 containerd[1564]: 2025-07-10 07:54:31.620 [INFO][4226] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957" Namespace="calico-system" Pod="whisker-6c7cdc94b5-gnt27" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-whisker--6c7cdc94b5--gnt27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-whisker--6c7cdc94b5--gnt27-eth0", GenerateName:"whisker-6c7cdc94b5-", Namespace:"calico-system", SelfLink:"", UID:"1b34bed9-c30c-47e6-b816-66772f55d778", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 7, 54, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c7cdc94b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-fdb14ef6d8.novalocal", ContainerID:"8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957", Pod:"whisker-6c7cdc94b5-gnt27", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.83.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6c737ff91e4", MAC:"ea:25:4e:84:55:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 07:54:31.645020 containerd[1564]: 2025-07-10 07:54:31.641 [INFO][4226] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957" Namespace="calico-system" Pod="whisker-6c7cdc94b5-gnt27" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-whisker--6c7cdc94b5--gnt27-eth0" Jul 10 07:54:31.690127 containerd[1564]: time="2025-07-10T07:54:31.689945813Z" level=info msg="connecting to shim 8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957" address="unix:///run/containerd/s/ed819b482db7e6e256ac72e9d1b7bf740c79961a60746cccfa79485e3e9db552" namespace=k8s.io protocol=ttrpc version=3 Jul 10 07:54:31.729297 systemd[1]: Started cri-containerd-8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957.scope - libcontainer container 8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957. 
Jul 10 07:54:31.811776 containerd[1564]: time="2025-07-10T07:54:31.811717973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c7cdc94b5-gnt27,Uid:1b34bed9-c30c-47e6-b816-66772f55d778,Namespace:calico-system,Attempt:0,} returns sandbox id \"8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957\"" Jul 10 07:54:32.114412 containerd[1564]: time="2025-07-10T07:54:32.114296874Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7bf39a6acdbaf5ee4727e57ad26763ce402aa84f84e0a8db905ad9e439bb8af5\" id:\"8fb09738283c8a79e8340c7aeb65c564a66733407144e5a646e16dddd24a5c83\" pid:4307 exit_status:1 exited_at:{seconds:1752134072 nanos:113322850}" Jul 10 07:54:32.425982 systemd-networkd[1452]: cali591694f5faf: Gained IPv6LL Jul 10 07:54:32.552351 systemd-networkd[1452]: cali3cf2d4d343c: Gained IPv6LL Jul 10 07:54:32.567772 kubelet[2833]: I0710 07:54:32.567491 2833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d8d082c-2971-44ff-b422-ada7355b9814" path="/var/lib/kubelet/pods/1d8d082c-2971-44ff-b422-ada7355b9814/volumes" Jul 10 07:54:33.433912 systemd-networkd[1452]: vxlan.calico: Link UP Jul 10 07:54:33.433922 systemd-networkd[1452]: vxlan.calico: Gained carrier Jul 10 07:54:33.512120 systemd-networkd[1452]: cali6c737ff91e4: Gained IPv6LL Jul 10 07:54:34.536808 systemd-networkd[1452]: vxlan.calico: Gained IPv6LL Jul 10 07:54:38.135204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1000904417.mount: Deactivated successfully. 
Jul 10 07:54:38.559010 containerd[1564]: time="2025-07-10T07:54:38.558694266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-29tdv,Uid:dff6e30e-e51d-4033-b909-dfa260eb3714,Namespace:kube-system,Attempt:0,}" Jul 10 07:54:38.881367 systemd-networkd[1452]: calib3564c56e56: Link UP Jul 10 07:54:38.884747 systemd-networkd[1452]: calib3564c56e56: Gained carrier Jul 10 07:54:38.926499 containerd[1564]: 2025-07-10 07:54:38.691 [INFO][4522] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--29tdv-eth0 coredns-7c65d6cfc9- kube-system dff6e30e-e51d-4033-b909-dfa260eb3714 846 0 2025-07-10 07:53:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4391-0-0-n-fdb14ef6d8.novalocal coredns-7c65d6cfc9-29tdv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib3564c56e56 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198" Namespace="kube-system" Pod="coredns-7c65d6cfc9-29tdv" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--29tdv-" Jul 10 07:54:38.926499 containerd[1564]: 2025-07-10 07:54:38.693 [INFO][4522] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198" Namespace="kube-system" Pod="coredns-7c65d6cfc9-29tdv" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--29tdv-eth0" Jul 10 07:54:38.926499 containerd[1564]: 2025-07-10 07:54:38.783 [INFO][4538] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198" 
HandleID="k8s-pod-network.65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198" Workload="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--29tdv-eth0" Jul 10 07:54:38.926499 containerd[1564]: 2025-07-10 07:54:38.784 [INFO][4538] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198" HandleID="k8s-pod-network.65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198" Workload="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--29tdv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5940), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4391-0-0-n-fdb14ef6d8.novalocal", "pod":"coredns-7c65d6cfc9-29tdv", "timestamp":"2025-07-10 07:54:38.783197567 +0000 UTC"}, Hostname:"ci-4391-0-0-n-fdb14ef6d8.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 07:54:38.926499 containerd[1564]: 2025-07-10 07:54:38.784 [INFO][4538] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 07:54:38.926499 containerd[1564]: 2025-07-10 07:54:38.785 [INFO][4538] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 07:54:38.926499 containerd[1564]: 2025-07-10 07:54:38.785 [INFO][4538] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4391-0-0-n-fdb14ef6d8.novalocal' Jul 10 07:54:38.926499 containerd[1564]: 2025-07-10 07:54:38.801 [INFO][4538] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:38.926499 containerd[1564]: 2025-07-10 07:54:38.812 [INFO][4538] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:38.926499 containerd[1564]: 2025-07-10 07:54:38.821 [INFO][4538] ipam/ipam.go 511: Trying affinity for 192.168.83.0/26 host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:38.926499 containerd[1564]: 2025-07-10 07:54:38.826 [INFO][4538] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.0/26 host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:38.926499 containerd[1564]: 2025-07-10 07:54:38.831 [INFO][4538] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.0/26 host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:38.926499 containerd[1564]: 2025-07-10 07:54:38.831 [INFO][4538] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.83.0/26 handle="k8s-pod-network.65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:38.926499 containerd[1564]: 2025-07-10 07:54:38.836 [INFO][4538] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198 Jul 10 07:54:38.926499 containerd[1564]: 2025-07-10 07:54:38.845 [INFO][4538] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.83.0/26 handle="k8s-pod-network.65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:38.926499 
containerd[1564]: 2025-07-10 07:54:38.862 [INFO][4538] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.83.4/26] block=192.168.83.0/26 handle="k8s-pod-network.65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:38.926499 containerd[1564]: 2025-07-10 07:54:38.862 [INFO][4538] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.4/26] handle="k8s-pod-network.65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:38.926499 containerd[1564]: 2025-07-10 07:54:38.862 [INFO][4538] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 07:54:38.926499 containerd[1564]: 2025-07-10 07:54:38.862 [INFO][4538] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.4/26] IPv6=[] ContainerID="65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198" HandleID="k8s-pod-network.65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198" Workload="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--29tdv-eth0" Jul 10 07:54:38.928771 containerd[1564]: 2025-07-10 07:54:38.868 [INFO][4522] cni-plugin/k8s.go 418: Populated endpoint ContainerID="65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198" Namespace="kube-system" Pod="coredns-7c65d6cfc9-29tdv" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--29tdv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--29tdv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"dff6e30e-e51d-4033-b909-dfa260eb3714", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 7, 53, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-fdb14ef6d8.novalocal", ContainerID:"", Pod:"coredns-7c65d6cfc9-29tdv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib3564c56e56", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 07:54:38.928771 containerd[1564]: 2025-07-10 07:54:38.869 [INFO][4522] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.4/32] ContainerID="65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198" Namespace="kube-system" Pod="coredns-7c65d6cfc9-29tdv" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--29tdv-eth0" Jul 10 07:54:38.928771 containerd[1564]: 2025-07-10 07:54:38.869 [INFO][4522] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib3564c56e56 ContainerID="65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198" Namespace="kube-system" Pod="coredns-7c65d6cfc9-29tdv" 
WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--29tdv-eth0" Jul 10 07:54:38.928771 containerd[1564]: 2025-07-10 07:54:38.889 [INFO][4522] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198" Namespace="kube-system" Pod="coredns-7c65d6cfc9-29tdv" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--29tdv-eth0" Jul 10 07:54:38.928771 containerd[1564]: 2025-07-10 07:54:38.890 [INFO][4522] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198" Namespace="kube-system" Pod="coredns-7c65d6cfc9-29tdv" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--29tdv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--29tdv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"dff6e30e-e51d-4033-b909-dfa260eb3714", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 7, 53, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-fdb14ef6d8.novalocal", ContainerID:"65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198", Pod:"coredns-7c65d6cfc9-29tdv", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.83.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib3564c56e56", MAC:"72:24:c3:28:4d:1a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 07:54:38.928771 containerd[1564]: 2025-07-10 07:54:38.920 [INFO][4522] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198" Namespace="kube-system" Pod="coredns-7c65d6cfc9-29tdv" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--29tdv-eth0" Jul 10 07:54:39.023819 containerd[1564]: time="2025-07-10T07:54:39.023757812Z" level=info msg="connecting to shim 65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198" address="unix:///run/containerd/s/aee98baf66a7caa510240c444513d421273247543e48a0bf89d57b705bdae290" namespace=k8s.io protocol=ttrpc version=3 Jul 10 07:54:39.092273 systemd[1]: Started cri-containerd-65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198.scope - libcontainer container 65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198. 
Jul 10 07:54:39.179205 containerd[1564]: time="2025-07-10T07:54:39.179050722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-29tdv,Uid:dff6e30e-e51d-4033-b909-dfa260eb3714,Namespace:kube-system,Attempt:0,} returns sandbox id \"65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198\"" Jul 10 07:54:39.187984 containerd[1564]: time="2025-07-10T07:54:39.187722104Z" level=info msg="CreateContainer within sandbox \"65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 07:54:39.221213 containerd[1564]: time="2025-07-10T07:54:39.221087547Z" level=info msg="Container e335800fa3c42be579b520c995b3c5c94bf6fac3be3d997987156c1b7cc96f33: CDI devices from CRI Config.CDIDevices: []" Jul 10 07:54:39.225338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount664330818.mount: Deactivated successfully. Jul 10 07:54:39.239477 containerd[1564]: time="2025-07-10T07:54:39.239369327Z" level=info msg="CreateContainer within sandbox \"65b8e613a4be098c71dca2f247a6eb9fac2eaa3d8a1bdb4613ff7a3960ac4198\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e335800fa3c42be579b520c995b3c5c94bf6fac3be3d997987156c1b7cc96f33\"" Jul 10 07:54:39.242178 containerd[1564]: time="2025-07-10T07:54:39.242135992Z" level=info msg="StartContainer for \"e335800fa3c42be579b520c995b3c5c94bf6fac3be3d997987156c1b7cc96f33\"" Jul 10 07:54:39.244207 containerd[1564]: time="2025-07-10T07:54:39.244155270Z" level=info msg="connecting to shim e335800fa3c42be579b520c995b3c5c94bf6fac3be3d997987156c1b7cc96f33" address="unix:///run/containerd/s/aee98baf66a7caa510240c444513d421273247543e48a0bf89d57b705bdae290" protocol=ttrpc version=3 Jul 10 07:54:39.292276 systemd[1]: Started cri-containerd-e335800fa3c42be579b520c995b3c5c94bf6fac3be3d997987156c1b7cc96f33.scope - libcontainer container e335800fa3c42be579b520c995b3c5c94bf6fac3be3d997987156c1b7cc96f33. 
Jul 10 07:54:39.398121 containerd[1564]: time="2025-07-10T07:54:39.398070970Z" level=info msg="StartContainer for \"e335800fa3c42be579b520c995b3c5c94bf6fac3be3d997987156c1b7cc96f33\" returns successfully" Jul 10 07:54:39.693745 containerd[1564]: time="2025-07-10T07:54:39.693630215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:54:39.697163 containerd[1564]: time="2025-07-10T07:54:39.697107696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 10 07:54:39.700154 containerd[1564]: time="2025-07-10T07:54:39.700098131Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:54:39.705529 containerd[1564]: time="2025-07-10T07:54:39.705444749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:54:39.708019 containerd[1564]: time="2025-07-10T07:54:39.706591475Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 8.285359518s" Jul 10 07:54:39.708019 containerd[1564]: time="2025-07-10T07:54:39.706657961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 10 07:54:39.710199 containerd[1564]: time="2025-07-10T07:54:39.710122348Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 10 07:54:39.712729 containerd[1564]: time="2025-07-10T07:54:39.712677414Z" level=info msg="CreateContainer within sandbox \"dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 10 07:54:39.743003 containerd[1564]: time="2025-07-10T07:54:39.742237780Z" level=info msg="Container 480560a02e60c64b285809a09ce204e4540a99a2a1a3283cf939c3c56dc3b2ab: CDI devices from CRI Config.CDIDevices: []" Jul 10 07:54:39.772810 containerd[1564]: time="2025-07-10T07:54:39.772749285Z" level=info msg="CreateContainer within sandbox \"dc43df3cc10d4540e054dc974b52951bd960739e4927b6ef02db0f9016c4e538\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"480560a02e60c64b285809a09ce204e4540a99a2a1a3283cf939c3c56dc3b2ab\"" Jul 10 07:54:39.775280 containerd[1564]: time="2025-07-10T07:54:39.775100688Z" level=info msg="StartContainer for \"480560a02e60c64b285809a09ce204e4540a99a2a1a3283cf939c3c56dc3b2ab\"" Jul 10 07:54:39.779576 containerd[1564]: time="2025-07-10T07:54:39.779509141Z" level=info msg="connecting to shim 480560a02e60c64b285809a09ce204e4540a99a2a1a3283cf939c3c56dc3b2ab" address="unix:///run/containerd/s/6fc4ced6031340f85a39760fff7963e0a27a92d2214b95fb366042bd61f8c510" protocol=ttrpc version=3 Jul 10 07:54:39.832214 systemd[1]: Started cri-containerd-480560a02e60c64b285809a09ce204e4540a99a2a1a3283cf939c3c56dc3b2ab.scope - libcontainer container 480560a02e60c64b285809a09ce204e4540a99a2a1a3283cf939c3c56dc3b2ab. 
Jul 10 07:54:40.126327 containerd[1564]: time="2025-07-10T07:54:40.126148485Z" level=info msg="StartContainer for \"480560a02e60c64b285809a09ce204e4540a99a2a1a3283cf939c3c56dc3b2ab\" returns successfully" Jul 10 07:54:40.167421 kubelet[2833]: I0710 07:54:40.166804 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-29tdv" podStartSLOduration=58.166292896 podStartE2EDuration="58.166292896s" podCreationTimestamp="2025-07-10 07:53:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 07:54:40.165512117 +0000 UTC m=+61.876534185" watchObservedRunningTime="2025-07-10 07:54:40.166292896 +0000 UTC m=+61.877315014" Jul 10 07:54:40.558947 containerd[1564]: time="2025-07-10T07:54:40.558821358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc4585dd9-xhp97,Uid:26a4a6c2-2cf2-4935-b48d-ff922d2b77ea,Namespace:calico-apiserver,Attempt:0,}" Jul 10 07:54:40.681162 systemd-networkd[1452]: calib3564c56e56: Gained IPv6LL Jul 10 07:54:40.839553 systemd-networkd[1452]: calif1f2536badf: Link UP Jul 10 07:54:40.841309 systemd-networkd[1452]: calif1f2536badf: Gained carrier Jul 10 07:54:40.874091 containerd[1564]: 2025-07-10 07:54:40.721 [INFO][4685] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--xhp97-eth0 calico-apiserver-5cc4585dd9- calico-apiserver 26a4a6c2-2cf2-4935-b48d-ff922d2b77ea 851 0 2025-07-10 07:53:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cc4585dd9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4391-0-0-n-fdb14ef6d8.novalocal calico-apiserver-5cc4585dd9-xhp97 eth0 calico-apiserver [] [] 
[kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif1f2536badf [] [] }} ContainerID="a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4" Namespace="calico-apiserver" Pod="calico-apiserver-5cc4585dd9-xhp97" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--xhp97-" Jul 10 07:54:40.874091 containerd[1564]: 2025-07-10 07:54:40.721 [INFO][4685] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4" Namespace="calico-apiserver" Pod="calico-apiserver-5cc4585dd9-xhp97" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--xhp97-eth0" Jul 10 07:54:40.874091 containerd[1564]: 2025-07-10 07:54:40.785 [INFO][4697] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4" HandleID="k8s-pod-network.a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4" Workload="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--xhp97-eth0" Jul 10 07:54:40.874091 containerd[1564]: 2025-07-10 07:54:40.785 [INFO][4697] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4" HandleID="k8s-pod-network.a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4" Workload="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--xhp97-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000307930), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4391-0-0-n-fdb14ef6d8.novalocal", "pod":"calico-apiserver-5cc4585dd9-xhp97", "timestamp":"2025-07-10 07:54:40.785659222 +0000 UTC"}, Hostname:"ci-4391-0-0-n-fdb14ef6d8.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 07:54:40.874091 containerd[1564]: 2025-07-10 07:54:40.786 [INFO][4697] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 07:54:40.874091 containerd[1564]: 2025-07-10 07:54:40.786 [INFO][4697] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 07:54:40.874091 containerd[1564]: 2025-07-10 07:54:40.786 [INFO][4697] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4391-0-0-n-fdb14ef6d8.novalocal' Jul 10 07:54:40.874091 containerd[1564]: 2025-07-10 07:54:40.795 [INFO][4697] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:40.874091 containerd[1564]: 2025-07-10 07:54:40.801 [INFO][4697] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:40.874091 containerd[1564]: 2025-07-10 07:54:40.807 [INFO][4697] ipam/ipam.go 511: Trying affinity for 192.168.83.0/26 host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:40.874091 containerd[1564]: 2025-07-10 07:54:40.810 [INFO][4697] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.0/26 host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:40.874091 containerd[1564]: 2025-07-10 07:54:40.814 [INFO][4697] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.0/26 host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:40.874091 containerd[1564]: 2025-07-10 07:54:40.815 [INFO][4697] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.83.0/26 handle="k8s-pod-network.a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:40.874091 containerd[1564]: 2025-07-10 07:54:40.817 [INFO][4697] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4 Jul 10 07:54:40.874091 containerd[1564]: 2025-07-10 07:54:40.822 [INFO][4697] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.83.0/26 handle="k8s-pod-network.a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:40.874091 containerd[1564]: 2025-07-10 07:54:40.831 [INFO][4697] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.83.5/26] block=192.168.83.0/26 handle="k8s-pod-network.a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:40.874091 containerd[1564]: 2025-07-10 07:54:40.831 [INFO][4697] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.5/26] handle="k8s-pod-network.a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:40.874091 containerd[1564]: 2025-07-10 07:54:40.831 [INFO][4697] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 07:54:40.874091 containerd[1564]: 2025-07-10 07:54:40.831 [INFO][4697] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.5/26] IPv6=[] ContainerID="a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4" HandleID="k8s-pod-network.a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4" Workload="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--xhp97-eth0" Jul 10 07:54:40.876886 containerd[1564]: 2025-07-10 07:54:40.834 [INFO][4685] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4" Namespace="calico-apiserver" Pod="calico-apiserver-5cc4585dd9-xhp97" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--xhp97-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--xhp97-eth0", GenerateName:"calico-apiserver-5cc4585dd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"26a4a6c2-2cf2-4935-b48d-ff922d2b77ea", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 7, 53, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cc4585dd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-fdb14ef6d8.novalocal", ContainerID:"", Pod:"calico-apiserver-5cc4585dd9-xhp97", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1f2536badf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 07:54:40.876886 containerd[1564]: 2025-07-10 07:54:40.834 [INFO][4685] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.5/32] ContainerID="a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4" Namespace="calico-apiserver" Pod="calico-apiserver-5cc4585dd9-xhp97" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--xhp97-eth0" Jul 10 07:54:40.876886 containerd[1564]: 2025-07-10 07:54:40.834 [INFO][4685] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1f2536badf ContainerID="a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4" Namespace="calico-apiserver" Pod="calico-apiserver-5cc4585dd9-xhp97" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--xhp97-eth0" Jul 10 07:54:40.876886 containerd[1564]: 2025-07-10 07:54:40.843 [INFO][4685] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4" Namespace="calico-apiserver" Pod="calico-apiserver-5cc4585dd9-xhp97" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--xhp97-eth0" Jul 10 07:54:40.876886 containerd[1564]: 2025-07-10 07:54:40.844 [INFO][4685] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4" Namespace="calico-apiserver" Pod="calico-apiserver-5cc4585dd9-xhp97" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--xhp97-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--xhp97-eth0", GenerateName:"calico-apiserver-5cc4585dd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"26a4a6c2-2cf2-4935-b48d-ff922d2b77ea", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 7, 53, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cc4585dd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-fdb14ef6d8.novalocal", ContainerID:"a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4", Pod:"calico-apiserver-5cc4585dd9-xhp97", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1f2536badf", MAC:"de:ce:e2:3d:ad:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 07:54:40.876886 containerd[1564]: 2025-07-10 07:54:40.871 [INFO][4685] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4" Namespace="calico-apiserver" Pod="calico-apiserver-5cc4585dd9-xhp97" 
WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--xhp97-eth0" Jul 10 07:54:40.927155 containerd[1564]: time="2025-07-10T07:54:40.927054230Z" level=info msg="connecting to shim a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4" address="unix:///run/containerd/s/2021b36b1b099354c13e925de23806f4f4efcf8220ba7e8a680f869b311bc12e" namespace=k8s.io protocol=ttrpc version=3 Jul 10 07:54:40.962165 systemd[1]: Started cri-containerd-a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4.scope - libcontainer container a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4. Jul 10 07:54:41.018671 containerd[1564]: time="2025-07-10T07:54:41.018571649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc4585dd9-xhp97,Uid:26a4a6c2-2cf2-4935-b48d-ff922d2b77ea,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4\"" Jul 10 07:54:41.164361 kubelet[2833]: I0710 07:54:41.163028 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-c7jh9" podStartSLOduration=36.874748185 podStartE2EDuration="45.162997931s" podCreationTimestamp="2025-07-10 07:53:56 +0000 UTC" firstStartedPulling="2025-07-10 07:54:31.420432202 +0000 UTC m=+53.131454270" lastFinishedPulling="2025-07-10 07:54:39.708681928 +0000 UTC m=+61.419704016" observedRunningTime="2025-07-10 07:54:41.162112876 +0000 UTC m=+62.873134955" watchObservedRunningTime="2025-07-10 07:54:41.162997931 +0000 UTC m=+62.874019999" Jul 10 07:54:41.961211 systemd-networkd[1452]: calif1f2536badf: Gained IPv6LL Jul 10 07:54:42.235785 containerd[1564]: time="2025-07-10T07:54:42.235166951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:54:42.239819 containerd[1564]: time="2025-07-10T07:54:42.239786529Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 10 07:54:42.243684 containerd[1564]: time="2025-07-10T07:54:42.243653762Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:54:42.247378 containerd[1564]: time="2025-07-10T07:54:42.247254785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:54:42.248025 containerd[1564]: time="2025-07-10T07:54:42.247942358Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.537770217s" Jul 10 07:54:42.248025 containerd[1564]: time="2025-07-10T07:54:42.248025454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 10 07:54:42.253267 containerd[1564]: time="2025-07-10T07:54:42.252374032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 10 07:54:42.253437 containerd[1564]: time="2025-07-10T07:54:42.253409921Z" level=info msg="CreateContainer within sandbox \"46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 10 07:54:42.289030 containerd[1564]: time="2025-07-10T07:54:42.288184102Z" level=info msg="Container c9076d9aa0ce6a043b44499faa4a067c2983f8fcff6c4e8067f9220f2a9b2c76: CDI devices from CRI Config.CDIDevices: []" Jul 10 07:54:42.294633 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3082441418.mount: Deactivated successfully. Jul 10 07:54:42.320734 containerd[1564]: time="2025-07-10T07:54:42.320657315Z" level=info msg="CreateContainer within sandbox \"46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c9076d9aa0ce6a043b44499faa4a067c2983f8fcff6c4e8067f9220f2a9b2c76\"" Jul 10 07:54:42.322366 containerd[1564]: time="2025-07-10T07:54:42.322254859Z" level=info msg="StartContainer for \"c9076d9aa0ce6a043b44499faa4a067c2983f8fcff6c4e8067f9220f2a9b2c76\"" Jul 10 07:54:42.324465 containerd[1564]: time="2025-07-10T07:54:42.324420762Z" level=info msg="connecting to shim c9076d9aa0ce6a043b44499faa4a067c2983f8fcff6c4e8067f9220f2a9b2c76" address="unix:///run/containerd/s/b82374878b18353f2caec4c1af2a8e0bc89b1da1408146a03a73559b69f1aef6" protocol=ttrpc version=3 Jul 10 07:54:42.386150 systemd[1]: Started cri-containerd-c9076d9aa0ce6a043b44499faa4a067c2983f8fcff6c4e8067f9220f2a9b2c76.scope - libcontainer container c9076d9aa0ce6a043b44499faa4a067c2983f8fcff6c4e8067f9220f2a9b2c76. 
Jul 10 07:54:42.473775 containerd[1564]: time="2025-07-10T07:54:42.473631495Z" level=info msg="StartContainer for \"c9076d9aa0ce6a043b44499faa4a067c2983f8fcff6c4e8067f9220f2a9b2c76\" returns successfully" Jul 10 07:54:43.556239 containerd[1564]: time="2025-07-10T07:54:43.555931121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b9974968-qxv8n,Uid:a2d28237-8db2-489c-8a44-4db46a0ad1fd,Namespace:calico-system,Attempt:0,}" Jul 10 07:54:43.819800 systemd-networkd[1452]: calied89f484ed1: Link UP Jul 10 07:54:43.822097 systemd-networkd[1452]: calied89f484ed1: Gained carrier Jul 10 07:54:43.845873 containerd[1564]: 2025-07-10 07:54:43.696 [INFO][4801] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--kube--controllers--7b9974968--qxv8n-eth0 calico-kube-controllers-7b9974968- calico-system a2d28237-8db2-489c-8a44-4db46a0ad1fd 853 0 2025-07-10 07:53:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b9974968 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4391-0-0-n-fdb14ef6d8.novalocal calico-kube-controllers-7b9974968-qxv8n eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calied89f484ed1 [] [] }} ContainerID="3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa" Namespace="calico-system" Pod="calico-kube-controllers-7b9974968-qxv8n" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--kube--controllers--7b9974968--qxv8n-" Jul 10 07:54:43.845873 containerd[1564]: 2025-07-10 07:54:43.697 [INFO][4801] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa" Namespace="calico-system" 
Pod="calico-kube-controllers-7b9974968-qxv8n" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--kube--controllers--7b9974968--qxv8n-eth0" Jul 10 07:54:43.845873 containerd[1564]: 2025-07-10 07:54:43.753 [INFO][4813] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa" HandleID="k8s-pod-network.3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa" Workload="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--kube--controllers--7b9974968--qxv8n-eth0" Jul 10 07:54:43.845873 containerd[1564]: 2025-07-10 07:54:43.753 [INFO][4813] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa" HandleID="k8s-pod-network.3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa" Workload="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--kube--controllers--7b9974968--qxv8n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f850), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4391-0-0-n-fdb14ef6d8.novalocal", "pod":"calico-kube-controllers-7b9974968-qxv8n", "timestamp":"2025-07-10 07:54:43.753025186 +0000 UTC"}, Hostname:"ci-4391-0-0-n-fdb14ef6d8.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 07:54:43.845873 containerd[1564]: 2025-07-10 07:54:43.753 [INFO][4813] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 07:54:43.845873 containerd[1564]: 2025-07-10 07:54:43.753 [INFO][4813] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 07:54:43.845873 containerd[1564]: 2025-07-10 07:54:43.753 [INFO][4813] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4391-0-0-n-fdb14ef6d8.novalocal' Jul 10 07:54:43.845873 containerd[1564]: 2025-07-10 07:54:43.768 [INFO][4813] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:43.845873 containerd[1564]: 2025-07-10 07:54:43.775 [INFO][4813] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:43.845873 containerd[1564]: 2025-07-10 07:54:43.782 [INFO][4813] ipam/ipam.go 511: Trying affinity for 192.168.83.0/26 host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:43.845873 containerd[1564]: 2025-07-10 07:54:43.785 [INFO][4813] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.0/26 host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:43.845873 containerd[1564]: 2025-07-10 07:54:43.788 [INFO][4813] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.0/26 host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:43.845873 containerd[1564]: 2025-07-10 07:54:43.788 [INFO][4813] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.83.0/26 handle="k8s-pod-network.3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:43.845873 containerd[1564]: 2025-07-10 07:54:43.791 [INFO][4813] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa Jul 10 07:54:43.845873 containerd[1564]: 2025-07-10 07:54:43.802 [INFO][4813] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.83.0/26 handle="k8s-pod-network.3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:43.845873 
containerd[1564]: 2025-07-10 07:54:43.810 [INFO][4813] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.83.6/26] block=192.168.83.0/26 handle="k8s-pod-network.3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:43.845873 containerd[1564]: 2025-07-10 07:54:43.811 [INFO][4813] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.6/26] handle="k8s-pod-network.3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa" host="ci-4391-0-0-n-fdb14ef6d8.novalocal" Jul 10 07:54:43.845873 containerd[1564]: 2025-07-10 07:54:43.811 [INFO][4813] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 07:54:43.845873 containerd[1564]: 2025-07-10 07:54:43.811 [INFO][4813] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.6/26] IPv6=[] ContainerID="3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa" HandleID="k8s-pod-network.3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa" Workload="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--kube--controllers--7b9974968--qxv8n-eth0" Jul 10 07:54:43.849818 containerd[1564]: 2025-07-10 07:54:43.814 [INFO][4801] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa" Namespace="calico-system" Pod="calico-kube-controllers-7b9974968-qxv8n" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--kube--controllers--7b9974968--qxv8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--kube--controllers--7b9974968--qxv8n-eth0", GenerateName:"calico-kube-controllers-7b9974968-", Namespace:"calico-system", SelfLink:"", UID:"a2d28237-8db2-489c-8a44-4db46a0ad1fd", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 7, 53, 57, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b9974968", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-fdb14ef6d8.novalocal", ContainerID:"", Pod:"calico-kube-controllers-7b9974968-qxv8n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.83.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied89f484ed1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 07:54:43.849818 containerd[1564]: 2025-07-10 07:54:43.814 [INFO][4801] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.6/32] ContainerID="3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa" Namespace="calico-system" Pod="calico-kube-controllers-7b9974968-qxv8n" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--kube--controllers--7b9974968--qxv8n-eth0" Jul 10 07:54:43.849818 containerd[1564]: 2025-07-10 07:54:43.814 [INFO][4801] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied89f484ed1 ContainerID="3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa" Namespace="calico-system" Pod="calico-kube-controllers-7b9974968-qxv8n" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--kube--controllers--7b9974968--qxv8n-eth0" Jul 10 07:54:43.849818 containerd[1564]: 2025-07-10 07:54:43.821 [INFO][4801] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa" Namespace="calico-system" Pod="calico-kube-controllers-7b9974968-qxv8n" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--kube--controllers--7b9974968--qxv8n-eth0" Jul 10 07:54:43.849818 containerd[1564]: 2025-07-10 07:54:43.822 [INFO][4801] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa" Namespace="calico-system" Pod="calico-kube-controllers-7b9974968-qxv8n" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--kube--controllers--7b9974968--qxv8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--kube--controllers--7b9974968--qxv8n-eth0", GenerateName:"calico-kube-controllers-7b9974968-", Namespace:"calico-system", SelfLink:"", UID:"a2d28237-8db2-489c-8a44-4db46a0ad1fd", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 7, 53, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b9974968", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-fdb14ef6d8.novalocal", ContainerID:"3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa", Pod:"calico-kube-controllers-7b9974968-qxv8n", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.83.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied89f484ed1", MAC:"aa:ef:85:35:d4:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 07:54:43.849818 containerd[1564]: 2025-07-10 07:54:43.841 [INFO][4801] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa" Namespace="calico-system" Pod="calico-kube-controllers-7b9974968-qxv8n" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--kube--controllers--7b9974968--qxv8n-eth0" Jul 10 07:54:43.892289 containerd[1564]: time="2025-07-10T07:54:43.891946722Z" level=info msg="connecting to shim 3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa" address="unix:///run/containerd/s/3d5364e5ddb98d02ec2851a3f72bf1e79047bb19a750561d94c048e5277146d0" namespace=k8s.io protocol=ttrpc version=3 Jul 10 07:54:43.947245 systemd[1]: Started cri-containerd-3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa.scope - libcontainer container 3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa. 
Jul 10 07:54:44.009284 containerd[1564]: time="2025-07-10T07:54:44.009211310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b9974968-qxv8n,Uid:a2d28237-8db2-489c-8a44-4db46a0ad1fd,Namespace:calico-system,Attempt:0,} returns sandbox id \"3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa\""
Jul 10 07:54:44.557421 containerd[1564]: time="2025-07-10T07:54:44.557287861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc4585dd9-wzfv4,Uid:c7329202-feb1-456c-aeb4-f8cc1e636843,Namespace:calico-apiserver,Attempt:0,}"
Jul 10 07:54:44.559308 containerd[1564]: time="2025-07-10T07:54:44.559227799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w9d5t,Uid:81234764-a63b-4c8c-8451-6099c0eb34d1,Namespace:kube-system,Attempt:0,}"
Jul 10 07:54:44.830433 systemd-networkd[1452]: cali06c322f7c56: Link UP
Jul 10 07:54:44.836743 systemd-networkd[1452]: cali06c322f7c56: Gained carrier
Jul 10 07:54:44.877179 containerd[1564]: 2025-07-10 07:54:44.680 [INFO][4885] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--wzfv4-eth0 calico-apiserver-5cc4585dd9- calico-apiserver c7329202-feb1-456c-aeb4-f8cc1e636843 856 0 2025-07-10 07:53:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cc4585dd9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4391-0-0-n-fdb14ef6d8.novalocal calico-apiserver-5cc4585dd9-wzfv4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali06c322f7c56 [] [] }} ContainerID="da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693" Namespace="calico-apiserver" Pod="calico-apiserver-5cc4585dd9-wzfv4" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--wzfv4-"
Jul 10 07:54:44.877179 containerd[1564]: 2025-07-10 07:54:44.681 [INFO][4885] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693" Namespace="calico-apiserver" Pod="calico-apiserver-5cc4585dd9-wzfv4" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--wzfv4-eth0"
Jul 10 07:54:44.877179 containerd[1564]: 2025-07-10 07:54:44.729 [INFO][4916] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693" HandleID="k8s-pod-network.da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693" Workload="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--wzfv4-eth0"
Jul 10 07:54:44.877179 containerd[1564]: 2025-07-10 07:54:44.729 [INFO][4916] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693" HandleID="k8s-pod-network.da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693" Workload="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--wzfv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4391-0-0-n-fdb14ef6d8.novalocal", "pod":"calico-apiserver-5cc4585dd9-wzfv4", "timestamp":"2025-07-10 07:54:44.729511776 +0000 UTC"}, Hostname:"ci-4391-0-0-n-fdb14ef6d8.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 10 07:54:44.877179 containerd[1564]: 2025-07-10 07:54:44.729 [INFO][4916] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 10 07:54:44.877179 containerd[1564]: 2025-07-10 07:54:44.729 [INFO][4916] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 10 07:54:44.877179 containerd[1564]: 2025-07-10 07:54:44.729 [INFO][4916] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4391-0-0-n-fdb14ef6d8.novalocal'
Jul 10 07:54:44.877179 containerd[1564]: 2025-07-10 07:54:44.750 [INFO][4916] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693" host="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:54:44.877179 containerd[1564]: 2025-07-10 07:54:44.757 [INFO][4916] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:54:44.877179 containerd[1564]: 2025-07-10 07:54:44.768 [INFO][4916] ipam/ipam.go 511: Trying affinity for 192.168.83.0/26 host="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:54:44.877179 containerd[1564]: 2025-07-10 07:54:44.774 [INFO][4916] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.0/26 host="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:54:44.877179 containerd[1564]: 2025-07-10 07:54:44.781 [INFO][4916] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.0/26 host="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:54:44.877179 containerd[1564]: 2025-07-10 07:54:44.781 [INFO][4916] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.83.0/26 handle="k8s-pod-network.da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693" host="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:54:44.877179 containerd[1564]: 2025-07-10 07:54:44.784 [INFO][4916] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693
Jul 10 07:54:44.877179 containerd[1564]: 2025-07-10 07:54:44.798 [INFO][4916] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.83.0/26 handle="k8s-pod-network.da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693" host="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:54:44.877179 containerd[1564]: 2025-07-10 07:54:44.815 [INFO][4916] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.83.7/26] block=192.168.83.0/26 handle="k8s-pod-network.da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693" host="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:54:44.877179 containerd[1564]: 2025-07-10 07:54:44.815 [INFO][4916] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.7/26] handle="k8s-pod-network.da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693" host="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:54:44.877179 containerd[1564]: 2025-07-10 07:54:44.817 [INFO][4916] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 10 07:54:44.877179 containerd[1564]: 2025-07-10 07:54:44.817 [INFO][4916] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.7/26] IPv6=[] ContainerID="da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693" HandleID="k8s-pod-network.da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693" Workload="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--wzfv4-eth0"
Jul 10 07:54:44.880001 containerd[1564]: 2025-07-10 07:54:44.821 [INFO][4885] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693" Namespace="calico-apiserver" Pod="calico-apiserver-5cc4585dd9-wzfv4" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--wzfv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--wzfv4-eth0", GenerateName:"calico-apiserver-5cc4585dd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"c7329202-feb1-456c-aeb4-f8cc1e636843", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 7, 53, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cc4585dd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-fdb14ef6d8.novalocal", ContainerID:"", Pod:"calico-apiserver-5cc4585dd9-wzfv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06c322f7c56", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 10 07:54:44.880001 containerd[1564]: 2025-07-10 07:54:44.821 [INFO][4885] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.7/32] ContainerID="da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693" Namespace="calico-apiserver" Pod="calico-apiserver-5cc4585dd9-wzfv4" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--wzfv4-eth0"
Jul 10 07:54:44.880001 containerd[1564]: 2025-07-10 07:54:44.821 [INFO][4885] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06c322f7c56 ContainerID="da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693" Namespace="calico-apiserver" Pod="calico-apiserver-5cc4585dd9-wzfv4" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--wzfv4-eth0"
Jul 10 07:54:44.880001 containerd[1564]: 2025-07-10 07:54:44.842 [INFO][4885] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693" Namespace="calico-apiserver" Pod="calico-apiserver-5cc4585dd9-wzfv4" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--wzfv4-eth0"
Jul 10 07:54:44.880001 containerd[1564]: 2025-07-10 07:54:44.844 [INFO][4885] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693" Namespace="calico-apiserver" Pod="calico-apiserver-5cc4585dd9-wzfv4" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--wzfv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--wzfv4-eth0", GenerateName:"calico-apiserver-5cc4585dd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"c7329202-feb1-456c-aeb4-f8cc1e636843", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 7, 53, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cc4585dd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-fdb14ef6d8.novalocal", ContainerID:"da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693", Pod:"calico-apiserver-5cc4585dd9-wzfv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06c322f7c56", MAC:"c2:57:b3:7b:06:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 10 07:54:44.880001 containerd[1564]: 2025-07-10 07:54:44.872 [INFO][4885] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693" Namespace="calico-apiserver" Pod="calico-apiserver-5cc4585dd9-wzfv4" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-calico--apiserver--5cc4585dd9--wzfv4-eth0"
Jul 10 07:54:44.976206 containerd[1564]: time="2025-07-10T07:54:44.976135138Z" level=info msg="connecting to shim da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693" address="unix:///run/containerd/s/5857b036bce4d2938732bd44aec28999b320a840338ba8715be541fb188449c2" namespace=k8s.io protocol=ttrpc version=3
Jul 10 07:54:45.061178 systemd[1]: Started cri-containerd-da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693.scope - libcontainer container da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693.
Jul 10 07:54:45.078213 systemd-networkd[1452]: cali7afdf187d22: Link UP
Jul 10 07:54:45.079487 systemd-networkd[1452]: cali7afdf187d22: Gained carrier
Jul 10 07:54:45.097122 systemd-networkd[1452]: calied89f484ed1: Gained IPv6LL
Jul 10 07:54:45.111342 containerd[1564]: 2025-07-10 07:54:44.675 [INFO][4886] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--w9d5t-eth0 coredns-7c65d6cfc9- kube-system 81234764-a63b-4c8c-8451-6099c0eb34d1 852 0 2025-07-10 07:53:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4391-0-0-n-fdb14ef6d8.novalocal coredns-7c65d6cfc9-w9d5t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7afdf187d22 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w9d5t" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--w9d5t-"
Jul 10 07:54:45.111342 containerd[1564]: 2025-07-10 07:54:44.676 [INFO][4886] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w9d5t" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--w9d5t-eth0"
Jul 10 07:54:45.111342 containerd[1564]: 2025-07-10 07:54:44.747 [INFO][4911] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8" HandleID="k8s-pod-network.f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8" Workload="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--w9d5t-eth0"
Jul 10 07:54:45.111342 containerd[1564]: 2025-07-10 07:54:44.747 [INFO][4911] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8" HandleID="k8s-pod-network.f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8" Workload="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--w9d5t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f210), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4391-0-0-n-fdb14ef6d8.novalocal", "pod":"coredns-7c65d6cfc9-w9d5t", "timestamp":"2025-07-10 07:54:44.747686273 +0000 UTC"}, Hostname:"ci-4391-0-0-n-fdb14ef6d8.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 10 07:54:45.111342 containerd[1564]: 2025-07-10 07:54:44.748 [INFO][4911] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 10 07:54:45.111342 containerd[1564]: 2025-07-10 07:54:44.817 [INFO][4911] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 10 07:54:45.111342 containerd[1564]: 2025-07-10 07:54:44.817 [INFO][4911] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4391-0-0-n-fdb14ef6d8.novalocal'
Jul 10 07:54:45.111342 containerd[1564]: 2025-07-10 07:54:44.873 [INFO][4911] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8" host="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:54:45.111342 containerd[1564]: 2025-07-10 07:54:44.903 [INFO][4911] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:54:45.111342 containerd[1564]: 2025-07-10 07:54:44.917 [INFO][4911] ipam/ipam.go 511: Trying affinity for 192.168.83.0/26 host="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:54:45.111342 containerd[1564]: 2025-07-10 07:54:44.933 [INFO][4911] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.0/26 host="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:54:45.111342 containerd[1564]: 2025-07-10 07:54:44.965 [INFO][4911] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.0/26 host="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:54:45.111342 containerd[1564]: 2025-07-10 07:54:44.967 [INFO][4911] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.83.0/26 handle="k8s-pod-network.f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8" host="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:54:45.111342 containerd[1564]: 2025-07-10 07:54:44.985 [INFO][4911] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8
Jul 10 07:54:45.111342 containerd[1564]: 2025-07-10 07:54:45.017 [INFO][4911] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.83.0/26 handle="k8s-pod-network.f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8" host="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:54:45.111342 containerd[1564]: 2025-07-10 07:54:45.057 [INFO][4911] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.83.8/26] block=192.168.83.0/26 handle="k8s-pod-network.f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8" host="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:54:45.111342 containerd[1564]: 2025-07-10 07:54:45.057 [INFO][4911] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.8/26] handle="k8s-pod-network.f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8" host="ci-4391-0-0-n-fdb14ef6d8.novalocal"
Jul 10 07:54:45.111342 containerd[1564]: 2025-07-10 07:54:45.057 [INFO][4911] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 10 07:54:45.111342 containerd[1564]: 2025-07-10 07:54:45.058 [INFO][4911] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.8/26] IPv6=[] ContainerID="f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8" HandleID="k8s-pod-network.f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8" Workload="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--w9d5t-eth0"
Jul 10 07:54:45.113112 containerd[1564]: 2025-07-10 07:54:45.071 [INFO][4886] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w9d5t" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--w9d5t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--w9d5t-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"81234764-a63b-4c8c-8451-6099c0eb34d1", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 7, 53, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-fdb14ef6d8.novalocal", ContainerID:"", Pod:"coredns-7c65d6cfc9-w9d5t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7afdf187d22", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 10 07:54:45.113112 containerd[1564]: 2025-07-10 07:54:45.072 [INFO][4886] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.8/32] ContainerID="f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w9d5t" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--w9d5t-eth0"
Jul 10 07:54:45.113112 containerd[1564]: 2025-07-10 07:54:45.072 [INFO][4886] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7afdf187d22 ContainerID="f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w9d5t" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--w9d5t-eth0"
Jul 10 07:54:45.113112 containerd[1564]: 2025-07-10 07:54:45.079 [INFO][4886] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w9d5t" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--w9d5t-eth0"
Jul 10 07:54:45.113112 containerd[1564]: 2025-07-10 07:54:45.081 [INFO][4886] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w9d5t" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--w9d5t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--w9d5t-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"81234764-a63b-4c8c-8451-6099c0eb34d1", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 7, 53, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-fdb14ef6d8.novalocal", ContainerID:"f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8", Pod:"coredns-7c65d6cfc9-w9d5t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7afdf187d22", MAC:"be:ab:a3:2d:47:5d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 10 07:54:45.113112 containerd[1564]: 2025-07-10 07:54:45.107 [INFO][4886] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w9d5t" WorkloadEndpoint="ci--4391--0--0--n--fdb14ef6d8.novalocal-k8s-coredns--7c65d6cfc9--w9d5t-eth0"
Jul 10 07:54:45.170364 containerd[1564]: time="2025-07-10T07:54:45.170280066Z" level=info msg="connecting to shim f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8" address="unix:///run/containerd/s/afb38f74a33c6068742fc19736c7c26c8c750fe41d2c550018fe80da3f5a6521" namespace=k8s.io protocol=ttrpc version=3
Jul 10 07:54:45.233409 systemd[1]: Started cri-containerd-f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8.scope - libcontainer container f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8.
Jul 10 07:54:45.359855 containerd[1564]: time="2025-07-10T07:54:45.359393361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc4585dd9-wzfv4,Uid:c7329202-feb1-456c-aeb4-f8cc1e636843,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693\""
Jul 10 07:54:45.368022 containerd[1564]: time="2025-07-10T07:54:45.367832089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w9d5t,Uid:81234764-a63b-4c8c-8451-6099c0eb34d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8\""
Jul 10 07:54:45.374671 containerd[1564]: time="2025-07-10T07:54:45.374627587Z" level=info msg="CreateContainer within sandbox \"f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 10 07:54:45.395990 containerd[1564]: time="2025-07-10T07:54:45.395129628Z" level=info msg="Container 1407f2b297f2d56f0691f7a35dbc4e3e9c9874874867da7fbd7a1a5d3dcb5be6: CDI devices from CRI Config.CDIDevices: []"
Jul 10 07:54:45.413796 containerd[1564]: time="2025-07-10T07:54:45.413739612Z" level=info msg="CreateContainer within sandbox \"f10f8679fa7298b537f47660e77cbdd3cf02d8cb51719413de90b3f4cef4eeb8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1407f2b297f2d56f0691f7a35dbc4e3e9c9874874867da7fbd7a1a5d3dcb5be6\""
Jul 10 07:54:45.415008 containerd[1564]: time="2025-07-10T07:54:45.414461450Z" level=info msg="StartContainer for \"1407f2b297f2d56f0691f7a35dbc4e3e9c9874874867da7fbd7a1a5d3dcb5be6\""
Jul 10 07:54:45.415869 containerd[1564]: time="2025-07-10T07:54:45.415755804Z" level=info msg="connecting to shim 1407f2b297f2d56f0691f7a35dbc4e3e9c9874874867da7fbd7a1a5d3dcb5be6" address="unix:///run/containerd/s/afb38f74a33c6068742fc19736c7c26c8c750fe41d2c550018fe80da3f5a6521" protocol=ttrpc version=3
Jul 10 07:54:45.444873 containerd[1564]: time="2025-07-10T07:54:45.444831737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:54:45.446175 systemd[1]: Started cri-containerd-1407f2b297f2d56f0691f7a35dbc4e3e9c9874874867da7fbd7a1a5d3dcb5be6.scope - libcontainer container 1407f2b297f2d56f0691f7a35dbc4e3e9c9874874867da7fbd7a1a5d3dcb5be6.
Jul 10 07:54:45.447693 containerd[1564]: time="2025-07-10T07:54:45.447664885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207"
Jul 10 07:54:45.450234 containerd[1564]: time="2025-07-10T07:54:45.450173291Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:54:45.459132 containerd[1564]: time="2025-07-10T07:54:45.459076974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:54:45.460349 containerd[1564]: time="2025-07-10T07:54:45.460310864Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 3.207885414s"
Jul 10 07:54:45.460468 containerd[1564]: time="2025-07-10T07:54:45.460447430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\""
Jul 10 07:54:45.463222 containerd[1564]: time="2025-07-10T07:54:45.463187263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 10 07:54:45.466110 containerd[1564]: time="2025-07-10T07:54:45.465828830Z" level=info msg="CreateContainer within sandbox \"8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Jul 10 07:54:45.496658 containerd[1564]: time="2025-07-10T07:54:45.495785890Z" level=info msg="Container 334fd7b10e435191e3f9953bee291d1bb24f34149dce26e0b657f2371861842b: CDI devices from CRI Config.CDIDevices: []"
Jul 10 07:54:45.519811 containerd[1564]: time="2025-07-10T07:54:45.519751294Z" level=info msg="StartContainer for \"1407f2b297f2d56f0691f7a35dbc4e3e9c9874874867da7fbd7a1a5d3dcb5be6\" returns successfully"
Jul 10 07:54:45.533596 containerd[1564]: time="2025-07-10T07:54:45.533534713Z" level=info msg="CreateContainer within sandbox \"8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"334fd7b10e435191e3f9953bee291d1bb24f34149dce26e0b657f2371861842b\""
Jul 10 07:54:45.536404 containerd[1564]: time="2025-07-10T07:54:45.536316062Z" level=info msg="StartContainer for \"334fd7b10e435191e3f9953bee291d1bb24f34149dce26e0b657f2371861842b\""
Jul 10 07:54:45.540385 containerd[1564]: time="2025-07-10T07:54:45.540056537Z" level=info msg="connecting to shim 334fd7b10e435191e3f9953bee291d1bb24f34149dce26e0b657f2371861842b" address="unix:///run/containerd/s/ed819b482db7e6e256ac72e9d1b7bf740c79961a60746cccfa79485e3e9db552" protocol=ttrpc version=3
Jul 10 07:54:45.592237 systemd[1]: Started cri-containerd-334fd7b10e435191e3f9953bee291d1bb24f34149dce26e0b657f2371861842b.scope - libcontainer container 334fd7b10e435191e3f9953bee291d1bb24f34149dce26e0b657f2371861842b.
Jul 10 07:54:45.715540 containerd[1564]: time="2025-07-10T07:54:45.715487262Z" level=info msg="StartContainer for \"334fd7b10e435191e3f9953bee291d1bb24f34149dce26e0b657f2371861842b\" returns successfully"
Jul 10 07:54:45.742413 containerd[1564]: time="2025-07-10T07:54:45.742354162Z" level=info msg="TaskExit event in podsandbox handler container_id:\"480560a02e60c64b285809a09ce204e4540a99a2a1a3283cf939c3c56dc3b2ab\" id:\"1911834e0310238c801a9baf04489598e8ce36053e393fca4b04ae44d925171e\" pid:5071 exited_at:{seconds:1752134085 nanos:741050862}"
Jul 10 07:54:45.912951 containerd[1564]: time="2025-07-10T07:54:45.912873573Z" level=info msg="TaskExit event in podsandbox handler container_id:\"480560a02e60c64b285809a09ce204e4540a99a2a1a3283cf939c3c56dc3b2ab\" id:\"89f66b4d7c57268872fcc6debb4273a791764ea733f2af9122355e78feb491e1\" pid:5138 exited_at:{seconds:1752134085 nanos:912417585}"
Jul 10 07:54:46.224494 kubelet[2833]: I0710 07:54:46.224325 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-w9d5t" podStartSLOduration=64.224103349 podStartE2EDuration="1m4.224103349s" podCreationTimestamp="2025-07-10 07:53:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 07:54:46.222264541 +0000 UTC m=+67.933286649" watchObservedRunningTime="2025-07-10 07:54:46.224103349 +0000 UTC m=+67.935125467"
Jul 10 07:54:46.632372 systemd-networkd[1452]: cali7afdf187d22: Gained IPv6LL
Jul 10 07:54:46.701176 systemd-networkd[1452]: cali06c322f7c56: Gained IPv6LL
Jul 10 07:54:50.643767 containerd[1564]: time="2025-07-10T07:54:50.643585164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:54:50.646850 containerd[1564]: time="2025-07-10T07:54:50.646352747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977"
Jul 10 07:54:50.648932 containerd[1564]: time="2025-07-10T07:54:50.648754081Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:54:50.688736 containerd[1564]: time="2025-07-10T07:54:50.688579928Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 5.225343593s"
Jul 10 07:54:50.689320 containerd[1564]: time="2025-07-10T07:54:50.689106038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Jul 10 07:54:50.689924 containerd[1564]: time="2025-07-10T07:54:50.688704904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 07:54:50.698551 containerd[1564]: time="2025-07-10T07:54:50.698384089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Jul 10 07:54:50.711510 containerd[1564]: time="2025-07-10T07:54:50.711319216Z" level=info msg="CreateContainer within sandbox \"a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 10 07:54:50.730674 containerd[1564]: time="2025-07-10T07:54:50.728126842Z" level=info msg="Container 53ac714f7fe2af7767e6de7df46f9cc5c4337125d433df1998f31a6b03351472: CDI devices from CRI Config.CDIDevices: []"
Jul 10 07:54:50.731155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount144266298.mount: Deactivated successfully.
Jul 10 07:54:50.745929 containerd[1564]: time="2025-07-10T07:54:50.745807489Z" level=info msg="CreateContainer within sandbox \"a3e2b468c470e0a18ee09f611828ceeb77bc3216a268ae7aed662a1e081036e4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"53ac714f7fe2af7767e6de7df46f9cc5c4337125d433df1998f31a6b03351472\""
Jul 10 07:54:50.748837 containerd[1564]: time="2025-07-10T07:54:50.748789094Z" level=info msg="StartContainer for \"53ac714f7fe2af7767e6de7df46f9cc5c4337125d433df1998f31a6b03351472\""
Jul 10 07:54:50.750845 containerd[1564]: time="2025-07-10T07:54:50.750695208Z" level=info msg="connecting to shim 53ac714f7fe2af7767e6de7df46f9cc5c4337125d433df1998f31a6b03351472" address="unix:///run/containerd/s/2021b36b1b099354c13e925de23806f4f4efcf8220ba7e8a680f869b311bc12e" protocol=ttrpc version=3
Jul 10 07:54:50.782140 systemd[1]: Started cri-containerd-53ac714f7fe2af7767e6de7df46f9cc5c4337125d433df1998f31a6b03351472.scope - libcontainer container 53ac714f7fe2af7767e6de7df46f9cc5c4337125d433df1998f31a6b03351472.
Jul 10 07:54:50.870285 containerd[1564]: time="2025-07-10T07:54:50.870231922Z" level=info msg="StartContainer for \"53ac714f7fe2af7767e6de7df46f9cc5c4337125d433df1998f31a6b03351472\" returns successfully" Jul 10 07:54:51.241360 kubelet[2833]: I0710 07:54:51.240667 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5cc4585dd9-xhp97" podStartSLOduration=49.562985851 podStartE2EDuration="59.240646685s" podCreationTimestamp="2025-07-10 07:53:52 +0000 UTC" firstStartedPulling="2025-07-10 07:54:41.020143115 +0000 UTC m=+62.731165193" lastFinishedPulling="2025-07-10 07:54:50.697803909 +0000 UTC m=+72.408826027" observedRunningTime="2025-07-10 07:54:51.236087084 +0000 UTC m=+72.947109182" watchObservedRunningTime="2025-07-10 07:54:51.240646685 +0000 UTC m=+72.951668763" Jul 10 07:54:52.217461 kubelet[2833]: I0710 07:54:52.216534 2833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 07:54:53.439989 containerd[1564]: time="2025-07-10T07:54:53.439646421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:54:53.441282 containerd[1564]: time="2025-07-10T07:54:53.441249785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 10 07:54:53.443089 containerd[1564]: time="2025-07-10T07:54:53.443032385Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:54:53.448151 containerd[1564]: time="2025-07-10T07:54:53.447574703Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:54:53.448543 
containerd[1564]: time="2025-07-10T07:54:53.448511214Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.750014282s" Jul 10 07:54:53.448647 containerd[1564]: time="2025-07-10T07:54:53.448627331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 10 07:54:53.450172 containerd[1564]: time="2025-07-10T07:54:53.450151016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 10 07:54:53.453151 containerd[1564]: time="2025-07-10T07:54:53.453103916Z" level=info msg="CreateContainer within sandbox \"46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 10 07:54:53.482010 containerd[1564]: time="2025-07-10T07:54:53.481105668Z" level=info msg="Container 428e4b67b60e9f54b30c3ab09c0d09ac72a9ee857cbcc9f13c23910e435c4133: CDI devices from CRI Config.CDIDevices: []" Jul 10 07:54:53.498154 containerd[1564]: time="2025-07-10T07:54:53.498090233Z" level=info msg="CreateContainer within sandbox \"46d283f26e6fcdc6e7b3eacfa75a56f69a0c0c2b60bc0810f798ca978e4e2491\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"428e4b67b60e9f54b30c3ab09c0d09ac72a9ee857cbcc9f13c23910e435c4133\"" Jul 10 07:54:53.499378 containerd[1564]: time="2025-07-10T07:54:53.499163770Z" level=info msg="StartContainer for \"428e4b67b60e9f54b30c3ab09c0d09ac72a9ee857cbcc9f13c23910e435c4133\"" Jul 10 07:54:53.503301 containerd[1564]: time="2025-07-10T07:54:53.503259869Z" 
level=info msg="connecting to shim 428e4b67b60e9f54b30c3ab09c0d09ac72a9ee857cbcc9f13c23910e435c4133" address="unix:///run/containerd/s/b82374878b18353f2caec4c1af2a8e0bc89b1da1408146a03a73559b69f1aef6" protocol=ttrpc version=3 Jul 10 07:54:53.561234 systemd[1]: Started cri-containerd-428e4b67b60e9f54b30c3ab09c0d09ac72a9ee857cbcc9f13c23910e435c4133.scope - libcontainer container 428e4b67b60e9f54b30c3ab09c0d09ac72a9ee857cbcc9f13c23910e435c4133. Jul 10 07:54:53.628876 containerd[1564]: time="2025-07-10T07:54:53.628824112Z" level=info msg="StartContainer for \"428e4b67b60e9f54b30c3ab09c0d09ac72a9ee857cbcc9f13c23910e435c4133\" returns successfully" Jul 10 07:54:53.716625 kubelet[2833]: I0710 07:54:53.716049 2833 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 10 07:54:53.716625 kubelet[2833]: I0710 07:54:53.716227 2833 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 10 07:54:54.279757 kubelet[2833]: I0710 07:54:54.277432 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-pwddw" podStartSLOduration=36.283659036 podStartE2EDuration="58.277391515s" podCreationTimestamp="2025-07-10 07:53:56 +0000 UTC" firstStartedPulling="2025-07-10 07:54:31.456250912 +0000 UTC m=+53.167272980" lastFinishedPulling="2025-07-10 07:54:53.449983391 +0000 UTC m=+75.161005459" observedRunningTime="2025-07-10 07:54:54.274860308 +0000 UTC m=+75.985882426" watchObservedRunningTime="2025-07-10 07:54:54.277391515 +0000 UTC m=+75.988413633" Jul 10 07:54:59.493938 containerd[1564]: time="2025-07-10T07:54:59.493666133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:54:59.501413 containerd[1564]: 
time="2025-07-10T07:54:59.501198909Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:54:59.501413 containerd[1564]: time="2025-07-10T07:54:59.501281474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 10 07:54:59.512883 containerd[1564]: time="2025-07-10T07:54:59.512793597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:54:59.513901 containerd[1564]: time="2025-07-10T07:54:59.513841706Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 6.062420392s" Jul 10 07:54:59.514294 containerd[1564]: time="2025-07-10T07:54:59.514269130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 10 07:54:59.520236 containerd[1564]: time="2025-07-10T07:54:59.518345630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 07:54:59.553166 containerd[1564]: time="2025-07-10T07:54:59.553106872Z" level=info msg="CreateContainer within sandbox \"3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 10 07:54:59.569983 containerd[1564]: time="2025-07-10T07:54:59.569350275Z" level=info msg="Container 
8226fd8317aafebd48714fb76ab843b7dbef9a0505bdd28002646b4ee6ee5007: CDI devices from CRI Config.CDIDevices: []" Jul 10 07:54:59.577511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount92028486.mount: Deactivated successfully. Jul 10 07:54:59.597845 containerd[1564]: time="2025-07-10T07:54:59.597756356Z" level=info msg="CreateContainer within sandbox \"3864ef46e63a3d5892bfe4020f3c69b6766bbef2299971f538d1cb5df0c780aa\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8226fd8317aafebd48714fb76ab843b7dbef9a0505bdd28002646b4ee6ee5007\"" Jul 10 07:54:59.601165 containerd[1564]: time="2025-07-10T07:54:59.601043553Z" level=info msg="StartContainer for \"8226fd8317aafebd48714fb76ab843b7dbef9a0505bdd28002646b4ee6ee5007\"" Jul 10 07:54:59.604758 containerd[1564]: time="2025-07-10T07:54:59.604686728Z" level=info msg="connecting to shim 8226fd8317aafebd48714fb76ab843b7dbef9a0505bdd28002646b4ee6ee5007" address="unix:///run/containerd/s/3d5364e5ddb98d02ec2851a3f72bf1e79047bb19a750561d94c048e5277146d0" protocol=ttrpc version=3 Jul 10 07:54:59.654315 systemd[1]: Started cri-containerd-8226fd8317aafebd48714fb76ab843b7dbef9a0505bdd28002646b4ee6ee5007.scope - libcontainer container 8226fd8317aafebd48714fb76ab843b7dbef9a0505bdd28002646b4ee6ee5007. 
Jul 10 07:54:59.778138 containerd[1564]: time="2025-07-10T07:54:59.777998819Z" level=info msg="StartContainer for \"8226fd8317aafebd48714fb76ab843b7dbef9a0505bdd28002646b4ee6ee5007\" returns successfully" Jul 10 07:55:00.058947 containerd[1564]: time="2025-07-10T07:55:00.058703142Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:55:00.061677 containerd[1564]: time="2025-07-10T07:55:00.060894430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 10 07:55:00.065574 containerd[1564]: time="2025-07-10T07:55:00.065500424Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 547.085624ms" Jul 10 07:55:00.065574 containerd[1564]: time="2025-07-10T07:55:00.065571778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 10 07:55:00.070456 containerd[1564]: time="2025-07-10T07:55:00.070360197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 10 07:55:00.076208 containerd[1564]: time="2025-07-10T07:55:00.076087138Z" level=info msg="CreateContainer within sandbox \"da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 07:55:00.101938 containerd[1564]: time="2025-07-10T07:55:00.101695217Z" level=info msg="Container 66ddc5ea484fba9afc457d5901b80f000b919328f6dd3b2258eca8490c41eef8: CDI devices from CRI Config.CDIDevices: []" Jul 10 07:55:00.133837 
containerd[1564]: time="2025-07-10T07:55:00.133325434Z" level=info msg="CreateContainer within sandbox \"da7b9f1f3523fb6ade1791058c7656a165cf3e9adb7fb9df55b407db31cdf693\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"66ddc5ea484fba9afc457d5901b80f000b919328f6dd3b2258eca8490c41eef8\"" Jul 10 07:55:00.135058 containerd[1564]: time="2025-07-10T07:55:00.135002254Z" level=info msg="StartContainer for \"66ddc5ea484fba9afc457d5901b80f000b919328f6dd3b2258eca8490c41eef8\"" Jul 10 07:55:00.137606 containerd[1564]: time="2025-07-10T07:55:00.137555312Z" level=info msg="connecting to shim 66ddc5ea484fba9afc457d5901b80f000b919328f6dd3b2258eca8490c41eef8" address="unix:///run/containerd/s/5857b036bce4d2938732bd44aec28999b320a840338ba8715be541fb188449c2" protocol=ttrpc version=3 Jul 10 07:55:00.188294 systemd[1]: Started cri-containerd-66ddc5ea484fba9afc457d5901b80f000b919328f6dd3b2258eca8490c41eef8.scope - libcontainer container 66ddc5ea484fba9afc457d5901b80f000b919328f6dd3b2258eca8490c41eef8. 
Jul 10 07:55:00.310402 containerd[1564]: time="2025-07-10T07:55:00.309890264Z" level=info msg="StartContainer for \"66ddc5ea484fba9afc457d5901b80f000b919328f6dd3b2258eca8490c41eef8\" returns successfully" Jul 10 07:55:00.466599 containerd[1564]: time="2025-07-10T07:55:00.466418625Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7bf39a6acdbaf5ee4727e57ad26763ce402aa84f84e0a8db905ad9e439bb8af5\" id:\"55565e7f5c291fb627f42c0042dec4bc3002b08a473668513a9a810558921148\" pid:5325 exited_at:{seconds:1752134100 nanos:464174979}" Jul 10 07:55:00.494749 kubelet[2833]: I0710 07:55:00.494605 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b9974968-qxv8n" podStartSLOduration=47.988938458 podStartE2EDuration="1m3.494204026s" podCreationTimestamp="2025-07-10 07:53:57 +0000 UTC" firstStartedPulling="2025-07-10 07:54:44.010917378 +0000 UTC m=+65.721939446" lastFinishedPulling="2025-07-10 07:54:59.516182936 +0000 UTC m=+81.227205014" observedRunningTime="2025-07-10 07:55:00.311264075 +0000 UTC m=+82.022286183" watchObservedRunningTime="2025-07-10 07:55:00.494204026 +0000 UTC m=+82.205226094" Jul 10 07:55:01.330405 kubelet[2833]: I0710 07:55:01.330035 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5cc4585dd9-wzfv4" podStartSLOduration=54.624244218 podStartE2EDuration="1m9.330010616s" podCreationTimestamp="2025-07-10 07:53:52 +0000 UTC" firstStartedPulling="2025-07-10 07:54:45.363049346 +0000 UTC m=+67.074071444" lastFinishedPulling="2025-07-10 07:55:00.068815754 +0000 UTC m=+81.779837842" observedRunningTime="2025-07-10 07:55:01.321559396 +0000 UTC m=+83.032581464" watchObservedRunningTime="2025-07-10 07:55:01.330010616 +0000 UTC m=+83.041032694" Jul 10 07:55:01.422536 containerd[1564]: time="2025-07-10T07:55:01.422477684Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"8226fd8317aafebd48714fb76ab843b7dbef9a0505bdd28002646b4ee6ee5007\" id:\"284d6794e65a8f4803ba6f3948762ceecb6c6dc1f49eb89466c24b86707180ba\" pid:5364 exited_at:{seconds:1752134101 nanos:421945333}" Jul 10 07:55:03.854756 kubelet[2833]: I0710 07:55:03.854699 2833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 07:55:04.086979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount776592274.mount: Deactivated successfully. Jul 10 07:55:04.200065 containerd[1564]: time="2025-07-10T07:55:04.199109445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:55:04.201298 containerd[1564]: time="2025-07-10T07:55:04.201101467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 10 07:55:04.203164 containerd[1564]: time="2025-07-10T07:55:04.203117715Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:55:04.207288 containerd[1564]: time="2025-07-10T07:55:04.207191560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 07:55:04.208482 containerd[1564]: time="2025-07-10T07:55:04.208006170Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 4.137542098s" Jul 10 07:55:04.208482 containerd[1564]: 
time="2025-07-10T07:55:04.208066253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 10 07:55:04.213807 containerd[1564]: time="2025-07-10T07:55:04.213742879Z" level=info msg="CreateContainer within sandbox \"8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 10 07:55:04.234500 containerd[1564]: time="2025-07-10T07:55:04.234281256Z" level=info msg="Container 0ebf3154cd63162da2a84f3f2c7286b109f937f8371dd39110062b17bdbef000: CDI devices from CRI Config.CDIDevices: []" Jul 10 07:55:04.243469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount524867539.mount: Deactivated successfully. Jul 10 07:55:04.263077 containerd[1564]: time="2025-07-10T07:55:04.263017517Z" level=info msg="CreateContainer within sandbox \"8f35841a81722ecb138eadc02dd4b69fb9860d2b00a8c1626c4ff4031c310957\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0ebf3154cd63162da2a84f3f2c7286b109f937f8371dd39110062b17bdbef000\"" Jul 10 07:55:04.266854 containerd[1564]: time="2025-07-10T07:55:04.264839640Z" level=info msg="StartContainer for \"0ebf3154cd63162da2a84f3f2c7286b109f937f8371dd39110062b17bdbef000\"" Jul 10 07:55:04.267880 containerd[1564]: time="2025-07-10T07:55:04.267838324Z" level=info msg="connecting to shim 0ebf3154cd63162da2a84f3f2c7286b109f937f8371dd39110062b17bdbef000" address="unix:///run/containerd/s/ed819b482db7e6e256ac72e9d1b7bf740c79961a60746cccfa79485e3e9db552" protocol=ttrpc version=3 Jul 10 07:55:04.314577 systemd[1]: Started cri-containerd-0ebf3154cd63162da2a84f3f2c7286b109f937f8371dd39110062b17bdbef000.scope - libcontainer container 0ebf3154cd63162da2a84f3f2c7286b109f937f8371dd39110062b17bdbef000. 
Jul 10 07:55:04.406491 containerd[1564]: time="2025-07-10T07:55:04.406364092Z" level=info msg="StartContainer for \"0ebf3154cd63162da2a84f3f2c7286b109f937f8371dd39110062b17bdbef000\" returns successfully" Jul 10 07:55:05.366769 kubelet[2833]: I0710 07:55:05.366536 2833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6c7cdc94b5-gnt27" podStartSLOduration=1.970502223 podStartE2EDuration="34.366391843s" podCreationTimestamp="2025-07-10 07:54:31 +0000 UTC" firstStartedPulling="2025-07-10 07:54:31.813917112 +0000 UTC m=+53.524939181" lastFinishedPulling="2025-07-10 07:55:04.209806733 +0000 UTC m=+85.920828801" observedRunningTime="2025-07-10 07:55:05.363921763 +0000 UTC m=+87.074943841" watchObservedRunningTime="2025-07-10 07:55:05.366391843 +0000 UTC m=+87.077413912" Jul 10 07:55:15.584989 containerd[1564]: time="2025-07-10T07:55:15.584431610Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8226fd8317aafebd48714fb76ab843b7dbef9a0505bdd28002646b4ee6ee5007\" id:\"c6d1cd795d7b6f95d4f494b0dfd2eae388023f8213c570d8f9f357e76d8c6368\" pid:5451 exited_at:{seconds:1752134115 nanos:583852413}" Jul 10 07:55:15.791884 containerd[1564]: time="2025-07-10T07:55:15.791774300Z" level=info msg="TaskExit event in podsandbox handler container_id:\"480560a02e60c64b285809a09ce204e4540a99a2a1a3283cf939c3c56dc3b2ab\" id:\"f6b5d04c6a9aea4acdd4db993d59c332b591ef55450745eb1527921e291dc6ad\" pid:5468 exited_at:{seconds:1752134115 nanos:790312955}" Jul 10 07:55:16.523229 systemd[1]: Started sshd@9-172.24.4.91:22-172.24.4.1:50256.service - OpenSSH per-connection server daemon (172.24.4.1:50256). 
Jul 10 07:55:17.996591 sshd[5486]: Accepted publickey for core from 172.24.4.1 port 50256 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM Jul 10 07:55:18.000470 sshd-session[5486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 07:55:18.011595 systemd-logind[1534]: New session 12 of user core. Jul 10 07:55:18.018160 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 10 07:55:18.866346 sshd[5489]: Connection closed by 172.24.4.1 port 50256 Jul 10 07:55:18.868233 sshd-session[5486]: pam_unix(sshd:session): session closed for user core Jul 10 07:55:18.872685 systemd-logind[1534]: Session 12 logged out. Waiting for processes to exit. Jul 10 07:55:18.873750 systemd[1]: sshd@9-172.24.4.91:22-172.24.4.1:50256.service: Deactivated successfully. Jul 10 07:55:18.877416 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 07:55:18.883519 systemd-logind[1534]: Removed session 12. Jul 10 07:55:23.908584 systemd[1]: Started sshd@10-172.24.4.91:22-172.24.4.1:58434.service - OpenSSH per-connection server daemon (172.24.4.1:58434). Jul 10 07:55:25.060538 sshd[5503]: Accepted publickey for core from 172.24.4.1 port 58434 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM Jul 10 07:55:25.061072 sshd-session[5503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 07:55:25.070856 systemd-logind[1534]: New session 13 of user core. Jul 10 07:55:25.076537 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 10 07:55:25.722468 sshd[5506]: Connection closed by 172.24.4.1 port 58434 Jul 10 07:55:25.723469 sshd-session[5503]: pam_unix(sshd:session): session closed for user core Jul 10 07:55:25.731054 systemd[1]: sshd@10-172.24.4.91:22-172.24.4.1:58434.service: Deactivated successfully. Jul 10 07:55:25.736069 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 07:55:25.739676 systemd-logind[1534]: Session 13 logged out. 
Waiting for processes to exit. Jul 10 07:55:25.742065 systemd-logind[1534]: Removed session 13. Jul 10 07:55:27.070272 containerd[1564]: time="2025-07-10T07:55:27.070094233Z" level=info msg="TaskExit event in podsandbox handler container_id:\"480560a02e60c64b285809a09ce204e4540a99a2a1a3283cf939c3c56dc3b2ab\" id:\"ffcc69b8dbc5f343daac68be75c76465254e4e1edd4f1e46d852a7a90424e38f\" pid:5532 exited_at:{seconds:1752134127 nanos:69054561}" Jul 10 07:55:30.516780 containerd[1564]: time="2025-07-10T07:55:30.516699087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7bf39a6acdbaf5ee4727e57ad26763ce402aa84f84e0a8db905ad9e439bb8af5\" id:\"a64a584329c2bf4ddb1157b345f1798e53552a3b2418be5e8aac8950bf4111de\" pid:5555 exited_at:{seconds:1752134130 nanos:515885479}" Jul 10 07:55:30.738240 systemd[1]: Started sshd@11-172.24.4.91:22-172.24.4.1:58440.service - OpenSSH per-connection server daemon (172.24.4.1:58440). Jul 10 07:55:32.337666 sshd[5568]: Accepted publickey for core from 172.24.4.1 port 58440 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM Jul 10 07:55:32.355386 sshd-session[5568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 07:55:32.374847 systemd-logind[1534]: New session 14 of user core. Jul 10 07:55:32.381524 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 10 07:55:33.656286 sshd[5571]: Connection closed by 172.24.4.1 port 58440 Jul 10 07:55:33.658558 sshd-session[5568]: pam_unix(sshd:session): session closed for user core Jul 10 07:55:33.672472 systemd[1]: sshd@11-172.24.4.91:22-172.24.4.1:58440.service: Deactivated successfully. Jul 10 07:55:33.678539 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 07:55:33.682183 systemd-logind[1534]: Session 14 logged out. Waiting for processes to exit. Jul 10 07:55:33.687044 systemd-logind[1534]: Removed session 14. 
Jul 10 07:55:38.684978 systemd[1]: Started sshd@12-172.24.4.91:22-172.24.4.1:60744.service - OpenSSH per-connection server daemon (172.24.4.1:60744). Jul 10 07:55:39.747093 sshd[5595]: Accepted publickey for core from 172.24.4.1 port 60744 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM Jul 10 07:55:39.750282 sshd-session[5595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 07:55:39.761938 systemd-logind[1534]: New session 15 of user core. Jul 10 07:55:39.769478 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 10 07:55:40.589229 sshd[5599]: Connection closed by 172.24.4.1 port 60744 Jul 10 07:55:40.591343 sshd-session[5595]: pam_unix(sshd:session): session closed for user core Jul 10 07:55:40.605096 systemd[1]: sshd@12-172.24.4.91:22-172.24.4.1:60744.service: Deactivated successfully. Jul 10 07:55:40.608333 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 07:55:40.612920 systemd-logind[1534]: Session 15 logged out. Waiting for processes to exit. Jul 10 07:55:40.616636 systemd-logind[1534]: Removed session 15. Jul 10 07:55:40.620086 systemd[1]: Started sshd@13-172.24.4.91:22-172.24.4.1:60748.service - OpenSSH per-connection server daemon (172.24.4.1:60748). Jul 10 07:55:41.809383 sshd[5612]: Accepted publickey for core from 172.24.4.1 port 60748 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM Jul 10 07:55:41.811675 sshd-session[5612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 07:55:41.821241 systemd-logind[1534]: New session 16 of user core. Jul 10 07:55:41.827132 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 10 07:55:42.649242 sshd[5615]: Connection closed by 172.24.4.1 port 60748 Jul 10 07:55:42.651245 sshd-session[5612]: pam_unix(sshd:session): session closed for user core Jul 10 07:55:42.677530 systemd[1]: sshd@13-172.24.4.91:22-172.24.4.1:60748.service: Deactivated successfully. 
Jul 10 07:55:42.689107 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 07:55:42.692366 systemd-logind[1534]: Session 16 logged out. Waiting for processes to exit. Jul 10 07:55:42.703250 systemd[1]: Started sshd@14-172.24.4.91:22-172.24.4.1:60764.service - OpenSSH per-connection server daemon (172.24.4.1:60764). Jul 10 07:55:42.707128 systemd-logind[1534]: Removed session 16. Jul 10 07:55:43.884052 sshd[5625]: Accepted publickey for core from 172.24.4.1 port 60764 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM Jul 10 07:55:43.887720 sshd-session[5625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 07:55:43.899443 systemd-logind[1534]: New session 17 of user core. Jul 10 07:55:43.906109 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 10 07:55:44.713686 sshd[5630]: Connection closed by 172.24.4.1 port 60764 Jul 10 07:55:44.716081 sshd-session[5625]: pam_unix(sshd:session): session closed for user core Jul 10 07:55:44.721841 systemd-logind[1534]: Session 17 logged out. Waiting for processes to exit. Jul 10 07:55:44.722465 systemd[1]: sshd@14-172.24.4.91:22-172.24.4.1:60764.service: Deactivated successfully. Jul 10 07:55:44.727535 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 07:55:44.732508 systemd-logind[1534]: Removed session 17. 
Jul 10 07:55:45.486550 containerd[1564]: time="2025-07-10T07:55:45.486251496Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8226fd8317aafebd48714fb76ab843b7dbef9a0505bdd28002646b4ee6ee5007\" id:\"f7a62e28375f9c37ff43601e348046809d03e47250e9e24523496abb1b458979\" pid:5653 exited_at:{seconds:1752134145 nanos:483991803}" Jul 10 07:55:45.706605 containerd[1564]: time="2025-07-10T07:55:45.706550281Z" level=info msg="TaskExit event in podsandbox handler container_id:\"480560a02e60c64b285809a09ce204e4540a99a2a1a3283cf939c3c56dc3b2ab\" id:\"7d5865b4e80b07058fcae7495fb33f846a5d4cd0fd38277c55e4c5dce90f15c5\" pid:5676 exited_at:{seconds:1752134145 nanos:705871097}" Jul 10 07:55:48.150281 containerd[1564]: time="2025-07-10T07:55:48.150229829Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8226fd8317aafebd48714fb76ab843b7dbef9a0505bdd28002646b4ee6ee5007\" id:\"bd0abeab71e4ab67c88c7c1f8a220afbd8acd30c63652250ca6823c4a0376515\" pid:5702 exited_at:{seconds:1752134148 nanos:149436620}" Jul 10 07:55:49.735234 systemd[1]: Started sshd@15-172.24.4.91:22-172.24.4.1:55690.service - OpenSSH per-connection server daemon (172.24.4.1:55690). Jul 10 07:55:51.122817 sshd[5712]: Accepted publickey for core from 172.24.4.1 port 55690 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM Jul 10 07:55:51.123476 sshd-session[5712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 07:55:51.167990 systemd-logind[1534]: New session 18 of user core. Jul 10 07:55:51.171625 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 10 07:55:51.945904 sshd[5715]: Connection closed by 172.24.4.1 port 55690 Jul 10 07:55:51.946673 sshd-session[5712]: pam_unix(sshd:session): session closed for user core Jul 10 07:55:51.956120 systemd-logind[1534]: Session 18 logged out. Waiting for processes to exit. Jul 10 07:55:51.960303 systemd[1]: sshd@15-172.24.4.91:22-172.24.4.1:55690.service: Deactivated successfully. 
Jul 10 07:55:51.968507 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 07:55:51.975278 systemd-logind[1534]: Removed session 18. Jul 10 07:55:56.963216 systemd[1]: Started sshd@16-172.24.4.91:22-172.24.4.1:50294.service - OpenSSH per-connection server daemon (172.24.4.1:50294). Jul 10 07:55:58.192906 sshd[5735]: Accepted publickey for core from 172.24.4.1 port 50294 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM Jul 10 07:55:58.197606 sshd-session[5735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 07:55:58.211165 systemd-logind[1534]: New session 19 of user core. Jul 10 07:55:58.220496 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 10 07:55:58.989333 sshd[5738]: Connection closed by 172.24.4.1 port 50294 Jul 10 07:55:58.990180 sshd-session[5735]: pam_unix(sshd:session): session closed for user core Jul 10 07:55:58.995600 systemd[1]: sshd@16-172.24.4.91:22-172.24.4.1:50294.service: Deactivated successfully. Jul 10 07:55:59.001916 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 07:55:59.005050 systemd-logind[1534]: Session 19 logged out. Waiting for processes to exit. Jul 10 07:55:59.008165 systemd-logind[1534]: Removed session 19. Jul 10 07:56:00.416641 containerd[1564]: time="2025-07-10T07:56:00.416579029Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7bf39a6acdbaf5ee4727e57ad26763ce402aa84f84e0a8db905ad9e439bb8af5\" id:\"bb51fe0f4d722c512426068eb5e18e8910daf56234cec0272a1c745cb057969e\" pid:5761 exited_at:{seconds:1752134160 nanos:415689560}" Jul 10 07:56:04.007305 systemd[1]: Started sshd@17-172.24.4.91:22-172.24.4.1:34752.service - OpenSSH per-connection server daemon (172.24.4.1:34752). 
Jul 10 07:56:05.170062 sshd[5774]: Accepted publickey for core from 172.24.4.1 port 34752 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM
Jul 10 07:56:05.173308 sshd-session[5774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 07:56:05.187071 systemd-logind[1534]: New session 20 of user core.
Jul 10 07:56:05.190221 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 10 07:56:05.904161 sshd[5777]: Connection closed by 172.24.4.1 port 34752
Jul 10 07:56:05.904049 sshd-session[5774]: pam_unix(sshd:session): session closed for user core
Jul 10 07:56:05.910223 systemd[1]: sshd@17-172.24.4.91:22-172.24.4.1:34752.service: Deactivated successfully.
Jul 10 07:56:05.914796 systemd[1]: session-20.scope: Deactivated successfully.
Jul 10 07:56:05.919467 systemd-logind[1534]: Session 20 logged out. Waiting for processes to exit.
Jul 10 07:56:05.922274 systemd-logind[1534]: Removed session 20.
Jul 10 07:56:10.946761 systemd[1]: Started sshd@18-172.24.4.91:22-172.24.4.1:34758.service - OpenSSH per-connection server daemon (172.24.4.1:34758).
Jul 10 07:56:12.472982 sshd[5804]: Accepted publickey for core from 172.24.4.1 port 34758 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM
Jul 10 07:56:12.479900 sshd-session[5804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 07:56:12.498636 systemd-logind[1534]: New session 21 of user core.
Jul 10 07:56:12.508557 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 10 07:56:13.219184 sshd[5814]: Connection closed by 172.24.4.1 port 34758
Jul 10 07:56:13.223905 sshd-session[5804]: pam_unix(sshd:session): session closed for user core
Jul 10 07:56:13.232224 systemd[1]: sshd@18-172.24.4.91:22-172.24.4.1:34758.service: Deactivated successfully.
Jul 10 07:56:13.235737 systemd[1]: session-21.scope: Deactivated successfully.
Jul 10 07:56:13.239392 systemd-logind[1534]: Session 21 logged out. Waiting for processes to exit.
Jul 10 07:56:13.243741 systemd[1]: Started sshd@19-172.24.4.91:22-172.24.4.1:34772.service - OpenSSH per-connection server daemon (172.24.4.1:34772).
Jul 10 07:56:13.248171 systemd-logind[1534]: Removed session 21.
Jul 10 07:56:14.809383 sshd[5825]: Accepted publickey for core from 172.24.4.1 port 34772 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM
Jul 10 07:56:14.812156 sshd-session[5825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 07:56:14.819554 systemd-logind[1534]: New session 22 of user core.
Jul 10 07:56:14.825111 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 10 07:56:15.505792 containerd[1564]: time="2025-07-10T07:56:15.505669217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8226fd8317aafebd48714fb76ab843b7dbef9a0505bdd28002646b4ee6ee5007\" id:\"554eb9342a8e0053d06bd28a18ebc85b568a59df399a4c7b75d25cd7407c749b\" pid:5844 exited_at:{seconds:1752134175 nanos:503177996}"
Jul 10 07:56:15.629117 containerd[1564]: time="2025-07-10T07:56:15.628651702Z" level=info msg="TaskExit event in podsandbox handler container_id:\"480560a02e60c64b285809a09ce204e4540a99a2a1a3283cf939c3c56dc3b2ab\" id:\"9ab7d5946c468cb58c2a94f336c0af5166e84f2601c2268a354a8410bcc20a04\" pid:5864 exited_at:{seconds:1752134175 nanos:627349354}"
Jul 10 07:56:16.270869 sshd[5830]: Connection closed by 172.24.4.1 port 34772
Jul 10 07:56:16.272924 sshd-session[5825]: pam_unix(sshd:session): session closed for user core
Jul 10 07:56:16.283381 systemd[1]: sshd@19-172.24.4.91:22-172.24.4.1:34772.service: Deactivated successfully.
Jul 10 07:56:16.288091 systemd[1]: session-22.scope: Deactivated successfully.
Jul 10 07:56:16.290172 systemd-logind[1534]: Session 22 logged out. Waiting for processes to exit.
Jul 10 07:56:16.297219 systemd[1]: Started sshd@20-172.24.4.91:22-172.24.4.1:46266.service - OpenSSH per-connection server daemon (172.24.4.1:46266).
Jul 10 07:56:16.300258 systemd-logind[1534]: Removed session 22.
Jul 10 07:56:17.718340 sshd[5886]: Accepted publickey for core from 172.24.4.1 port 46266 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM
Jul 10 07:56:17.720506 sshd-session[5886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 07:56:17.729490 systemd-logind[1534]: New session 23 of user core.
Jul 10 07:56:17.737101 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 10 07:56:21.819354 sshd[5889]: Connection closed by 172.24.4.1 port 46266
Jul 10 07:56:21.821445 sshd-session[5886]: pam_unix(sshd:session): session closed for user core
Jul 10 07:56:21.833196 systemd[1]: sshd@20-172.24.4.91:22-172.24.4.1:46266.service: Deactivated successfully.
Jul 10 07:56:21.835772 systemd[1]: session-23.scope: Deactivated successfully.
Jul 10 07:56:21.836006 systemd[1]: session-23.scope: Consumed 890ms CPU time, 71.7M memory peak.
Jul 10 07:56:21.837637 systemd-logind[1534]: Session 23 logged out. Waiting for processes to exit.
Jul 10 07:56:21.841825 systemd[1]: Started sshd@21-172.24.4.91:22-172.24.4.1:46268.service - OpenSSH per-connection server daemon (172.24.4.1:46268).
Jul 10 07:56:21.844634 systemd-logind[1534]: Removed session 23.
Jul 10 07:56:23.166060 sshd[5920]: Accepted publickey for core from 172.24.4.1 port 46268 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM
Jul 10 07:56:23.169332 sshd-session[5920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 07:56:23.181326 systemd-logind[1534]: New session 24 of user core.
Jul 10 07:56:23.185498 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 10 07:56:24.219869 sshd[5923]: Connection closed by 172.24.4.1 port 46268
Jul 10 07:56:24.221623 sshd-session[5920]: pam_unix(sshd:session): session closed for user core
Jul 10 07:56:24.236903 systemd[1]: sshd@21-172.24.4.91:22-172.24.4.1:46268.service: Deactivated successfully.
Jul 10 07:56:24.239513 systemd[1]: session-24.scope: Deactivated successfully.
Jul 10 07:56:24.244222 systemd-logind[1534]: Session 24 logged out. Waiting for processes to exit.
Jul 10 07:56:24.251569 systemd[1]: Started sshd@22-172.24.4.91:22-172.24.4.1:55044.service - OpenSSH per-connection server daemon (172.24.4.1:55044).
Jul 10 07:56:24.254831 systemd-logind[1534]: Removed session 24.
Jul 10 07:56:25.371985 sshd[5933]: Accepted publickey for core from 172.24.4.1 port 55044 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM
Jul 10 07:56:25.375783 sshd-session[5933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 07:56:25.390022 systemd-logind[1534]: New session 25 of user core.
Jul 10 07:56:25.395211 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 10 07:56:26.221111 sshd[5936]: Connection closed by 172.24.4.1 port 55044
Jul 10 07:56:26.223445 sshd-session[5933]: pam_unix(sshd:session): session closed for user core
Jul 10 07:56:26.232439 systemd[1]: sshd@22-172.24.4.91:22-172.24.4.1:55044.service: Deactivated successfully.
Jul 10 07:56:26.239658 systemd[1]: session-25.scope: Deactivated successfully.
Jul 10 07:56:26.243732 systemd-logind[1534]: Session 25 logged out. Waiting for processes to exit.
Jul 10 07:56:26.247228 systemd-logind[1534]: Removed session 25.
Jul 10 07:56:26.879145 containerd[1564]: time="2025-07-10T07:56:26.879009740Z" level=info msg="TaskExit event in podsandbox handler container_id:\"480560a02e60c64b285809a09ce204e4540a99a2a1a3283cf939c3c56dc3b2ab\" id:\"99c61a0b6497c56d2b50ef2f16d368d831f6fc749db3d53ea9dc3620a3c8a54f\" pid:5958 exited_at:{seconds:1752134186 nanos:877667729}"
Jul 10 07:56:30.587612 containerd[1564]: time="2025-07-10T07:56:30.587540599Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7bf39a6acdbaf5ee4727e57ad26763ce402aa84f84e0a8db905ad9e439bb8af5\" id:\"7fcc0ca213fe831a4b96e9111dcb0a08281a2b19eaeae9d7bbf65429ac4ee6f1\" pid:5981 exited_at:{seconds:1752134190 nanos:587070302}"
Jul 10 07:56:31.243195 systemd[1]: Started sshd@23-172.24.4.91:22-172.24.4.1:55052.service - OpenSSH per-connection server daemon (172.24.4.1:55052).
Jul 10 07:56:32.452324 sshd[5997]: Accepted publickey for core from 172.24.4.1 port 55052 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM
Jul 10 07:56:32.457010 sshd-session[5997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 07:56:32.474090 systemd-logind[1534]: New session 26 of user core.
Jul 10 07:56:32.487330 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 10 07:56:33.333622 sshd[6002]: Connection closed by 172.24.4.1 port 55052
Jul 10 07:56:33.334938 sshd-session[5997]: pam_unix(sshd:session): session closed for user core
Jul 10 07:56:33.343601 systemd-logind[1534]: Session 26 logged out. Waiting for processes to exit.
Jul 10 07:56:33.344112 systemd[1]: sshd@23-172.24.4.91:22-172.24.4.1:55052.service: Deactivated successfully.
Jul 10 07:56:33.350794 systemd[1]: session-26.scope: Deactivated successfully.
Jul 10 07:56:33.360086 systemd-logind[1534]: Removed session 26.
Jul 10 07:56:38.357532 systemd[1]: Started sshd@24-172.24.4.91:22-172.24.4.1:36164.service - OpenSSH per-connection server daemon (172.24.4.1:36164).
Jul 10 07:56:39.555014 sshd[6014]: Accepted publickey for core from 172.24.4.1 port 36164 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM
Jul 10 07:56:39.555923 sshd-session[6014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 07:56:39.561408 systemd-logind[1534]: New session 27 of user core.
Jul 10 07:56:39.569105 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 10 07:56:40.345602 sshd[6019]: Connection closed by 172.24.4.1 port 36164
Jul 10 07:56:40.344754 sshd-session[6014]: pam_unix(sshd:session): session closed for user core
Jul 10 07:56:40.353212 systemd[1]: sshd@24-172.24.4.91:22-172.24.4.1:36164.service: Deactivated successfully.
Jul 10 07:56:40.355855 systemd[1]: session-27.scope: Deactivated successfully.
Jul 10 07:56:40.359053 systemd-logind[1534]: Session 27 logged out. Waiting for processes to exit.
Jul 10 07:56:40.362202 systemd-logind[1534]: Removed session 27.
Jul 10 07:56:45.368081 systemd[1]: Started sshd@25-172.24.4.91:22-172.24.4.1:43474.service - OpenSSH per-connection server daemon (172.24.4.1:43474).
Jul 10 07:56:45.527472 containerd[1564]: time="2025-07-10T07:56:45.527233712Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8226fd8317aafebd48714fb76ab843b7dbef9a0505bdd28002646b4ee6ee5007\" id:\"a42a01c4625db95a25b6d64d96a5a63a0c1fd79866e89880df8a1cc48f310062\" pid:6049 exited_at:{seconds:1752134205 nanos:526301025}"
Jul 10 07:56:45.612670 containerd[1564]: time="2025-07-10T07:56:45.612568154Z" level=info msg="TaskExit event in podsandbox handler container_id:\"480560a02e60c64b285809a09ce204e4540a99a2a1a3283cf939c3c56dc3b2ab\" id:\"3df0dcc3908905beceef955ac859a653f5b06af448eee105e8553d224911e672\" pid:6073 exited_at:{seconds:1752134205 nanos:612195261}"
Jul 10 07:56:46.567060 sshd[6034]: Accepted publickey for core from 172.24.4.1 port 43474 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM
Jul 10 07:56:46.572923 sshd-session[6034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 07:56:46.590157 systemd-logind[1534]: New session 28 of user core.
Jul 10 07:56:46.603487 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 10 07:56:47.326817 sshd[6083]: Connection closed by 172.24.4.1 port 43474
Jul 10 07:56:47.329489 sshd-session[6034]: pam_unix(sshd:session): session closed for user core
Jul 10 07:56:47.345653 systemd[1]: sshd@25-172.24.4.91:22-172.24.4.1:43474.service: Deactivated successfully.
Jul 10 07:56:47.357649 systemd[1]: session-28.scope: Deactivated successfully.
Jul 10 07:56:47.366098 systemd-logind[1534]: Session 28 logged out. Waiting for processes to exit.
Jul 10 07:56:47.371780 systemd-logind[1534]: Removed session 28.
Jul 10 07:56:48.135410 containerd[1564]: time="2025-07-10T07:56:48.135249869Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8226fd8317aafebd48714fb76ab843b7dbef9a0505bdd28002646b4ee6ee5007\" id:\"7e27beba5b2ec5bfd0bb57b99752ed64e029df4cc555c4e25fceb3983114b5da\" pid:6107 exited_at:{seconds:1752134208 nanos:134574145}"
Jul 10 07:56:52.355265 systemd[1]: Started sshd@26-172.24.4.91:22-172.24.4.1:43480.service - OpenSSH per-connection server daemon (172.24.4.1:43480).
Jul 10 07:56:53.505072 sshd[6117]: Accepted publickey for core from 172.24.4.1 port 43480 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM
Jul 10 07:56:53.507027 sshd-session[6117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 07:56:53.515087 systemd-logind[1534]: New session 29 of user core.
Jul 10 07:56:53.524259 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 10 07:56:54.206324 sshd[6120]: Connection closed by 172.24.4.1 port 43480
Jul 10 07:56:54.208287 sshd-session[6117]: pam_unix(sshd:session): session closed for user core
Jul 10 07:56:54.222226 systemd-logind[1534]: Session 29 logged out. Waiting for processes to exit.
Jul 10 07:56:54.222531 systemd[1]: sshd@26-172.24.4.91:22-172.24.4.1:43480.service: Deactivated successfully.
Jul 10 07:56:54.230935 systemd[1]: session-29.scope: Deactivated successfully.
Jul 10 07:56:54.238240 systemd-logind[1534]: Removed session 29.
Jul 10 07:56:59.228843 systemd[1]: Started sshd@27-172.24.4.91:22-172.24.4.1:46270.service - OpenSSH per-connection server daemon (172.24.4.1:46270).
Jul 10 07:57:00.442712 sshd[6132]: Accepted publickey for core from 172.24.4.1 port 46270 ssh2: RSA SHA256:iigIp8oKXc41BDq8AMeyj9Pf4soP6CaUx1R17CYf3lM
Jul 10 07:57:00.444593 containerd[1564]: time="2025-07-10T07:57:00.443618722Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7bf39a6acdbaf5ee4727e57ad26763ce402aa84f84e0a8db905ad9e439bb8af5\" id:\"48c17f2cbd5276d7efa89c8dda8dcbe0ae28535713e226b85dadfd5d3de5d9c6\" pid:6146 exited_at:{seconds:1752134220 nanos:441780732}"
Jul 10 07:57:00.445272 sshd-session[6132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 07:57:00.452549 systemd-logind[1534]: New session 30 of user core.
Jul 10 07:57:00.458116 systemd[1]: Started session-30.scope - Session 30 of User core.
Jul 10 07:57:01.364672 sshd[6158]: Connection closed by 172.24.4.1 port 46270
Jul 10 07:57:01.365447 sshd-session[6132]: pam_unix(sshd:session): session closed for user core
Jul 10 07:57:01.378878 systemd[1]: sshd@27-172.24.4.91:22-172.24.4.1:46270.service: Deactivated successfully.
Jul 10 07:57:01.385501 systemd[1]: session-30.scope: Deactivated successfully.
Jul 10 07:57:01.389688 systemd-logind[1534]: Session 30 logged out. Waiting for processes to exit.
Jul 10 07:57:01.395522 systemd-logind[1534]: Removed session 30.