Jul 7 01:12:04.117248 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025
Jul 7 01:12:04.117274 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 7 01:12:04.117284 kernel: BIOS-provided physical RAM map:
Jul 7 01:12:04.117291 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 7 01:12:04.117298 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 7 01:12:04.117308 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 7 01:12:04.117316 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jul 7 01:12:04.117324 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jul 7 01:12:04.117331 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 7 01:12:04.117338 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 7 01:12:04.117345 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jul 7 01:12:04.117353 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 7 01:12:04.117360 kernel: NX (Execute Disable) protection: active
Jul 7 01:12:04.117367 kernel: APIC: Static calls initialized
Jul 7 01:12:04.117378 kernel: SMBIOS 3.0.0 present.
Jul 7 01:12:04.117386 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jul 7 01:12:04.117393 kernel: Hypervisor detected: KVM
Jul 7 01:12:04.117401 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 7 01:12:04.117409 kernel: kvm-clock: using sched offset of 3394847482 cycles
Jul 7 01:12:04.117419 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 7 01:12:04.117427 kernel: tsc: Detected 1996.249 MHz processor
Jul 7 01:12:04.117435 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 7 01:12:04.117443 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 7 01:12:04.117451 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jul 7 01:12:04.117459 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 7 01:12:04.117467 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 7 01:12:04.117475 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jul 7 01:12:04.117482 kernel: ACPI: Early table checksum verification disabled
Jul 7 01:12:04.117492 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jul 7 01:12:04.117500 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 01:12:04.117508 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 01:12:04.117516 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 01:12:04.117523 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jul 7 01:12:04.117531 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 01:12:04.117539 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 01:12:04.117547 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jul 7 01:12:04.117555 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jul 7 01:12:04.117565 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jul 7 01:12:04.117573 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jul 7 01:12:04.117581 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jul 7 01:12:04.117592 kernel: No NUMA configuration found
Jul 7 01:12:04.117601 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jul 7 01:12:04.117609 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
Jul 7 01:12:04.117619 kernel: Zone ranges:
Jul 7 01:12:04.117648 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 7 01:12:04.117658 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 7 01:12:04.117666 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jul 7 01:12:04.117674 kernel: Movable zone start for each node
Jul 7 01:12:04.117682 kernel: Early memory node ranges
Jul 7 01:12:04.117690 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 7 01:12:04.117698 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jul 7 01:12:04.117707 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jul 7 01:12:04.117718 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jul 7 01:12:04.117726 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 01:12:04.117734 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 7 01:12:04.117742 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jul 7 01:12:04.117750 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 7 01:12:04.117759 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 7 01:12:04.117767 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 7 01:12:04.117775 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 7 01:12:04.117783 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 7 01:12:04.117794 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 7 01:12:04.117802 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 7 01:12:04.117810 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 7 01:12:04.117818 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 7 01:12:04.117826 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 7 01:12:04.117834 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 7 01:12:04.117842 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jul 7 01:12:04.117850 kernel: Booting paravirtualized kernel on KVM
Jul 7 01:12:04.117859 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 7 01:12:04.117869 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 7 01:12:04.117878 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Jul 7 01:12:04.117886 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Jul 7 01:12:04.117894 kernel: pcpu-alloc: [0] 0 1
Jul 7 01:12:04.117903 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 7 01:12:04.117914 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 7 01:12:04.117923 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 01:12:04.117933 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 01:12:04.117942 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 01:12:04.117950 kernel: Fallback order for Node 0: 0
Jul 7 01:12:04.117958 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Jul 7 01:12:04.117966 kernel: Policy zone: Normal
Jul 7 01:12:04.117974 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 01:12:04.117982 kernel: software IO TLB: area num 2.
Jul 7 01:12:04.117991 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 227308K reserved, 0K cma-reserved)
Jul 7 01:12:04.117999 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 7 01:12:04.118009 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 7 01:12:04.118017 kernel: ftrace: allocated 149 pages with 4 groups
Jul 7 01:12:04.118025 kernel: Dynamic Preempt: voluntary
Jul 7 01:12:04.118033 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 01:12:04.118042 kernel: rcu: RCU event tracing is enabled.
Jul 7 01:12:04.118051 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 7 01:12:04.118059 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 01:12:04.118067 kernel: Rude variant of Tasks RCU enabled.
Jul 7 01:12:04.118075 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 01:12:04.118083 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 01:12:04.118095 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 7 01:12:04.118103 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 7 01:12:04.118111 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 01:12:04.118119 kernel: Console: colour VGA+ 80x25
Jul 7 01:12:04.118127 kernel: printk: console [tty0] enabled
Jul 7 01:12:04.118136 kernel: printk: console [ttyS0] enabled
Jul 7 01:12:04.118144 kernel: ACPI: Core revision 20230628
Jul 7 01:12:04.118152 kernel: APIC: Switch to symmetric I/O mode setup
Jul 7 01:12:04.118160 kernel: x2apic enabled
Jul 7 01:12:04.118171 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 7 01:12:04.118179 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 7 01:12:04.118188 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 7 01:12:04.118196 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jul 7 01:12:04.118204 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 7 01:12:04.118212 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 7 01:12:04.118220 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 7 01:12:04.118229 kernel: Spectre V2 : Mitigation: Retpolines
Jul 7 01:12:04.118237 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 7 01:12:04.118247 kernel: Speculative Store Bypass: Vulnerable
Jul 7 01:12:04.118255 kernel: x86/fpu: x87 FPU will use FXSAVE
Jul 7 01:12:04.118263 kernel: Freeing SMP alternatives memory: 32K
Jul 7 01:12:04.118272 kernel: pid_max: default: 32768 minimum: 301
Jul 7 01:12:04.118288 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 7 01:12:04.118298 kernel: landlock: Up and running.
Jul 7 01:12:04.118306 kernel: SELinux: Initializing.
Jul 7 01:12:04.118315 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 01:12:04.118324 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 01:12:04.118332 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jul 7 01:12:04.118341 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 01:12:04.118352 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 01:12:04.118361 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 01:12:04.118370 kernel: Performance Events: AMD PMU driver.
Jul 7 01:12:04.118378 kernel: ... version: 0
Jul 7 01:12:04.118387 kernel: ... bit width: 48
Jul 7 01:12:04.118398 kernel: ... generic registers: 4
Jul 7 01:12:04.118406 kernel: ... value mask: 0000ffffffffffff
Jul 7 01:12:04.118415 kernel: ... max period: 00007fffffffffff
Jul 7 01:12:04.118424 kernel: ... fixed-purpose events: 0
Jul 7 01:12:04.118432 kernel: ... event mask: 000000000000000f
Jul 7 01:12:04.118441 kernel: signal: max sigframe size: 1440
Jul 7 01:12:04.118449 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 01:12:04.118458 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 01:12:04.118466 kernel: smp: Bringing up secondary CPUs ...
Jul 7 01:12:04.118475 kernel: smpboot: x86: Booting SMP configuration:
Jul 7 01:12:04.118486 kernel: .... node #0, CPUs: #1
Jul 7 01:12:04.118494 kernel: smp: Brought up 1 node, 2 CPUs
Jul 7 01:12:04.118503 kernel: smpboot: Max logical packages: 2
Jul 7 01:12:04.118511 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jul 7 01:12:04.118520 kernel: devtmpfs: initialized
Jul 7 01:12:04.118528 kernel: x86/mm: Memory block size: 128MB
Jul 7 01:12:04.118537 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 01:12:04.118546 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 7 01:12:04.118554 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 01:12:04.118566 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 01:12:04.118574 kernel: audit: initializing netlink subsys (disabled)
Jul 7 01:12:04.118583 kernel: audit: type=2000 audit(1751850723.206:1): state=initialized audit_enabled=0 res=1
Jul 7 01:12:04.118591 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 01:12:04.118600 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 7 01:12:04.118609 kernel: cpuidle: using governor menu
Jul 7 01:12:04.118617 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 01:12:04.118626 kernel: dca service started, version 1.12.1
Jul 7 01:12:04.120671 kernel: PCI: Using configuration type 1 for base access
Jul 7 01:12:04.120687 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 7 01:12:04.120696 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 01:12:04.120705 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 01:12:04.120713 kernel: ACPI: Added _OSI(Module Device)
Jul 7 01:12:04.120722 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 01:12:04.120731 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 01:12:04.120739 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 01:12:04.120748 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 7 01:12:04.120756 kernel: ACPI: Interpreter enabled
Jul 7 01:12:04.120768 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 7 01:12:04.120776 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 7 01:12:04.120785 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 7 01:12:04.120794 kernel: PCI: Using E820 reservations for host bridge windows
Jul 7 01:12:04.120803 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 7 01:12:04.120826 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 01:12:04.121005 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 01:12:04.121104 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 7 01:12:04.121196 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 7 01:12:04.121209 kernel: acpiphp: Slot [3] registered
Jul 7 01:12:04.121218 kernel: acpiphp: Slot [4] registered
Jul 7 01:12:04.121227 kernel: acpiphp: Slot [5] registered
Jul 7 01:12:04.121236 kernel: acpiphp: Slot [6] registered
Jul 7 01:12:04.121244 kernel: acpiphp: Slot [7] registered
Jul 7 01:12:04.121253 kernel: acpiphp: Slot [8] registered
Jul 7 01:12:04.121261 kernel: acpiphp: Slot [9] registered
Jul 7 01:12:04.121273 kernel: acpiphp: Slot [10] registered
Jul 7 01:12:04.121281 kernel: acpiphp: Slot [11] registered
Jul 7 01:12:04.121290 kernel: acpiphp: Slot [12] registered
Jul 7 01:12:04.121298 kernel: acpiphp: Slot [13] registered
Jul 7 01:12:04.121307 kernel: acpiphp: Slot [14] registered
Jul 7 01:12:04.121315 kernel: acpiphp: Slot [15] registered
Jul 7 01:12:04.121324 kernel: acpiphp: Slot [16] registered
Jul 7 01:12:04.121332 kernel: acpiphp: Slot [17] registered
Jul 7 01:12:04.121341 kernel: acpiphp: Slot [18] registered
Jul 7 01:12:04.121349 kernel: acpiphp: Slot [19] registered
Jul 7 01:12:04.121361 kernel: acpiphp: Slot [20] registered
Jul 7 01:12:04.121370 kernel: acpiphp: Slot [21] registered
Jul 7 01:12:04.121378 kernel: acpiphp: Slot [22] registered
Jul 7 01:12:04.121387 kernel: acpiphp: Slot [23] registered
Jul 7 01:12:04.121395 kernel: acpiphp: Slot [24] registered
Jul 7 01:12:04.121404 kernel: acpiphp: Slot [25] registered
Jul 7 01:12:04.121412 kernel: acpiphp: Slot [26] registered
Jul 7 01:12:04.121421 kernel: acpiphp: Slot [27] registered
Jul 7 01:12:04.121429 kernel: acpiphp: Slot [28] registered
Jul 7 01:12:04.121440 kernel: acpiphp: Slot [29] registered
Jul 7 01:12:04.121448 kernel: acpiphp: Slot [30] registered
Jul 7 01:12:04.121456 kernel: acpiphp: Slot [31] registered
Jul 7 01:12:04.121465 kernel: PCI host bridge to bus 0000:00
Jul 7 01:12:04.121560 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 7 01:12:04.121667 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 7 01:12:04.121754 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 7 01:12:04.121836 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 7 01:12:04.121923 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jul 7 01:12:04.122004 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 01:12:04.122111 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 7 01:12:04.122212 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 7 01:12:04.122312 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jul 7 01:12:04.122402 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jul 7 01:12:04.122498 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jul 7 01:12:04.122589 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jul 7 01:12:04.124733 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jul 7 01:12:04.124839 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jul 7 01:12:04.124938 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jul 7 01:12:04.125028 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 7 01:12:04.125116 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 7 01:12:04.125226 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jul 7 01:12:04.125319 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jul 7 01:12:04.125410 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Jul 7 01:12:04.125501 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jul 7 01:12:04.125592 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jul 7 01:12:04.125702 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 7 01:12:04.125809 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jul 7 01:12:04.125904 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jul 7 01:12:04.125995 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jul 7 01:12:04.126085 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Jul 7 01:12:04.126175 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jul 7 01:12:04.126271 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jul 7 01:12:04.126363 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jul 7 01:12:04.126458 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jul 7 01:12:04.126548 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Jul 7 01:12:04.128693 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jul 7 01:12:04.128796 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jul 7 01:12:04.128902 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jul 7 01:12:04.128998 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jul 7 01:12:04.129087 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jul 7 01:12:04.129180 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Jul 7 01:12:04.129267 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Jul 7 01:12:04.129281 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 7 01:12:04.129290 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 7 01:12:04.129299 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 7 01:12:04.129307 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 7 01:12:04.129316 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 7 01:12:04.129325 kernel: iommu: Default domain type: Translated
Jul 7 01:12:04.129333 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 7 01:12:04.129346 kernel: PCI: Using ACPI for IRQ routing
Jul 7 01:12:04.129355 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 7 01:12:04.129364 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 7 01:12:04.129372 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jul 7 01:12:04.129459 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 7 01:12:04.129546 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 7 01:12:04.129651 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 7 01:12:04.129665 kernel: vgaarb: loaded
Jul 7 01:12:04.129677 kernel: clocksource: Switched to clocksource kvm-clock
Jul 7 01:12:04.129686 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 01:12:04.129695 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 01:12:04.129703 kernel: pnp: PnP ACPI init
Jul 7 01:12:04.129797 kernel: pnp 00:03: [dma 2]
Jul 7 01:12:04.129811 kernel: pnp: PnP ACPI: found 5 devices
Jul 7 01:12:04.129820 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 7 01:12:04.129829 kernel: NET: Registered PF_INET protocol family
Jul 7 01:12:04.129838 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 01:12:04.129850 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 01:12:04.129859 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 01:12:04.129867 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 01:12:04.129876 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 01:12:04.129885 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 01:12:04.129893 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 01:12:04.129902 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 01:12:04.129911 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 01:12:04.129920 kernel: NET: Registered PF_XDP protocol family
Jul 7 01:12:04.130003 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 7 01:12:04.130083 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 7 01:12:04.130163 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 7 01:12:04.130243 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jul 7 01:12:04.130321 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jul 7 01:12:04.130413 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 7 01:12:04.130504 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 7 01:12:04.130522 kernel: PCI: CLS 0 bytes, default 64
Jul 7 01:12:04.130531 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 7 01:12:04.130540 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jul 7 01:12:04.130549 kernel: Initialise system trusted keyrings
Jul 7 01:12:04.130557 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 01:12:04.130566 kernel: Key type asymmetric registered
Jul 7 01:12:04.130575 kernel: Asymmetric key parser 'x509' registered
Jul 7 01:12:04.130583 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 7 01:12:04.130593 kernel: io scheduler mq-deadline registered
Jul 7 01:12:04.130604 kernel: io scheduler kyber registered
Jul 7 01:12:04.130613 kernel: io scheduler bfq registered
Jul 7 01:12:04.130622 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 7 01:12:04.132674 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jul 7 01:12:04.132685 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 7 01:12:04.132694 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 7 01:12:04.132703 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 7 01:12:04.132712 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 01:12:04.132720 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 01:12:04.132733 kernel: random: crng init done
Jul 7 01:12:04.132742 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 7 01:12:04.132750 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 7 01:12:04.132759 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 7 01:12:04.132875 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 7 01:12:04.132890 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 7 01:12:04.132969 kernel: rtc_cmos 00:04: registered as rtc0
Jul 7 01:12:04.133049 kernel: rtc_cmos 00:04: setting system clock to 2025-07-07T01:12:03 UTC (1751850723)
Jul 7 01:12:04.133134 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 7 01:12:04.133148 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 7 01:12:04.133157 kernel: NET: Registered PF_INET6 protocol family
Jul 7 01:12:04.133165 kernel: Segment Routing with IPv6
Jul 7 01:12:04.133174 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 01:12:04.133183 kernel: NET: Registered PF_PACKET protocol family
Jul 7 01:12:04.133191 kernel: Key type dns_resolver registered
Jul 7 01:12:04.133200 kernel: IPI shorthand broadcast: enabled
Jul 7 01:12:04.133209 kernel: sched_clock: Marking stable (992008463, 180115584)->(1210636529, -38512482)
Jul 7 01:12:04.133221 kernel: registered taskstats version 1
Jul 7 01:12:04.133230 kernel: Loading compiled-in X.509 certificates
Jul 7 01:12:04.133238 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b'
Jul 7 01:12:04.133247 kernel: Key type .fscrypt registered
Jul 7 01:12:04.133256 kernel: Key type fscrypt-provisioning registered
Jul 7 01:12:04.133265 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 01:12:04.133273 kernel: ima: Allocated hash algorithm: sha1
Jul 7 01:12:04.133282 kernel: ima: No architecture policies found
Jul 7 01:12:04.133291 kernel: clk: Disabling unused clocks
Jul 7 01:12:04.133301 kernel: Freeing unused kernel image (initmem) memory: 42868K
Jul 7 01:12:04.133310 kernel: Write protecting the kernel read-only data: 36864k
Jul 7 01:12:04.133319 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Jul 7 01:12:04.133328 kernel: Run /init as init process
Jul 7 01:12:04.133336 kernel: with arguments:
Jul 7 01:12:04.133345 kernel: /init
Jul 7 01:12:04.133353 kernel: with environment:
Jul 7 01:12:04.133361 kernel: HOME=/
Jul 7 01:12:04.133370 kernel: TERM=linux
Jul 7 01:12:04.133380 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 01:12:04.133392 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 01:12:04.133403 systemd[1]: Detected virtualization kvm.
Jul 7 01:12:04.133413 systemd[1]: Detected architecture x86-64.
Jul 7 01:12:04.133422 systemd[1]: Running in initrd.
Jul 7 01:12:04.133431 systemd[1]: No hostname configured, using default hostname.
Jul 7 01:12:04.133440 systemd[1]: Hostname set to .
Jul 7 01:12:04.133452 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 01:12:04.133461 systemd[1]: Queued start job for default target initrd.target.
Jul 7 01:12:04.133471 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 01:12:04.133480 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 01:12:04.133490 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 01:12:04.133500 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 01:12:04.133509 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 01:12:04.133529 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 01:12:04.133542 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 01:12:04.133552 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 01:12:04.133562 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 01:12:04.133572 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 01:12:04.133581 systemd[1]: Reached target paths.target - Path Units.
Jul 7 01:12:04.133593 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 01:12:04.133603 systemd[1]: Reached target swap.target - Swaps.
Jul 7 01:12:04.133612 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 01:12:04.133622 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 01:12:04.133646 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 01:12:04.133656 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 01:12:04.133666 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 7 01:12:04.133676 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 01:12:04.133688 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 01:12:04.133698 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 01:12:04.133707 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 01:12:04.133717 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 01:12:04.133727 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 01:12:04.133736 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 01:12:04.133746 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 01:12:04.133755 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 01:12:04.133765 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 01:12:04.133777 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 01:12:04.133787 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 01:12:04.133796 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 01:12:04.133806 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 01:12:04.133834 systemd-journald[184]: Collecting audit messages is disabled.
Jul 7 01:12:04.133860 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 01:12:04.133871 systemd-journald[184]: Journal started
Jul 7 01:12:04.133896 systemd-journald[184]: Runtime Journal (/run/log/journal/442b63a357d948b981c95320b4040032) is 8.0M, max 78.3M, 70.3M free.
Jul 7 01:12:04.100131 systemd-modules-load[185]: Inserted module 'overlay'
Jul 7 01:12:04.171540 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 01:12:04.171581 kernel: Bridge firewalling registered
Jul 7 01:12:04.146018 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jul 7 01:12:04.174229 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 01:12:04.175031 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 01:12:04.175758 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 01:12:04.177087 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 01:12:04.183787 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 01:12:04.186761 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 01:12:04.188145 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 01:12:04.194822 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 01:12:04.207709 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 01:12:04.209263 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 01:12:04.211405 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 01:12:04.216858 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 01:12:04.218316 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 01:12:04.227833 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 01:12:04.241643 dracut-cmdline[222]: dracut-dracut-053
Jul 7 01:12:04.242294 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 7 01:12:04.253084 systemd-resolved[220]: Positive Trust Anchors:
Jul 7 01:12:04.253795 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 01:12:04.253838 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 01:12:04.259840 systemd-resolved[220]: Defaulting to hostname 'linux'.
Jul 7 01:12:04.260750 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 01:12:04.261648 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 01:12:04.332723 kernel: SCSI subsystem initialized
Jul 7 01:12:04.343688 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 01:12:04.355880 kernel: iscsi: registered transport (tcp)
Jul 7 01:12:04.377882 kernel: iscsi: registered transport (qla4xxx)
Jul 7 01:12:04.377974 kernel: QLogic iSCSI HBA Driver
Jul 7 01:12:04.439018 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 01:12:04.446892 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 01:12:04.517799 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 01:12:04.517895 kernel: device-mapper: uevent: version 1.0.3
Jul 7 01:12:04.524698 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 7 01:12:04.584709 kernel: raid6: sse2x4 gen() 5145 MB/s
Jul 7 01:12:04.603710 kernel: raid6: sse2x2 gen() 5966 MB/s
Jul 7 01:12:04.622021 kernel: raid6: sse2x1 gen() 9531 MB/s
Jul 7 01:12:04.622097 kernel: raid6: using algorithm sse2x1 gen() 9531 MB/s
Jul 7 01:12:04.641069 kernel: raid6: .... xor() 7338 MB/s, rmw enabled
Jul 7 01:12:04.641131 kernel: raid6: using ssse3x2 recovery algorithm
Jul 7 01:12:04.662894 kernel: xor: measuring software checksum speed
Jul 7 01:12:04.662957 kernel: prefetch64-sse : 18519 MB/sec
Jul 7 01:12:04.666245 kernel: generic_sse : 15441 MB/sec
Jul 7 01:12:04.666306 kernel: xor: using function: prefetch64-sse (18519 MB/sec)
Jul 7 01:12:04.841766 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 01:12:04.858981 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 01:12:04.868922 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 01:12:04.899567 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Jul 7 01:12:04.910506 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 01:12:04.921982 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 01:12:04.964167 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Jul 7 01:12:05.018819 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 01:12:05.027941 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 01:12:05.093345 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 01:12:05.104965 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 01:12:05.136442 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 01:12:05.147976 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 01:12:05.151125 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 01:12:05.155023 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 01:12:05.165873 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 01:12:05.184491 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 01:12:05.202519 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jul 7 01:12:05.212653 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jul 7 01:12:05.217763 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 01:12:05.217892 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 01:12:05.220191 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 01:12:05.220724 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 01:12:05.220865 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 01:12:05.233128 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 01:12:05.233158 kernel: GPT:17805311 != 20971519
Jul 7 01:12:05.233172 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 01:12:05.233186 kernel: GPT:17805311 != 20971519
Jul 7 01:12:05.233198 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 01:12:05.233211 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 01:12:05.233224 kernel: libata version 3.00 loaded.
Jul 7 01:12:05.221451 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 01:12:05.234003 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 01:12:05.237942 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 7 01:12:05.238087 kernel: scsi host0: ata_piix
Jul 7 01:12:05.243881 kernel: scsi host1: ata_piix
Jul 7 01:12:05.244026 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jul 7 01:12:05.245809 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jul 7 01:12:05.272321 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (458)
Jul 7 01:12:05.285674 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (468)
Jul 7 01:12:05.289070 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 7 01:12:05.309428 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 01:12:05.316087 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 7 01:12:05.325993 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 01:12:05.330687 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 7 01:12:05.331270 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 7 01:12:05.336760 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 01:12:05.339766 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 01:12:05.350937 disk-uuid[503]: Primary Header is updated.
Jul 7 01:12:05.350937 disk-uuid[503]: Secondary Entries is updated.
Jul 7 01:12:05.350937 disk-uuid[503]: Secondary Header is updated.
Jul 7 01:12:05.360497 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 01:12:05.363659 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 01:12:05.365609 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 01:12:06.378737 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 01:12:06.379980 disk-uuid[505]: The operation has completed successfully.
Jul 7 01:12:06.451063 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 01:12:06.451287 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 01:12:06.478775 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 01:12:06.495144 sh[526]: Success
Jul 7 01:12:06.533682 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Jul 7 01:12:06.632597 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 01:12:06.634702 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 01:12:06.648870 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 01:12:06.682694 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f
Jul 7 01:12:06.682792 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 7 01:12:06.687412 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 7 01:12:06.692397 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 7 01:12:06.696249 kernel: BTRFS info (device dm-0): using free space tree
Jul 7 01:12:06.716203 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 01:12:06.718665 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 01:12:06.727937 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 01:12:06.732935 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 01:12:06.766591 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 01:12:06.766723 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 01:12:06.766757 kernel: BTRFS info (device vda6): using free space tree
Jul 7 01:12:06.776689 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 01:12:06.798782 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 7 01:12:06.805311 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 01:12:06.821791 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 01:12:06.832033 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 01:12:06.906353 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 01:12:06.921824 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 01:12:06.945913 systemd-networkd[709]: lo: Link UP
Jul 7 01:12:06.945922 systemd-networkd[709]: lo: Gained carrier
Jul 7 01:12:06.948005 systemd-networkd[709]: Enumeration completed
Jul 7 01:12:06.948112 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 01:12:06.948713 systemd-networkd[709]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 01:12:06.948717 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 01:12:06.948974 systemd[1]: Reached target network.target - Network.
Jul 7 01:12:06.950141 systemd-networkd[709]: eth0: Link UP
Jul 7 01:12:06.950145 systemd-networkd[709]: eth0: Gained carrier
Jul 7 01:12:06.950152 systemd-networkd[709]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 01:12:06.971692 systemd-networkd[709]: eth0: DHCPv4 address 172.24.4.54/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jul 7 01:12:06.997798 ignition[627]: Ignition 2.19.0
Jul 7 01:12:06.997815 ignition[627]: Stage: fetch-offline
Jul 7 01:12:06.999563 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 01:12:06.997857 ignition[627]: no configs at "/usr/lib/ignition/base.d"
Jul 7 01:12:06.997868 ignition[627]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 01:12:06.997972 ignition[627]: parsed url from cmdline: ""
Jul 7 01:12:06.997977 ignition[627]: no config URL provided
Jul 7 01:12:06.997983 ignition[627]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 01:12:06.997992 ignition[627]: no config at "/usr/lib/ignition/user.ign"
Jul 7 01:12:06.997998 ignition[627]: failed to fetch config: resource requires networking
Jul 7 01:12:06.998213 ignition[627]: Ignition finished successfully
Jul 7 01:12:07.010805 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 7 01:12:07.023801 ignition[718]: Ignition 2.19.0
Jul 7 01:12:07.023814 ignition[718]: Stage: fetch
Jul 7 01:12:07.023990 ignition[718]: no configs at "/usr/lib/ignition/base.d"
Jul 7 01:12:07.024001 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 01:12:07.024090 ignition[718]: parsed url from cmdline: ""
Jul 7 01:12:07.024094 ignition[718]: no config URL provided
Jul 7 01:12:07.024099 ignition[718]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 01:12:07.024107 ignition[718]: no config at "/usr/lib/ignition/user.ign"
Jul 7 01:12:07.024228 ignition[718]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jul 7 01:12:07.024267 ignition[718]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jul 7 01:12:07.024297 ignition[718]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jul 7 01:12:07.210385 ignition[718]: GET result: OK
Jul 7 01:12:07.211124 ignition[718]: parsing config with SHA512: 134c9a8ad49866ec2f9f8d8669001dd86aeba317518e9502dc635e4bf22caedbb58ed3246abe8f3a385b405bc66e4388f044203fcf170ad48d5a3e5e16e5bb51
Jul 7 01:12:07.219443 unknown[718]: fetched base config from "system"
Jul 7 01:12:07.219465 unknown[718]: fetched base config from "system"
Jul 7 01:12:07.220359 ignition[718]: fetch: fetch complete
Jul 7 01:12:07.219484 unknown[718]: fetched user config from "openstack"
Jul 7 01:12:07.220371 ignition[718]: fetch: fetch passed
Jul 7 01:12:07.224266 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 7 01:12:07.220449 ignition[718]: Ignition finished successfully
Jul 7 01:12:07.226090 systemd-resolved[220]: Detected conflict on linux IN A 172.24.4.54
Jul 7 01:12:07.226106 systemd-resolved[220]: Hostname conflict, changing published hostname from 'linux' to 'linux6'.
Jul 7 01:12:07.241850 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 01:12:07.275409 ignition[724]: Ignition 2.19.0
Jul 7 01:12:07.275437 ignition[724]: Stage: kargs
Jul 7 01:12:07.275909 ignition[724]: no configs at "/usr/lib/ignition/base.d"
Jul 7 01:12:07.275936 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 01:12:07.280865 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 01:12:07.278268 ignition[724]: kargs: kargs passed
Jul 7 01:12:07.278376 ignition[724]: Ignition finished successfully
Jul 7 01:12:07.298022 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 01:12:07.329627 ignition[730]: Ignition 2.19.0
Jul 7 01:12:07.329724 ignition[730]: Stage: disks
Jul 7 01:12:07.330174 ignition[730]: no configs at "/usr/lib/ignition/base.d"
Jul 7 01:12:07.330211 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 01:12:07.335179 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 01:12:07.332610 ignition[730]: disks: disks passed
Jul 7 01:12:07.339128 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 01:12:07.332762 ignition[730]: Ignition finished successfully
Jul 7 01:12:07.341128 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 01:12:07.343844 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 01:12:07.346946 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 01:12:07.349517 systemd[1]: Reached target basic.target - Basic System.
Jul 7 01:12:07.360091 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 01:12:07.394475 systemd-fsck[738]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jul 7 01:12:07.407779 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 01:12:07.419817 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 01:12:07.576699 kernel: EXT4-fs (vda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none.
Jul 7 01:12:07.577178 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 01:12:07.578230 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 01:12:07.584860 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 01:12:07.588215 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 01:12:07.589432 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 7 01:12:07.595018 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jul 7 01:12:07.610952 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (746)
Jul 7 01:12:07.610982 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 01:12:07.610996 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 01:12:07.611009 kernel: BTRFS info (device vda6): using free space tree
Jul 7 01:12:07.597497 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 01:12:07.597538 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 01:12:07.623649 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 01:12:07.624124 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 01:12:07.633935 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 01:12:07.639939 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 01:12:07.743465 initrd-setup-root[774]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 01:12:07.754537 initrd-setup-root[781]: cut: /sysroot/etc/group: No such file or directory
Jul 7 01:12:07.762811 initrd-setup-root[788]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 01:12:07.771723 initrd-setup-root[795]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 01:12:07.895254 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 01:12:07.900856 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 01:12:07.905144 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 01:12:07.919056 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 01:12:07.920623 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 01:12:07.961310 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 01:12:07.974423 ignition[862]: INFO : Ignition 2.19.0
Jul 7 01:12:07.976195 ignition[862]: INFO : Stage: mount
Jul 7 01:12:07.976195 ignition[862]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 01:12:07.976195 ignition[862]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 01:12:07.981487 ignition[862]: INFO : mount: mount passed
Jul 7 01:12:07.981487 ignition[862]: INFO : Ignition finished successfully
Jul 7 01:12:07.978296 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 01:12:08.014876 systemd-networkd[709]: eth0: Gained IPv6LL
Jul 7 01:12:14.833320 coreos-metadata[748]: Jul 07 01:12:14.833 WARN failed to locate config-drive, using the metadata service API instead
Jul 7 01:12:14.873433 coreos-metadata[748]: Jul 07 01:12:14.873 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jul 7 01:12:14.889550 coreos-metadata[748]: Jul 07 01:12:14.889 INFO Fetch successful
Jul 7 01:12:14.891171 coreos-metadata[748]: Jul 07 01:12:14.889 INFO wrote hostname ci-4081-3-4-0-2961e92ed0.novalocal to /sysroot/etc/hostname
Jul 7 01:12:14.894960 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jul 7 01:12:14.895284 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jul 7 01:12:14.905985 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 01:12:14.932986 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 01:12:14.952721 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (879)
Jul 7 01:12:14.960507 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 01:12:14.960607 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 01:12:14.967096 kernel: BTRFS info (device vda6): using free space tree
Jul 7 01:12:14.976723 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 01:12:14.982823 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 01:12:15.031345 ignition[897]: INFO : Ignition 2.19.0
Jul 7 01:12:15.031345 ignition[897]: INFO : Stage: files
Jul 7 01:12:15.035204 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 01:12:15.035204 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 01:12:15.035204 ignition[897]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 01:12:15.040689 ignition[897]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 01:12:15.040689 ignition[897]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 01:12:15.044586 ignition[897]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 01:12:15.044586 ignition[897]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 01:12:15.044586 ignition[897]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 01:12:15.043715 unknown[897]: wrote ssh authorized keys file for user: core
Jul 7 01:12:15.052398 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 7 01:12:15.052398 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jul 7 01:12:15.129670 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 01:12:15.444504 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 7 01:12:15.444504 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 01:12:15.449484 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 01:12:15.449484 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 01:12:15.449484 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 01:12:15.449484 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 01:12:15.449484 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 01:12:15.449484 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 01:12:15.449484 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 01:12:15.449484 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 01:12:15.449484 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 01:12:15.449484 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 7 01:12:15.449484 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 7 01:12:15.449484 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 7 01:12:15.449484 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jul 7 01:12:16.225587 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 7 01:12:18.945383 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 7 01:12:18.945383 ignition[897]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 7 01:12:18.956855 ignition[897]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 01:12:18.956855 ignition[897]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 01:12:18.956855 ignition[897]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 7 01:12:18.956855 ignition[897]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 01:12:18.956855 ignition[897]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 01:12:18.956855 ignition[897]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 01:12:18.956855 ignition[897]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 01:12:18.956855 ignition[897]: INFO : files: files passed
Jul 7 01:12:18.956855 ignition[897]: INFO : Ignition finished successfully
Jul 7 01:12:18.952093 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 01:12:18.964073 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 01:12:18.968777 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 01:12:18.970433 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 01:12:18.970549 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 01:12:18.996518 initrd-setup-root-after-ignition[926]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 01:12:18.996518 initrd-setup-root-after-ignition[926]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 01:12:19.003094 initrd-setup-root-after-ignition[930]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 01:12:19.003416 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 01:12:19.006262 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 01:12:19.013987 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 01:12:19.039689 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 01:12:19.039914 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 01:12:19.042250 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 01:12:19.055261 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 01:12:19.056148 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 01:12:19.064028 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 01:12:19.092404 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 01:12:19.100284 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 01:12:19.153294 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 01:12:19.155166 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 01:12:19.157969 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 01:12:19.160422 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 01:12:19.160838 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 01:12:19.163788 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 01:12:19.167568 systemd[1]: Stopped target basic.target - Basic System. Jul 7 01:12:19.170929 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 01:12:19.173540 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 01:12:19.176857 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 01:12:19.180209 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 01:12:19.183452 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 01:12:19.186936 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 01:12:19.190164 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 01:12:19.193107 systemd[1]: Stopped target swap.target - Swaps. Jul 7 01:12:19.195254 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 01:12:19.195781 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 01:12:19.198101 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 01:12:19.199291 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 01:12:19.201472 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 01:12:19.201649 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 01:12:19.203365 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 01:12:19.203480 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 01:12:19.205836 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 01:12:19.205962 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 01:12:19.206887 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 01:12:19.206998 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 01:12:19.218909 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 01:12:19.219908 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 01:12:19.220048 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 01:12:19.223836 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 01:12:19.226070 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jul 7 01:12:19.226252 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 01:12:19.232964 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 01:12:19.233326 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 01:12:19.241775 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 01:12:19.242178 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 01:12:19.255906 ignition[950]: INFO : Ignition 2.19.0 Jul 7 01:12:19.255906 ignition[950]: INFO : Stage: umount Jul 7 01:12:19.255906 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 01:12:19.255906 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 01:12:19.255906 ignition[950]: INFO : umount: umount passed Jul 7 01:12:19.255906 ignition[950]: INFO : Ignition finished successfully Jul 7 01:12:19.259259 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 01:12:19.259387 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 01:12:19.262775 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 01:12:19.262927 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 01:12:19.264442 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 01:12:19.264531 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 01:12:19.267581 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 7 01:12:19.267663 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 7 01:12:19.268757 systemd[1]: Stopped target network.target - Network. Jul 7 01:12:19.270046 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 01:12:19.270115 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 01:12:19.271421 systemd[1]: Stopped target paths.target - Path Units. Jul 7 01:12:19.275008 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 01:12:19.278739 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 01:12:19.279446 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 01:12:19.280942 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 01:12:19.282245 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 01:12:19.282308 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 01:12:19.283392 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 01:12:19.283443 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 01:12:19.284515 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 01:12:19.284585 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 01:12:19.285720 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 01:12:19.285787 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 01:12:19.287145 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 01:12:19.288699 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 01:12:19.291204 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 01:12:19.291831 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 01:12:19.291992 systemd-networkd[709]: eth0: DHCPv6 lease lost Jul 7 01:12:19.293799 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Jul 7 01:12:19.297141 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 01:12:19.297443 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 01:12:19.300015 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 01:12:19.300272 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 01:12:19.303953 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 01:12:19.304296 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 01:12:19.305145 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 01:12:19.305203 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 01:12:19.313781 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 01:12:19.314450 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 01:12:19.314506 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 01:12:19.316595 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 01:12:19.316664 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 01:12:19.318096 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 01:12:19.318173 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 01:12:19.319914 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 01:12:19.319961 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 01:12:19.321296 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 01:12:19.333016 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 01:12:19.333172 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 01:12:19.334692 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 01:12:19.334761 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 01:12:19.336799 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 01:12:19.336832 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 01:12:19.340536 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 01:12:19.340582 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 01:12:19.343520 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 01:12:19.343567 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 01:12:19.346294 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 01:12:19.346371 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 01:12:19.354938 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 01:12:19.355933 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 01:12:19.356029 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 01:12:19.358126 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 01:12:19.358202 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 01:12:19.361835 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 01:12:19.361927 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Jul 7 01:12:19.367360 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 01:12:19.367468 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 01:12:19.368882 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 01:12:19.381792 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 01:12:19.387973 systemd[1]: Switching root. Jul 7 01:12:19.419887 systemd-journald[184]: Journal stopped Jul 7 01:12:21.295132 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jul 7 01:12:21.295253 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 01:12:21.295284 kernel: SELinux: policy capability open_perms=1 Jul 7 01:12:21.295304 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 01:12:21.295319 kernel: SELinux: policy capability always_check_network=0 Jul 7 01:12:21.295340 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 01:12:21.295355 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 01:12:21.295374 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 01:12:21.295386 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 01:12:21.295397 kernel: audit: type=1403 audit(1751850740.130:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 01:12:21.295423 systemd[1]: Successfully loaded SELinux policy in 77.944ms. Jul 7 01:12:21.295457 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.636ms. Jul 7 01:12:21.295477 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 01:12:21.295489 systemd[1]: Detected virtualization kvm. Jul 7 01:12:21.295507 systemd[1]: Detected architecture x86-64. Jul 7 01:12:21.295519 systemd[1]: Detected first boot. Jul 7 01:12:21.295534 systemd[1]: Hostname set to <ci-4081-3-4-0-2961e92ed0.novalocal>. Jul 7 01:12:21.295553 systemd[1]: Initializing machine ID from VM UUID. Jul 7 01:12:21.295571 zram_generator::config[993]: No configuration found. Jul 7 01:12:21.295591 systemd[1]: Populated /etc with preset unit settings. Jul 7 01:12:21.295607 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 01:12:21.295619 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 01:12:21.297659 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 01:12:21.297686 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 01:12:21.297708 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 01:12:21.297722 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 01:12:21.297735 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 01:12:21.297751 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 01:12:21.297772 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 01:12:21.297788 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 01:12:21.297800 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 01:12:21.297815 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 01:12:21.297831 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 01:12:21.297848 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 01:12:21.297861 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 01:12:21.297873 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 01:12:21.297893 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 01:12:21.297911 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 7 01:12:21.297924 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 01:12:21.297939 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 01:12:21.297955 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 01:12:21.297967 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 01:12:21.297979 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 01:12:21.297997 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 01:12:21.298013 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 01:12:21.298025 systemd[1]: Reached target slices.target - Slice Units. Jul 7 01:12:21.298043 systemd[1]: Reached target swap.target - Swaps. Jul 7 01:12:21.298058 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 01:12:21.298070 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 01:12:21.298082 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 01:12:21.298095 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 01:12:21.298107 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 01:12:21.298124 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 01:12:21.298140 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 01:12:21.298152 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 01:12:21.298167 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 01:12:21.298179 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 01:12:21.298191 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 01:12:21.298203 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 01:12:21.298219 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 01:12:21.298235 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 01:12:21.298253 systemd[1]: Reached target machines.target - Containers. Jul 7 01:12:21.298265 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 01:12:21.298281 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 01:12:21.298293 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Jul 7 01:12:21.298308 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 01:12:21.298326 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 01:12:21.298338 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 01:12:21.298350 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 01:12:21.298367 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 01:12:21.298380 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 01:12:21.298396 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 01:12:21.298411 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 01:12:21.298423 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 01:12:21.298435 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 01:12:21.298450 systemd[1]: Stopped systemd-fsck-usr.service. Jul 7 01:12:21.298465 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 01:12:21.298476 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 01:12:21.298496 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 01:12:21.298509 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 01:12:21.298521 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 01:12:21.298536 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 01:12:21.298548 systemd[1]: Stopped verity-setup.service. Jul 7 01:12:21.298560 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 01:12:21.298576 kernel: loop: module loaded Jul 7 01:12:21.298587 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 01:12:21.298600 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 01:12:21.298617 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 01:12:21.298642 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 01:12:21.298656 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 01:12:21.298668 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 01:12:21.298687 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 01:12:21.298699 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 01:12:21.298711 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 01:12:21.298727 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 01:12:21.298740 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 01:12:21.298752 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 01:12:21.298765 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 01:12:21.298783 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 01:12:21.298795 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 01:12:21.298806 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jul 7 01:12:21.298821 kernel: fuse: init (API version 7.39) Jul 7 01:12:21.298833 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 01:12:21.298845 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 01:12:21.298861 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 01:12:21.298894 systemd-journald[1082]: Collecting audit messages is disabled. Jul 7 01:12:21.298957 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 01:12:21.298971 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 01:12:21.298984 systemd-journald[1082]: Journal started Jul 7 01:12:21.299012 systemd-journald[1082]: Runtime Journal (/run/log/journal/442b63a357d948b981c95320b4040032) is 8.0M, max 78.3M, 70.3M free. Jul 7 01:12:20.835714 systemd[1]: Queued start job for default target multi-user.target. Jul 7 01:12:20.872948 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 7 01:12:20.873748 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 01:12:21.301721 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 01:12:21.308652 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 7 01:12:21.314720 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 01:12:21.325652 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 01:12:21.332650 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 01:12:21.340661 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 01:12:21.349492 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 01:12:21.367662 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 01:12:21.371644 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 01:12:21.371681 kernel: ACPI: bus type drm_connector registered Jul 7 01:12:21.376650 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 01:12:21.394668 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 01:12:21.413757 kernel: loop0: detected capacity change from 0 to 142488 Jul 7 01:12:21.413816 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 01:12:21.421040 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 01:12:21.422029 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 01:12:21.422191 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 01:12:21.424538 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 01:12:21.424760 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 01:12:21.425948 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 01:12:21.427991 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 01:12:21.432724 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Jul 7 01:12:21.434141 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 01:12:21.470668 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 01:12:21.475523 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 01:12:21.490815 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 01:12:21.499173 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 01:12:21.504548 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 7 01:12:21.515077 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 01:12:21.517904 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 01:12:21.531670 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 01:12:21.548273 kernel: loop1: detected capacity change from 0 to 8 Jul 7 01:12:21.545290 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 7 01:12:21.549422 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 01:12:21.550485 systemd-journald[1082]: Time spent on flushing to /var/log/journal/442b63a357d948b981c95320b4040032 is 47.492ms for 954 entries. Jul 7 01:12:21.550485 systemd-journald[1082]: System Journal (/var/log/journal/442b63a357d948b981c95320b4040032) is 8.0M, max 584.8M, 576.8M free. Jul 7 01:12:21.609169 systemd-journald[1082]: Received client request to flush runtime journal. Jul 7 01:12:21.609233 kernel: loop2: detected capacity change from 0 to 229808 Jul 7 01:12:21.550096 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 7 01:12:21.591822 udevadm[1140]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 7 01:12:21.616371 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 01:12:21.685416 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 01:12:21.692856 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 01:12:21.700671 kernel: loop3: detected capacity change from 0 to 140768 Jul 7 01:12:21.745625 systemd-tmpfiles[1148]: ACLs are not supported, ignoring. Jul 7 01:12:21.745896 systemd-tmpfiles[1148]: ACLs are not supported, ignoring. Jul 7 01:12:21.755068 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 01:12:21.800751 kernel: loop4: detected capacity change from 0 to 142488 Jul 7 01:12:21.914670 kernel: loop5: detected capacity change from 0 to 8 Jul 7 01:12:21.943686 kernel: loop6: detected capacity change from 0 to 229808 Jul 7 01:12:22.006693 kernel: loop7: detected capacity change from 0 to 140768 Jul 7 01:12:22.085253 (sd-merge)[1152]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jul 7 01:12:22.086229 (sd-merge)[1152]: Merged extensions into '/usr'. Jul 7 01:12:22.100938 systemd[1]: Reloading requested from client PID 1107 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 01:12:22.100997 systemd[1]: Reloading... Jul 7 01:12:22.257720 zram_generator::config[1176]: No configuration found. 
Jul 7 01:12:22.510737 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 01:12:22.574205 systemd[1]: Reloading finished in 472 ms. Jul 7 01:12:22.614975 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 01:12:22.617126 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 01:12:22.630897 systemd[1]: Starting ensure-sysext.service... Jul 7 01:12:22.635626 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 01:12:22.644968 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 01:12:22.648478 systemd[1]: Reloading requested from client PID 1235 ('systemctl') (unit ensure-sysext.service)... Jul 7 01:12:22.648498 systemd[1]: Reloading... Jul 7 01:12:22.683840 systemd-tmpfiles[1236]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 01:12:22.684511 systemd-tmpfiles[1236]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 01:12:22.687508 systemd-tmpfiles[1236]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 01:12:22.687864 systemd-tmpfiles[1236]: ACLs are not supported, ignoring. Jul 7 01:12:22.687939 systemd-tmpfiles[1236]: ACLs are not supported, ignoring. Jul 7 01:12:22.690362 systemd-udevd[1237]: Using default interface naming scheme 'v255'. Jul 7 01:12:22.696282 systemd-tmpfiles[1236]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 01:12:22.696294 systemd-tmpfiles[1236]: Skipping /boot Jul 7 01:12:22.717498 systemd-tmpfiles[1236]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 01:12:22.717512 systemd-tmpfiles[1236]: Skipping /boot Jul 7 01:12:22.726685 zram_generator::config[1261]: No configuration found. Jul 7 01:12:22.739894 ldconfig[1103]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 01:12:22.998834 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1294) Jul 7 01:12:23.052622 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 01:12:23.084107 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 7 01:12:23.099659 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jul 7 01:12:23.130658 kernel: ACPI: button: Power Button [PWRF] Jul 7 01:12:23.165577 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 7 01:12:23.163331 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 7 01:12:23.163726 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 01:12:23.164847 systemd[1]: Reloading finished in 515 ms. Jul 7 01:12:23.186096 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 01:12:23.189798 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 01:12:23.202354 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 7 01:12:23.220901 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 01:12:23.240905 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jul 7 01:12:23.240993 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jul 7 01:12:23.246983 kernel: Console: switching to colour dummy device 80x25 Jul 7 01:12:23.249654 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jul 7 01:12:23.249722 kernel: [drm] features: -context_init Jul 7 01:12:23.252671 kernel: [drm] number of scanouts: 1 Jul 7 01:12:23.252715 kernel: [drm] number of cap sets: 0 Jul 7 01:12:23.257699 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jul 7 01:12:23.265741 systemd[1]: Finished ensure-sysext.service. Jul 7 01:12:23.277657 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 01:12:23.293807 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jul 7 01:12:23.293895 kernel: Console: switching to colour frame buffer device 160x50 Jul 7 01:12:23.294115 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 01:12:23.305332 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jul 7 01:12:23.312277 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 01:12:23.312557 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 01:12:23.318930 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 01:12:23.323704 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 01:12:23.331925 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 01:12:23.341610 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 01:12:23.344112 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 01:12:23.346970 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 01:12:23.350332 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 01:12:23.353827 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 01:12:23.367040 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 01:12:23.371202 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 7 01:12:23.373880 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 01:12:23.377924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 01:12:23.379899 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 01:12:23.383115 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 01:12:23.383356 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 01:12:23.384053 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 01:12:23.384272 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 01:12:23.385028 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 01:12:23.385204 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jul 7 01:12:23.386159 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 01:12:23.386273 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 01:12:23.399889 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 01:12:23.400078 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 01:12:23.402528 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 01:12:23.429101 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 01:12:23.431047 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 01:12:23.434991 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 01:12:23.439489 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 7 01:12:23.441209 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 01:12:23.447847 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 7 01:12:23.478622 augenrules[1396]: No rules Jul 7 01:12:23.479865 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 01:12:23.485005 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 01:12:23.504547 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 01:12:23.519851 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 01:12:23.526507 lvm[1394]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 01:12:23.556538 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 01:12:23.565855 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 7 01:12:23.570605 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 01:12:23.585014 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 7 01:12:23.601772 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 01:12:23.632137 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 01:12:23.756706 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 7 01:12:23.786873 systemd-resolved[1372]: Positive Trust Anchors: Jul 7 01:12:23.786897 systemd-resolved[1372]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 01:12:23.786939 systemd-resolved[1372]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 01:12:23.801447 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 01:12:23.802619 systemd[1]: Reached target time-set.target - System Time Set. 
Jul 7 01:12:23.810540 systemd-resolved[1372]: Using system hostname 'ci-4081-3-4-0-2961e92ed0.novalocal'. Jul 7 01:12:23.812005 systemd-networkd[1370]: lo: Link UP Jul 7 01:12:23.812014 systemd-networkd[1370]: lo: Gained carrier Jul 7 01:12:23.812710 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 01:12:23.814877 systemd-networkd[1370]: Enumeration completed Jul 7 01:12:23.817205 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 01:12:23.817287 systemd-networkd[1370]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 01:12:23.818887 systemd-networkd[1370]: eth0: Link UP Jul 7 01:12:23.818894 systemd-networkd[1370]: eth0: Gained carrier Jul 7 01:12:23.818907 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 01:12:23.819766 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 01:12:23.820443 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 01:12:23.821427 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 01:12:23.822874 systemd[1]: Reached target network.target - Network. Jul 7 01:12:23.823446 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 01:12:23.831702 systemd-networkd[1370]: eth0: DHCPv4 address 172.24.4.54/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jul 7 01:12:23.832837 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 01:12:23.834323 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection. Jul 7 01:12:23.836382 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 01:12:23.836436 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 01:12:23.837110 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 01:12:23.840757 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 01:12:23.843527 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 01:12:23.844352 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 01:12:23.847321 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 01:12:23.847872 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 01:12:23.847911 systemd[1]: Reached target paths.target - Path Units. Jul 7 01:12:23.848361 systemd[1]: Reached target timers.target - Timer Units. Jul 7 01:12:23.852169 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 01:12:23.857162 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 01:12:23.865568 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 01:12:23.867624 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 01:12:23.870234 systemd[1]: Reached target sockets.target - Socket Units. 
Jul 7 01:12:23.871699 systemd[1]: Reached target basic.target - Basic System. Jul 7 01:12:23.873022 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 01:12:23.873055 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 01:12:23.885112 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 01:12:23.888809 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 01:12:23.896853 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 01:12:23.906761 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 01:12:23.910810 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 01:12:23.912832 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 01:12:23.917851 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 01:12:23.922944 jq[1431]: false Jul 7 01:12:23.932828 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 01:12:23.940142 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 01:12:23.959188 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 01:12:23.977915 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 01:12:23.983298 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 01:12:23.983916 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 01:12:23.989822 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 01:12:23.997455 extend-filesystems[1432]: Found loop4 Jul 7 01:12:23.997455 extend-filesystems[1432]: Found loop5 Jul 7 01:12:23.997455 extend-filesystems[1432]: Found loop6 Jul 7 01:12:23.997455 extend-filesystems[1432]: Found loop7 Jul 7 01:12:23.997455 extend-filesystems[1432]: Found vda Jul 7 01:12:23.997455 extend-filesystems[1432]: Found vda1 Jul 7 01:12:23.997455 extend-filesystems[1432]: Found vda2 Jul 7 01:12:23.997455 extend-filesystems[1432]: Found vda3 Jul 7 01:12:23.999795 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 01:12:24.009203 dbus-daemon[1428]: [system] SELinux support is enabled Jul 7 01:12:24.048162 extend-filesystems[1432]: Found usr Jul 7 01:12:24.048162 extend-filesystems[1432]: Found vda4 Jul 7 01:12:24.048162 extend-filesystems[1432]: Found vda6 Jul 7 01:12:24.048162 extend-filesystems[1432]: Found vda7 Jul 7 01:12:24.048162 extend-filesystems[1432]: Found vda9 Jul 7 01:12:24.048162 extend-filesystems[1432]: Checking size of /dev/vda9 Jul 7 01:12:24.011009 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 01:12:24.024886 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 01:12:24.025695 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 01:12:24.085060 jq[1446]: true Jul 7 01:12:24.025999 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 01:12:24.026440 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jul 7 01:12:24.039171 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 01:12:24.039367 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 01:12:24.080000 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 01:12:24.080059 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 01:12:24.080799 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 01:12:24.080829 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 01:12:24.125360 jq[1453]: true Jul 7 01:12:24.126597 extend-filesystems[1432]: Resized partition /dev/vda9 Jul 7 01:12:24.132028 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 01:12:24.139658 extend-filesystems[1468]: resize2fs 1.47.1 (20-May-2024) Jul 7 01:12:24.167551 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jul 7 01:12:24.167654 update_engine[1445]: I20250707 01:12:24.165803 1445 main.cc:92] Flatcar Update Engine starting Jul 7 01:12:24.168330 tar[1451]: linux-amd64/LICENSE Jul 7 01:12:24.168330 tar[1451]: linux-amd64/helm Jul 7 01:12:24.179668 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jul 7 01:12:24.177227 systemd-logind[1444]: New seat seat0. Jul 7 01:12:24.262713 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1294) Jul 7 01:12:24.262808 update_engine[1445]: I20250707 01:12:24.184020 1445 update_check_scheduler.cc:74] Next update check in 4m18s Jul 7 01:12:24.183216 systemd[1]: Started update-engine.service - Update Engine. Jul 7 01:12:24.202018 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 01:12:24.269440 extend-filesystems[1468]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 7 01:12:24.269440 extend-filesystems[1468]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 7 01:12:24.269440 extend-filesystems[1468]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jul 7 01:12:24.269326 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 01:12:24.309540 extend-filesystems[1432]: Resized filesystem in /dev/vda9 Jul 7 01:12:24.269708 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button) Jul 7 01:12:24.269756 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 01:12:24.273511 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 01:12:24.319000 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 01:12:24.411270 bash[1485]: Updated "/home/core/.ssh/authorized_keys" Jul 7 01:12:24.412230 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 01:12:24.429505 systemd[1]: Starting sshkeys.service... Jul 7 01:12:24.489307 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 7 01:12:24.501168 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jul 7 01:12:24.571945 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 01:12:24.738539 locksmithd[1470]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 01:12:24.770837 containerd[1462]: time="2025-07-07T01:12:24.770744566Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 7 01:12:24.773570 sshd_keygen[1454]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 01:12:24.821625 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 01:12:24.829705 containerd[1462]: time="2025-07-07T01:12:24.829395516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 7 01:12:24.831292 containerd[1462]: time="2025-07-07T01:12:24.831258230Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 7 01:12:24.832650 containerd[1462]: time="2025-07-07T01:12:24.831379297Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 7 01:12:24.832650 containerd[1462]: time="2025-07-07T01:12:24.831408361Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 7 01:12:24.832650 containerd[1462]: time="2025-07-07T01:12:24.831616432Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 7 01:12:24.832650 containerd[1462]: time="2025-07-07T01:12:24.831661326Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 7 01:12:24.832650 containerd[1462]: time="2025-07-07T01:12:24.831738340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 01:12:24.832650 containerd[1462]: time="2025-07-07T01:12:24.831757596Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 7 01:12:24.832650 containerd[1462]: time="2025-07-07T01:12:24.831931012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 01:12:24.832650 containerd[1462]: time="2025-07-07T01:12:24.831951740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 7 01:12:24.832650 containerd[1462]: time="2025-07-07T01:12:24.831969594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 01:12:24.832650 containerd[1462]: time="2025-07-07T01:12:24.831983189Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 7 01:12:24.832650 containerd[1462]: time="2025-07-07T01:12:24.832065905Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 7 01:12:24.832650 containerd[1462]: time="2025-07-07T01:12:24.832303230Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 7 01:12:24.832943 containerd[1462]: time="2025-07-07T01:12:24.832421622Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 01:12:24.832943 containerd[1462]: time="2025-07-07T01:12:24.832441048Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 7 01:12:24.832943 containerd[1462]: time="2025-07-07T01:12:24.832533882Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 7 01:12:24.833048 containerd[1462]: time="2025-07-07T01:12:24.833029171Z" level=info msg="metadata content store policy set" policy=shared Jul 7 01:12:24.837384 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 01:12:24.844007 systemd[1]: Started sshd@0-172.24.4.54:22-172.24.4.1:50302.service - OpenSSH per-connection server daemon (172.24.4.1:50302). Jul 7 01:12:24.852227 containerd[1462]: time="2025-07-07T01:12:24.852166413Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 7 01:12:24.853856 containerd[1462]: time="2025-07-07T01:12:24.853804435Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 7 01:12:24.853921 containerd[1462]: time="2025-07-07T01:12:24.853864247Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 7 01:12:24.853921 containerd[1462]: time="2025-07-07T01:12:24.853906366Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 7 01:12:24.854039 containerd[1462]: time="2025-07-07T01:12:24.853937815Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 7 01:12:24.854514 containerd[1462]: time="2025-07-07T01:12:24.854199516Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 7 01:12:24.856265 containerd[1462]: time="2025-07-07T01:12:24.854762362Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 7 01:12:24.856265 containerd[1462]: time="2025-07-07T01:12:24.854999426Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 7 01:12:24.856265 containerd[1462]: time="2025-07-07T01:12:24.855043349Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 7 01:12:24.856265 containerd[1462]: time="2025-07-07T01:12:24.855074778Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 7 01:12:24.856265 containerd[1462]: time="2025-07-07T01:12:24.855103171Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 7 01:12:24.856265 containerd[1462]: time="2025-07-07T01:12:24.855131424Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 7 01:12:24.856265 containerd[1462]: time="2025-07-07T01:12:24.855158875Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 7 01:12:24.856265 containerd[1462]: time="2025-07-07T01:12:24.855193520Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 7 01:12:24.856265 containerd[1462]: time="2025-07-07T01:12:24.855238896Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 7 01:12:24.856265 containerd[1462]: time="2025-07-07T01:12:24.855270655Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 7 01:12:24.856265 containerd[1462]: time="2025-07-07T01:12:24.855299479Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 7 01:12:24.856265 containerd[1462]: time="2025-07-07T01:12:24.855331098Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 7 01:12:24.856265 containerd[1462]: time="2025-07-07T01:12:24.855372636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 7 01:12:24.856265 containerd[1462]: time="2025-07-07T01:12:24.855410928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 7 01:12:24.856739 containerd[1462]: time="2025-07-07T01:12:24.855438881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 7 01:12:24.856739 containerd[1462]: time="2025-07-07T01:12:24.855468105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 7 01:12:24.856739 containerd[1462]: time="2025-07-07T01:12:24.855495597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 7 01:12:24.856739 containerd[1462]: time="2025-07-07T01:12:24.855524060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 7 01:12:24.856739 containerd[1462]: time="2025-07-07T01:12:24.855544348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 7 01:12:24.856739 containerd[1462]: time="2025-07-07T01:12:24.855572852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 7 01:12:24.856739 containerd[1462]: time="2025-07-07T01:12:24.855600083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 7 01:12:24.858130 containerd[1462]: time="2025-07-07T01:12:24.857684542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 7 01:12:24.858130 containerd[1462]: time="2025-07-07T01:12:24.857728665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 7 01:12:24.858130 containerd[1462]: time="2025-07-07T01:12:24.857781003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 7 01:12:24.858130 containerd[1462]: time="2025-07-07T01:12:24.857809827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 7 01:12:24.858130 containerd[1462]: time="2025-07-07T01:12:24.857841106Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 7 01:12:24.858130 containerd[1462]: time="2025-07-07T01:12:24.857900107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 7 01:12:24.858130 containerd[1462]: time="2025-07-07T01:12:24.857937747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 7 01:12:24.858130 containerd[1462]: time="2025-07-07T01:12:24.857959759Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 7 01:12:24.858130 containerd[1462]: time="2025-07-07T01:12:24.858040901Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 7 01:12:24.858130 containerd[1462]: time="2025-07-07T01:12:24.858109700Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 7 01:12:24.858130 containerd[1462]: time="2025-07-07T01:12:24.858136420Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 7 01:12:24.858608 containerd[1462]: time="2025-07-07T01:12:24.858164733Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 7 01:12:24.858608 containerd[1462]: time="2025-07-07T01:12:24.858181865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 7 01:12:24.858608 containerd[1462]: time="2025-07-07T01:12:24.858212753Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 7 01:12:24.858608 containerd[1462]: time="2025-07-07T01:12:24.858245795Z" level=info msg="NRI interface is disabled by configuration." Jul 7 01:12:24.858608 containerd[1462]: time="2025-07-07T01:12:24.858267546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1 Jul 7 01:12:24.859687 containerd[1462]: time="2025-07-07T01:12:24.859021059Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 7 01:12:24.859687 containerd[1462]: time="2025-07-07T01:12:24.859166021Z" level=info msg="Connect containerd service" Jul 7 01:12:24.859687 containerd[1462]: time="2025-07-07T01:12:24.859238837Z" level=info msg="using legacy CRI server" Jul 7 01:12:24.859687 containerd[1462]: time="2025-07-07T01:12:24.859260528Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 01:12:24.859687 containerd[1462]: time="2025-07-07T01:12:24.859455604Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 7 01:12:24.860277 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 01:12:24.860603 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 01:12:24.875149 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
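The snapshotter probing above shows how containerd settles on its backing store: aufs fails modprobe, blockfile has no scratch file generator, and the btrfs/zfs snapshotters are skipped because /var/lib/containerd sits on ext4, which leaves overlayfs as the Snapshotter in the CRI config dump. A minimal sketch of the same filesystem-type check, assuming a longest-prefix match over /proc/mounts is close enough for illustration (containerd's real probe is more involved):

    # Sketch: find the filesystem type backing a path, mirroring why the
    # btrfs/zfs snapshotters were skipped. Longest-prefix match over
    # /proc/mounts; mount points with escaped spaces are ignored here.
    def fs_type(path="/var/lib/containerd"):
        best, best_type = "", "unknown"
        with open("/proc/mounts") as mounts:
            for line in mounts:
                _, mountpoint, fstype, *_ = line.split()
                if path.startswith(mountpoint) and len(mountpoint) > len(best):
                    best, best_type = mountpoint, fstype
        return best_type

    if __name__ == "__main__":
        print(fs_type())  # "ext4" on this node, so btrfs/zfs are skipped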
Jul 7 01:12:24.876878 containerd[1462]: time="2025-07-07T01:12:24.876750891Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 7 01:12:24.878712 containerd[1462]: time="2025-07-07T01:12:24.877898604Z" level=info msg="Start subscribing containerd event"
Jul 7 01:12:24.878712 containerd[1462]: time="2025-07-07T01:12:24.878018679Z" level=info msg="Start recovering state"
Jul 7 01:12:24.878712 containerd[1462]: time="2025-07-07T01:12:24.878110291Z" level=info msg="Start event monitor"
Jul 7 01:12:24.878712 containerd[1462]: time="2025-07-07T01:12:24.878128966Z" level=info msg="Start snapshots syncer"
Jul 7 01:12:24.878712 containerd[1462]: time="2025-07-07T01:12:24.878149825Z" level=info msg="Start cni network conf syncer for default"
Jul 7 01:12:24.878712 containerd[1462]: time="2025-07-07T01:12:24.878164152Z" level=info msg="Start streaming server"
Jul 7 01:12:24.881940 containerd[1462]: time="2025-07-07T01:12:24.881907913Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 7 01:12:24.882505 containerd[1462]: time="2025-07-07T01:12:24.882484054Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 7 01:12:24.889669 containerd[1462]: time="2025-07-07T01:12:24.889530309Z" level=info msg="containerd successfully booted in 0.119758s"
Jul 7 01:12:24.891844 systemd[1]: Started containerd.service - containerd container runtime.
Jul 7 01:12:24.899026 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 7 01:12:24.911294 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 7 01:12:24.924308 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 7 01:12:24.928891 systemd[1]: Reached target getty.target - Login Prompts.
Jul 7 01:12:24.975073 systemd-networkd[1370]: eth0: Gained IPv6LL
Jul 7 01:12:24.976066 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection.
Jul 7 01:12:24.981367 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 7 01:12:24.983288 systemd[1]: Reached target network-online.target - Network is Online.
Jul 7 01:12:24.998432 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 01:12:25.008569 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 7 01:12:25.047896 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 7 01:12:25.349691 tar[1451]: linux-amd64/README.md
Jul 7 01:12:25.371956 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 7 01:12:26.027122 sshd[1514]: Accepted publickey for core from 172.24.4.1 port 50302 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:12:26.032926 sshd[1514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:12:26.076894 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 7 01:12:26.088838 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 7 01:12:26.098420 systemd-logind[1444]: New session 1 of user core.
Jul 7 01:12:26.123820 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 7 01:12:26.135021 systemd[1]: Starting user@500.service - User Manager for UID 500...
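The level=error line above is expected on first boot: the CRI plugin wants at least one network config in /etc/cni/net.d (the NetworkPluginConfDir from the config dump), and that file normally arrives later from whichever CNI plugin the cluster installs, so the error clears on its own. A hedged sketch of writing a minimal conflist by hand; every value here (network name, bridge, subnet) is a hypothetical placeholder, not what this node actually uses:

    # Sketch: write a minimal CNI config of the kind containerd expects
    # in /etc/cni/net.d. All values are hypothetical placeholders; real
    # clusters get this file from their CNI plugin (flannel, calico, ...).
    import json, pathlib

    conf = {
        "cniVersion": "0.4.0",
        "name": "examplenet",          # hypothetical network name
        "plugins": [{
            "type": "bridge",          # reference CNI bridge plugin
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {"type": "host-local", "subnet": "10.244.0.0/24"},
        }],
    }

    path = pathlib.Path("/etc/cni/net.d/10-examplenet.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conf, indent=2))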
Jul 7 01:12:26.144087 (systemd)[1542]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 7 01:12:26.305016 systemd[1542]: Queued start job for default target default.target.
Jul 7 01:12:26.315679 systemd[1542]: Created slice app.slice - User Application Slice.
Jul 7 01:12:26.315703 systemd[1542]: Reached target paths.target - Paths.
Jul 7 01:12:26.315717 systemd[1542]: Reached target timers.target - Timers.
Jul 7 01:12:26.319371 systemd[1542]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 7 01:12:26.348107 systemd[1542]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 7 01:12:26.348340 systemd[1542]: Reached target sockets.target - Sockets.
Jul 7 01:12:26.348372 systemd[1542]: Reached target basic.target - Basic System.
Jul 7 01:12:26.348456 systemd[1542]: Reached target default.target - Main User Target.
Jul 7 01:12:26.348532 systemd[1542]: Startup finished in 195ms.
Jul 7 01:12:26.348831 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 7 01:12:26.355864 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 7 01:12:26.812618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 01:12:26.830160 (kubelet)[1557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 01:12:26.855613 systemd[1]: Started sshd@1-172.24.4.54:22-172.24.4.1:39286.service - OpenSSH per-connection server daemon (172.24.4.1:39286).
Jul 7 01:12:27.965088 kubelet[1557]: E0707 01:12:27.964967 1557 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 01:12:27.969063 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 01:12:27.969266 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 01:12:27.969693 systemd[1]: kubelet.service: Consumed 1.825s CPU time.
Jul 7 01:12:28.028874 sshd[1559]: Accepted publickey for core from 172.24.4.1 port 39286 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:12:28.033001 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:12:28.047171 systemd-logind[1444]: New session 2 of user core.
Jul 7 01:12:28.062199 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 7 01:12:28.815065 sshd[1559]: pam_unix(sshd:session): session closed for user core
Jul 7 01:12:28.827227 systemd[1]: sshd@1-172.24.4.54:22-172.24.4.1:39286.service: Deactivated successfully.
Jul 7 01:12:28.830799 systemd[1]: session-2.scope: Deactivated successfully.
Jul 7 01:12:28.832820 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit.
Jul 7 01:12:28.843918 systemd[1]: Started sshd@2-172.24.4.54:22-172.24.4.1:39296.service - OpenSSH per-connection server daemon (172.24.4.1:39296).
Jul 7 01:12:28.852333 systemd-logind[1444]: Removed session 2.
Jul 7 01:12:30.035731 sshd[1574]: Accepted publickey for core from 172.24.4.1 port 39296 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:12:30.038771 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:12:30.051231 systemd-logind[1444]: New session 3 of user core.
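The run.go:72 failure above repeats throughout this log: kubelet exits because /var/lib/kubelet/config.yaml does not exist yet, and systemd keeps rescheduling restarts until a bootstrapper such as kubeadm writes that file. A minimal sketch of a placeholder KubeletConfiguration, assuming only that the missing file is YAML of this kind; the one field shown is illustrative, not the real cluster config (kubeadm generates the full version):

    # Sketch: the file kubelet is failing to read is a KubeletConfiguration.
    # kubeadm normally generates it during init/join; the content below is
    # a hypothetical minimal stand-in, not this cluster's actual config.
    import pathlib

    config = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    """

    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(config)

The cgroupDriver value matches the SystemdCgroup:true runc option visible in the CRI config dump earlier.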
Jul 7 01:12:30.062101 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 7 01:12:30.106146 login[1521]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 7 01:12:30.108262 login[1522]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 7 01:12:30.118738 systemd-logind[1444]: New session 5 of user core.
Jul 7 01:12:30.135061 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 7 01:12:30.141567 systemd-logind[1444]: New session 4 of user core.
Jul 7 01:12:30.148073 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 7 01:12:30.660139 sshd[1574]: pam_unix(sshd:session): session closed for user core
Jul 7 01:12:30.665906 systemd[1]: sshd@2-172.24.4.54:22-172.24.4.1:39296.service: Deactivated successfully.
Jul 7 01:12:30.669821 systemd[1]: session-3.scope: Deactivated successfully.
Jul 7 01:12:30.673109 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit.
Jul 7 01:12:30.675811 systemd-logind[1444]: Removed session 3.
Jul 7 01:12:31.001694 coreos-metadata[1427]: Jul 07 01:12:31.001 WARN failed to locate config-drive, using the metadata service API instead
Jul 7 01:12:31.053081 coreos-metadata[1427]: Jul 07 01:12:31.052 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jul 7 01:12:31.242873 coreos-metadata[1427]: Jul 07 01:12:31.242 INFO Fetch successful
Jul 7 01:12:31.243338 coreos-metadata[1427]: Jul 07 01:12:31.243 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jul 7 01:12:31.256922 coreos-metadata[1427]: Jul 07 01:12:31.256 INFO Fetch successful
Jul 7 01:12:31.257626 coreos-metadata[1427]: Jul 07 01:12:31.257 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jul 7 01:12:31.270915 coreos-metadata[1427]: Jul 07 01:12:31.270 INFO Fetch successful
Jul 7 01:12:31.270915 coreos-metadata[1427]: Jul 07 01:12:31.270 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jul 7 01:12:31.285469 coreos-metadata[1427]: Jul 07 01:12:31.285 INFO Fetch successful
Jul 7 01:12:31.285469 coreos-metadata[1427]: Jul 07 01:12:31.285 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jul 7 01:12:31.299624 coreos-metadata[1427]: Jul 07 01:12:31.299 INFO Fetch successful
Jul 7 01:12:31.299624 coreos-metadata[1427]: Jul 07 01:12:31.299 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jul 7 01:12:31.314146 coreos-metadata[1427]: Jul 07 01:12:31.314 INFO Fetch successful
Jul 7 01:12:31.374146 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 7 01:12:31.377188 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
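coreos-metadata falls back from the config drive to the metadata service and walks a fixed set of endpoints; the sequence above can be replayed by hand from inside the instance. A sketch using the same link-local URLs the log shows (169.254.169.254 is only reachable from the guest):

    # Sketch: replay the metadata fetches logged above.
    from urllib.request import urlopen

    BASE = "http://169.254.169.254/latest/meta-data/"
    for key in ("hostname", "instance-id", "instance-type",
                "local-ipv4", "public-ipv4"):
        with urlopen(BASE + key, timeout=5) as resp:
            print(key, "=", resp.read().decode().strip())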
Jul 7 01:12:31.744586 coreos-metadata[1489]: Jul 07 01:12:31.744 WARN failed to locate config-drive, using the metadata service API instead
Jul 7 01:12:31.788138 coreos-metadata[1489]: Jul 07 01:12:31.788 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jul 7 01:12:31.804242 coreos-metadata[1489]: Jul 07 01:12:31.804 INFO Fetch successful
Jul 7 01:12:31.804242 coreos-metadata[1489]: Jul 07 01:12:31.804 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jul 7 01:12:31.818793 coreos-metadata[1489]: Jul 07 01:12:31.818 INFO Fetch successful
Jul 7 01:12:31.823905 unknown[1489]: wrote ssh authorized keys file for user: core
Jul 7 01:12:31.868993 update-ssh-keys[1616]: Updated "/home/core/.ssh/authorized_keys"
Jul 7 01:12:31.870114 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 7 01:12:31.876525 systemd[1]: Finished sshkeys.service.
Jul 7 01:12:31.879960 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 7 01:12:31.880605 systemd[1]: Startup finished in 1.225s (kernel) + 16.294s (initrd) + 11.827s (userspace) = 29.346s.
Jul 7 01:12:38.018287 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 7 01:12:38.029196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 01:12:38.404567 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 01:12:38.415000 (kubelet)[1627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 01:12:38.515961 kubelet[1627]: E0707 01:12:38.515844 1627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 01:12:38.523881 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 01:12:38.524229 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 01:12:40.686594 systemd[1]: Started sshd@3-172.24.4.54:22-172.24.4.1:50724.service - OpenSSH per-connection server daemon (172.24.4.1:50724).
Jul 7 01:12:41.968138 sshd[1635]: Accepted publickey for core from 172.24.4.1 port 50724 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:12:41.972167 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:12:41.987325 systemd-logind[1444]: New session 6 of user core.
Jul 7 01:12:41.994043 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 7 01:12:42.608604 sshd[1635]: pam_unix(sshd:session): session closed for user core
Jul 7 01:12:42.619855 systemd[1]: sshd@3-172.24.4.54:22-172.24.4.1:50724.service: Deactivated successfully.
Jul 7 01:12:42.623035 systemd[1]: session-6.scope: Deactivated successfully.
Jul 7 01:12:42.624606 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit.
Jul 7 01:12:42.635296 systemd[1]: Started sshd@4-172.24.4.54:22-172.24.4.1:50732.service - OpenSSH per-connection server daemon (172.24.4.1:50732).
Jul 7 01:12:42.638002 systemd-logind[1444]: Removed session 6.
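The second coreos-metadata run logged just above does the same dance for SSH keys, then update-ssh-keys rewrites /home/core/.ssh/authorized_keys. A rough sketch of that flow under the same assumptions as the previous snippet, with error handling, key merging, and file permissions omitted:

    # Sketch: fetch the instance public key the way the log shows and
    # rewrite core's authorized_keys (paths and URL taken from the log).
    from pathlib import Path
    from urllib.request import urlopen

    url = "http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key"
    with urlopen(url, timeout=5) as resp:
        key = resp.read().decode().strip()

    auth = Path("/home/core/.ssh/authorized_keys")
    auth.parent.mkdir(mode=0o700, parents=True, exist_ok=True)
    auth.write_text(key + "\n")  # replaces the file, as update-ssh-keys reports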
Jul 7 01:12:44.089576 sshd[1642]: Accepted publickey for core from 172.24.4.1 port 50732 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:12:44.093293 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:12:44.143677 systemd-logind[1444]: New session 7 of user core.
Jul 7 01:12:44.156319 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 7 01:12:44.797711 sshd[1642]: pam_unix(sshd:session): session closed for user core
Jul 7 01:12:44.811253 systemd[1]: sshd@4-172.24.4.54:22-172.24.4.1:50732.service: Deactivated successfully.
Jul 7 01:12:44.816009 systemd[1]: session-7.scope: Deactivated successfully.
Jul 7 01:12:44.821296 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit.
Jul 7 01:12:44.829443 systemd[1]: Started sshd@5-172.24.4.54:22-172.24.4.1:42574.service - OpenSSH per-connection server daemon (172.24.4.1:42574).
Jul 7 01:12:44.831989 systemd-logind[1444]: Removed session 7.
Jul 7 01:12:46.204608 sshd[1649]: Accepted publickey for core from 172.24.4.1 port 42574 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:12:46.207787 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:12:46.218893 systemd-logind[1444]: New session 8 of user core.
Jul 7 01:12:46.231988 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 7 01:12:46.851225 sshd[1649]: pam_unix(sshd:session): session closed for user core
Jul 7 01:12:46.863369 systemd[1]: sshd@5-172.24.4.54:22-172.24.4.1:42574.service: Deactivated successfully.
Jul 7 01:12:46.866920 systemd[1]: session-8.scope: Deactivated successfully.
Jul 7 01:12:46.868878 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit.
Jul 7 01:12:46.881355 systemd[1]: Started sshd@6-172.24.4.54:22-172.24.4.1:42580.service - OpenSSH per-connection server daemon (172.24.4.1:42580).
Jul 7 01:12:46.884572 systemd-logind[1444]: Removed session 8.
Jul 7 01:12:48.354341 sshd[1656]: Accepted publickey for core from 172.24.4.1 port 42580 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:12:48.357516 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:12:48.370184 systemd-logind[1444]: New session 9 of user core.
Jul 7 01:12:48.387067 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 7 01:12:48.767972 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 7 01:12:48.775016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 01:12:48.830090 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 7 01:12:48.832067 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 01:12:48.870466 sudo[1660]: pam_unix(sudo:session): session closed for user root
Jul 7 01:12:49.055126 sshd[1656]: pam_unix(sshd:session): session closed for user core
Jul 7 01:12:49.097898 systemd[1]: Started sshd@7-172.24.4.54:22-172.24.4.1:42588.service - OpenSSH per-connection server daemon (172.24.4.1:42588).
Jul 7 01:12:49.099617 systemd[1]: sshd@6-172.24.4.54:22-172.24.4.1:42580.service: Deactivated successfully.
Jul 7 01:12:49.107021 systemd[1]: session-9.scope: Deactivated successfully.
Jul 7 01:12:49.108576 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit.
Jul 7 01:12:49.111109 systemd-logind[1444]: Removed session 9.
Jul 7 01:12:49.412119 (kubelet)[1674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 01:12:49.412125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 01:12:49.530543 kubelet[1674]: E0707 01:12:49.530435 1674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 01:12:49.534017 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 01:12:49.534362 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 01:12:50.330476 sshd[1665]: Accepted publickey for core from 172.24.4.1 port 42588 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:12:50.333959 sshd[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:12:50.346574 systemd-logind[1444]: New session 10 of user core.
Jul 7 01:12:50.355969 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 7 01:12:50.761767 sudo[1683]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 7 01:12:50.763551 sudo[1683]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 01:12:50.773780 sudo[1683]: pam_unix(sudo:session): session closed for user root
Jul 7 01:12:50.786471 sudo[1682]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 7 01:12:50.787244 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 01:12:50.818383 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 7 01:12:50.834091 auditctl[1686]: No rules
Jul 7 01:12:50.835004 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 01:12:50.835489 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 7 01:12:50.845361 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 7 01:12:50.929266 augenrules[1704]: No rules
Jul 7 01:12:50.931546 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 7 01:12:50.933730 sudo[1682]: pam_unix(sudo:session): session closed for user root
Jul 7 01:12:51.160174 sshd[1665]: pam_unix(sshd:session): session closed for user core
Jul 7 01:12:51.174371 systemd[1]: sshd@7-172.24.4.54:22-172.24.4.1:42588.service: Deactivated successfully.
Jul 7 01:12:51.178212 systemd[1]: session-10.scope: Deactivated successfully.
Jul 7 01:12:51.183072 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit.
Jul 7 01:12:51.190279 systemd[1]: Started sshd@8-172.24.4.54:22-172.24.4.1:42590.service - OpenSSH per-connection server daemon (172.24.4.1:42590).
Jul 7 01:12:51.193880 systemd-logind[1444]: Removed session 10.
Jul 7 01:12:52.377268 sshd[1712]: Accepted publickey for core from 172.24.4.1 port 42590 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:12:52.381000 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:12:52.393594 systemd-logind[1444]: New session 11 of user core.
Jul 7 01:12:52.406136 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 7 01:12:52.848446 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 7 01:12:52.849255 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 01:12:53.944014 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 7 01:12:53.955110 (dockerd)[1731]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 7 01:12:54.655354 dockerd[1731]: time="2025-07-07T01:12:54.655250931Z" level=info msg="Starting up"
Jul 7 01:12:54.885093 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2590247905-merged.mount: Deactivated successfully.
Jul 7 01:12:54.917371 dockerd[1731]: time="2025-07-07T01:12:54.916739753Z" level=info msg="Loading containers: start."
Jul 7 01:12:55.102715 kernel: Initializing XFRM netlink socket
Jul 7 01:12:55.756973 systemd-resolved[1372]: Clock change detected. Flushing caches.
Jul 7 01:12:55.757971 systemd-timesyncd[1374]: Contacted time server 67.217.246.127:123 (2.flatcar.pool.ntp.org).
Jul 7 01:12:55.758042 systemd-timesyncd[1374]: Initial clock synchronization to Mon 2025-07-07 01:12:55.756899 UTC.
Jul 7 01:12:55.798089 systemd-networkd[1370]: docker0: Link UP
Jul 7 01:12:55.824294 dockerd[1731]: time="2025-07-07T01:12:55.824213718Z" level=info msg="Loading containers: done."
Jul 7 01:12:55.852068 dockerd[1731]: time="2025-07-07T01:12:55.851816287Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 7 01:12:55.852068 dockerd[1731]: time="2025-07-07T01:12:55.851959465Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jul 7 01:12:55.852068 dockerd[1731]: time="2025-07-07T01:12:55.852079320Z" level=info msg="Daemon has completed initialization"
Jul 7 01:12:55.905791 dockerd[1731]: time="2025-07-07T01:12:55.904500165Z" level=info msg="API listen on /run/docker.sock"
Jul 7 01:12:55.906425 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 7 01:12:56.484590 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1697291284-merged.mount: Deactivated successfully.
Jul 7 01:12:57.530921 containerd[1462]: time="2025-07-07T01:12:57.530057878Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 7 01:12:58.429372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2198569187.mount: Deactivated successfully.
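Once dockerd logs "API listen on /run/docker.sock", the daemon can be exercised directly over that UNIX socket with a raw HTTP request. A sketch that assumes nothing beyond the socket path reported above (no Docker SDK; /version is a stable, unversioned endpoint):

    # Sketch: query the Docker API over the socket the daemon just opened.
    # Plain HTTP/1.0 keeps the example dependency-free; the server closes
    # the connection after replying, which ends the recv loop.
    import socket

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/run/docker.sock")
    s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk
    s.close()
    print(reply.decode(errors="replace"))  # headers + JSON version payload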
Jul 7 01:13:00.339212 containerd[1462]: time="2025-07-07T01:13:00.338912525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:13:00.340526 containerd[1462]: time="2025-07-07T01:13:00.340376170Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079107"
Jul 7 01:13:00.341743 containerd[1462]: time="2025-07-07T01:13:00.341678011Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:13:00.345356 containerd[1462]: time="2025-07-07T01:13:00.345300034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:13:00.347023 containerd[1462]: time="2025-07-07T01:13:00.346597508Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 2.816259805s"
Jul 7 01:13:00.347023 containerd[1462]: time="2025-07-07T01:13:00.346673420Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\""
Jul 7 01:13:00.347640 containerd[1462]: time="2025-07-07T01:13:00.347414099Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 7 01:13:00.376408 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 7 01:13:00.384383 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 01:13:00.804777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 01:13:00.815211 (kubelet)[1933]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 01:13:00.885745 kubelet[1933]: E0707 01:13:00.884913 1933 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 01:13:00.889378 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 01:13:00.889771 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 01:13:03.352198 containerd[1462]: time="2025-07-07T01:13:03.351628072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:13:03.357519 containerd[1462]: time="2025-07-07T01:13:03.357096318Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018954"
Jul 7 01:13:03.359751 containerd[1462]: time="2025-07-07T01:13:03.359596607Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:13:03.371670 containerd[1462]: time="2025-07-07T01:13:03.371512455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:13:03.375960 containerd[1462]: time="2025-07-07T01:13:03.374944632Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 3.027419033s"
Jul 7 01:13:03.375960 containerd[1462]: time="2025-07-07T01:13:03.375290601Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\""
Jul 7 01:13:03.378379 containerd[1462]: time="2025-07-07T01:13:03.378214405Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 7 01:13:05.237446 containerd[1462]: time="2025-07-07T01:13:05.237291002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:13:05.238944 containerd[1462]: time="2025-07-07T01:13:05.238895321Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155063"
Jul 7 01:13:05.240316 containerd[1462]: time="2025-07-07T01:13:05.240231577Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:13:05.248875 containerd[1462]: time="2025-07-07T01:13:05.248003193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:13:05.251781 containerd[1462]: time="2025-07-07T01:13:05.251748277Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.873431039s"
Jul 7 01:13:05.252128 containerd[1462]: time="2025-07-07T01:13:05.252096931Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\""
Jul 7 01:13:05.252637 containerd[1462]: time="2025-07-07T01:13:05.252601066Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 7 01:13:06.866314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3779798366.mount: Deactivated successfully.
Jul 7 01:13:08.080067 containerd[1462]: time="2025-07-07T01:13:08.078765938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:13:08.080067 containerd[1462]: time="2025-07-07T01:13:08.080104769Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892754"
Jul 7 01:13:08.082658 containerd[1462]: time="2025-07-07T01:13:08.082249291Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:13:08.085492 containerd[1462]: time="2025-07-07T01:13:08.085448621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:13:08.086424 containerd[1462]: time="2025-07-07T01:13:08.086384005Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 2.833637877s"
Jul 7 01:13:08.086487 containerd[1462]: time="2025-07-07T01:13:08.086424371Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\""
Jul 7 01:13:08.088235 containerd[1462]: time="2025-07-07T01:13:08.088198248Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 7 01:13:09.039293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3091155039.mount: Deactivated successfully.
Jul 7 01:13:10.146242 update_engine[1445]: I20250707 01:13:10.146024 1445 update_attempter.cc:509] Updating boot flags...
Jul 7 01:13:10.213925 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2016)
Jul 7 01:13:10.283130 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2014)
Jul 7 01:13:10.636714 containerd[1462]: time="2025-07-07T01:13:10.636614803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:13:10.639205 containerd[1462]: time="2025-07-07T01:13:10.639154065Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
Jul 7 01:13:10.640667 containerd[1462]: time="2025-07-07T01:13:10.640621086Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:13:10.646113 containerd[1462]: time="2025-07-07T01:13:10.646028428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:13:10.647480 containerd[1462]: time="2025-07-07T01:13:10.647433193Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.559175103s"
Jul 7 01:13:10.647551 containerd[1462]: time="2025-07-07T01:13:10.647495049Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jul 7 01:13:10.648655 containerd[1462]: time="2025-07-07T01:13:10.648559525Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 7 01:13:11.127463 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 7 01:13:11.140296 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 01:13:11.249568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount541826485.mount: Deactivated successfully.
Jul 7 01:13:11.290777 containerd[1462]: time="2025-07-07T01:13:11.290698867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:13:11.411331 containerd[1462]: time="2025-07-07T01:13:11.411104348Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jul 7 01:13:11.512093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 01:13:11.517027 (kubelet)[2035]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 01:13:11.621893 containerd[1462]: time="2025-07-07T01:13:11.620025148Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:13:11.632540 kubelet[2035]: E0707 01:13:11.632470 2035 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 01:13:11.633171 containerd[1462]: time="2025-07-07T01:13:11.633095452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:13:11.636163 containerd[1462]: time="2025-07-07T01:13:11.636052478Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 987.43776ms"
Jul 7 01:13:11.636301 containerd[1462]: time="2025-07-07T01:13:11.636152646Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 7 01:13:11.637428 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 01:13:11.637795 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 01:13:11.638350 containerd[1462]: time="2025-07-07T01:13:11.638294683Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 7 01:13:12.263325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2877233924.mount: Deactivated successfully.
Jul 7 01:13:15.551789 containerd[1462]: time="2025-07-07T01:13:15.551603112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:13:15.580074 containerd[1462]: time="2025-07-07T01:13:15.579940609Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247183"
Jul 7 01:13:15.658457 containerd[1462]: time="2025-07-07T01:13:15.658211145Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:13:15.736028 containerd[1462]: time="2025-07-07T01:13:15.735406144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:13:15.741339 containerd[1462]: time="2025-07-07T01:13:15.740600316Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.102215083s"
Jul 7 01:13:15.741339 containerd[1462]: time="2025-07-07T01:13:15.740701476Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jul 7 01:13:20.579727 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 01:13:20.594081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 01:13:20.638817 systemd[1]: Reloading requested from client PID 2126 ('systemctl') (unit session-11.scope)...
Jul 7 01:13:20.638877 systemd[1]: Reloading...
Jul 7 01:13:20.755901 zram_generator::config[2163]: No configuration found.
Jul 7 01:13:20.914378 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 01:13:21.006125 systemd[1]: Reloading finished in 366 ms.
Jul 7 01:13:21.066998 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 7 01:13:21.067454 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 7 01:13:21.067722 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 01:13:21.072094 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 01:13:22.400218 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 01:13:22.419616 (kubelet)[2230]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 7 01:13:22.515634 kubelet[2230]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 01:13:22.515634 kubelet[2230]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 7 01:13:22.515634 kubelet[2230]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 01:13:22.516588 kubelet[2230]: I0707 01:13:22.515750 2230 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 7 01:13:22.864910 kubelet[2230]: I0707 01:13:22.864697 2230 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 7 01:13:22.864910 kubelet[2230]: I0707 01:13:22.864732 2230 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 7 01:13:22.865256 kubelet[2230]: I0707 01:13:22.865030 2230 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 7 01:13:22.907473 kubelet[2230]: I0707 01:13:22.906052 2230 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 01:13:22.908689 kubelet[2230]: E0707 01:13:22.908631 2230 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.24.4.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.54:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 7 01:13:22.925210 kubelet[2230]: E0707 01:13:22.925117 2230 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 7 01:13:22.926364 kubelet[2230]: I0707 01:13:22.925645 2230 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 7 01:13:22.947364 kubelet[2230]: I0707 01:13:22.947306 2230 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 7 01:13:22.948703 kubelet[2230]: I0707 01:13:22.948623 2230 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 7 01:13:22.949667 kubelet[2230]: I0707 01:13:22.948971 2230 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-4-0-2961e92ed0.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 7 01:13:22.951378 kubelet[2230]: I0707 01:13:22.950663 2230 topology_manager.go:138] "Creating topology manager with none policy"
Jul 7 01:13:22.951378 kubelet[2230]: I0707 01:13:22.950728 2230 container_manager_linux.go:303] "Creating device plugin manager"
Jul 7 01:13:22.953666 kubelet[2230]: I0707 01:13:22.953473 2230 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 01:13:22.959725 kubelet[2230]: I0707 01:13:22.959682 2230 kubelet.go:480] "Attempting to sync node with API server"
Jul 7 01:13:22.960238 kubelet[2230]: I0707 01:13:22.959969 2230 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 7 01:13:22.962148 kubelet[2230]: I0707 01:13:22.961993 2230 kubelet.go:386] "Adding apiserver pod source"
Jul 7 01:13:22.967588 kubelet[2230]: I0707 01:13:22.967402 2230 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 7 01:13:22.984894 kubelet[2230]: E0707 01:13:22.978098 2230 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.24.4.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 7 01:13:22.984894 kubelet[2230]: E0707 01:13:22.978447 2230 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.24.4.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-0-2961e92ed0.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 7 01:13:22.987693 kubelet[2230]: I0707 01:13:22.987624 2230 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 7 01:13:22.989902 kubelet[2230]: I0707 01:13:22.988907 2230 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 7 01:13:22.990650 kubelet[2230]: W0707 01:13:22.990591 2230 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 7 01:13:22.997899 kubelet[2230]: I0707 01:13:22.997340 2230 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 7 01:13:22.997899 kubelet[2230]: I0707 01:13:22.997480 2230 server.go:1289] "Started kubelet"
Jul 7 01:13:23.001591 kubelet[2230]: I0707 01:13:23.001555 2230 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 7 01:13:23.004112 kubelet[2230]: I0707 01:13:23.004092 2230 server.go:317] "Adding debug handlers to kubelet server"
Jul 7 01:13:23.008092 kubelet[2230]: I0707 01:13:23.007937 2230 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 7 01:13:23.008582 kubelet[2230]: I0707 01:13:23.008550 2230 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 7 01:13:23.013913 kubelet[2230]: E0707 01:13:23.008852 2230 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.54:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.54:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-4-0-2961e92ed0.novalocal.184fd3060c24bdd3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-4-0-2961e92ed0.novalocal,UID:ci-4081-3-4-0-2961e92ed0.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-0-2961e92ed0.novalocal,},FirstTimestamp:2025-07-07 01:13:22.997411283 +0000 UTC m=+0.560218641,LastTimestamp:2025-07-07 01:13:22.997411283 +0000 UTC m=+0.560218641,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-0-2961e92ed0.novalocal,}"
Jul 7 01:13:23.015119 kubelet[2230]: I0707 01:13:23.014312 2230 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 7 01:13:23.016205 kubelet[2230]: I0707 01:13:23.016170 2230 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 7 01:13:23.031159 kubelet[2230]: I0707 01:13:23.031091 2230 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 7 01:13:23.031905 kubelet[2230]: E0707 01:13:23.031593 2230 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-0-2961e92ed0.novalocal\" not found"
Jul 7 01:13:23.031905 kubelet[2230]: I0707 01:13:23.031679 2230 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 7 01:13:23.032815 kubelet[2230]: I0707 01:13:23.032783 2230 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 7 01:13:23.033218 kubelet[2230]: I0707 01:13:23.033193 2230 reconciler.go:26] "Reconciler: start to sync state"
Jul 7 01:13:23.034145 kubelet[2230]: E0707 01:13:23.034105 2230 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.24.4.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 7 01:13:23.035631 kubelet[2230]: I0707 01:13:23.035578 2230 factory.go:223] Registration of the systemd container factory successfully
Jul 7 01:13:23.035846 kubelet[2230]: I0707 01:13:23.035673 2230 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 7 01:13:23.047371 kubelet[2230]: E0707 01:13:23.047329 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-0-2961e92ed0.novalocal?timeout=10s\": dial tcp 172.24.4.54:6443: connect: connection refused" interval="200ms"
Jul 7 01:13:23.049138 kubelet[2230]: E0707 01:13:23.048660 2230 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 7 01:13:23.049138 kubelet[2230]: I0707 01:13:23.048666 2230 factory.go:223] Registration of the containerd container factory successfully
Jul 7 01:13:23.061903 kubelet[2230]: I0707 01:13:23.061849 2230 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 7 01:13:23.062158 kubelet[2230]: I0707 01:13:23.062142 2230 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 7 01:13:23.062290 kubelet[2230]: I0707 01:13:23.062275 2230 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 7 01:13:23.062377 kubelet[2230]: I0707 01:13:23.062365 2230 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 01:13:23.062547 kubelet[2230]: E0707 01:13:23.062519 2230 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 01:13:23.071746 kubelet[2230]: E0707 01:13:23.071712 2230 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.24.4.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 7 01:13:23.078066 kubelet[2230]: I0707 01:13:23.077948 2230 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 01:13:23.078225 kubelet[2230]: I0707 01:13:23.078211 2230 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 01:13:23.078357 kubelet[2230]: I0707 01:13:23.078343 2230 state_mem.go:36] "Initialized new in-memory state store" Jul 7 01:13:23.086026 kubelet[2230]: I0707 01:13:23.086007 2230 policy_none.go:49] "None policy: Start" Jul 7 01:13:23.086167 kubelet[2230]: I0707 01:13:23.086152 2230 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 01:13:23.086305 kubelet[2230]: I0707 01:13:23.086293 2230 state_mem.go:35] "Initializing new in-memory state store" Jul 7 01:13:23.094499 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 01:13:23.108756 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 01:13:23.112986 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 01:13:23.121990 kubelet[2230]: E0707 01:13:23.121896 2230 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 01:13:23.122177 kubelet[2230]: I0707 01:13:23.122150 2230 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 01:13:23.122242 kubelet[2230]: I0707 01:13:23.122178 2230 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 01:13:23.122697 kubelet[2230]: I0707 01:13:23.122672 2230 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 01:13:23.125298 kubelet[2230]: E0707 01:13:23.125213 2230 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 01:13:23.125298 kubelet[2230]: E0707 01:13:23.125291 2230 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-4-0-2961e92ed0.novalocal\" not found" Jul 7 01:13:23.189615 systemd[1]: Created slice kubepods-burstable-podf1b00cfc6c85dfe639649a5e83ae72a3.slice - libcontainer container kubepods-burstable-podf1b00cfc6c85dfe639649a5e83ae72a3.slice. Jul 7 01:13:23.208988 kubelet[2230]: E0707 01:13:23.208587 2230 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-0-2961e92ed0.novalocal\" not found" node="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:23.219697 systemd[1]: Created slice kubepods-burstable-pod0b2e41f2a0183ea4ce5d03caf464b0f9.slice - libcontainer container kubepods-burstable-pod0b2e41f2a0183ea4ce5d03caf464b0f9.slice. 
Jul 7 01:13:23.228760 kubelet[2230]: E0707 01:13:23.228297 2230 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-0-2961e92ed0.novalocal\" not found" node="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:23.229310 kubelet[2230]: I0707 01:13:23.229230 2230 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:23.230196 kubelet[2230]: E0707 01:13:23.230115 2230 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.54:6443/api/v1/nodes\": dial tcp 172.24.4.54:6443: connect: connection refused" node="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:23.236387 systemd[1]: Created slice kubepods-burstable-pod502e76a70bd5eb6e3bcf0fcb81811131.slice - libcontainer container kubepods-burstable-pod502e76a70bd5eb6e3bcf0fcb81811131.slice. Jul 7 01:13:23.240604 kubelet[2230]: E0707 01:13:23.240514 2230 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-0-2961e92ed0.novalocal\" not found" node="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:23.248280 kubelet[2230]: E0707 01:13:23.248187 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-0-2961e92ed0.novalocal?timeout=10s\": dial tcp 172.24.4.54:6443: connect: connection refused" interval="400ms" Jul 7 01:13:23.334469 kubelet[2230]: I0707 01:13:23.334221 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b2e41f2a0183ea4ce5d03caf464b0f9-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal\" (UID: \"0b2e41f2a0183ea4ce5d03caf464b0f9\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:23.334469 kubelet[2230]: I0707 01:13:23.334310 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b2e41f2a0183ea4ce5d03caf464b0f9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal\" (UID: \"0b2e41f2a0183ea4ce5d03caf464b0f9\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:23.334469 kubelet[2230]: I0707 01:13:23.334367 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1b00cfc6c85dfe639649a5e83ae72a3-k8s-certs\") pod \"kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal\" (UID: \"f1b00cfc6c85dfe639649a5e83ae72a3\") " pod="kube-system/kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:23.334469 kubelet[2230]: I0707 01:13:23.334421 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1b00cfc6c85dfe639649a5e83ae72a3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal\" (UID: \"f1b00cfc6c85dfe639649a5e83ae72a3\") " pod="kube-system/kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:23.335091 kubelet[2230]: I0707 01:13:23.334493 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/0b2e41f2a0183ea4ce5d03caf464b0f9-ca-certs\") pod \"kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal\" (UID: \"0b2e41f2a0183ea4ce5d03caf464b0f9\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:23.335091 kubelet[2230]: I0707 01:13:23.334540 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0b2e41f2a0183ea4ce5d03caf464b0f9-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal\" (UID: \"0b2e41f2a0183ea4ce5d03caf464b0f9\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:23.335091 kubelet[2230]: I0707 01:13:23.334600 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b2e41f2a0183ea4ce5d03caf464b0f9-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal\" (UID: \"0b2e41f2a0183ea4ce5d03caf464b0f9\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:23.335091 kubelet[2230]: I0707 01:13:23.334649 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/502e76a70bd5eb6e3bcf0fcb81811131-kubeconfig\") pod \"kube-scheduler-ci-4081-3-4-0-2961e92ed0.novalocal\" (UID: \"502e76a70bd5eb6e3bcf0fcb81811131\") " pod="kube-system/kube-scheduler-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:23.335091 kubelet[2230]: I0707 01:13:23.334748 2230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1b00cfc6c85dfe639649a5e83ae72a3-ca-certs\") pod \"kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal\" (UID: \"f1b00cfc6c85dfe639649a5e83ae72a3\") " pod="kube-system/kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:23.434562 kubelet[2230]: I0707 01:13:23.434440 2230 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:23.436699 kubelet[2230]: E0707 01:13:23.436603 2230 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.54:6443/api/v1/nodes\": dial tcp 172.24.4.54:6443: connect: connection refused" node="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:23.512313 containerd[1462]: time="2025-07-07T01:13:23.511979305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal,Uid:f1b00cfc6c85dfe639649a5e83ae72a3,Namespace:kube-system,Attempt:0,}" Jul 7 01:13:23.531199 containerd[1462]: time="2025-07-07T01:13:23.531002794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal,Uid:0b2e41f2a0183ea4ce5d03caf464b0f9,Namespace:kube-system,Attempt:0,}" Jul 7 01:13:23.542620 containerd[1462]: time="2025-07-07T01:13:23.542289342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-4-0-2961e92ed0.novalocal,Uid:502e76a70bd5eb6e3bcf0fcb81811131,Namespace:kube-system,Attempt:0,}" Jul 7 01:13:23.650030 kubelet[2230]: E0707 01:13:23.649313 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-0-2961e92ed0.novalocal?timeout=10s\": dial tcp 
172.24.4.54:6443: connect: connection refused" interval="800ms" Jul 7 01:13:23.842217 kubelet[2230]: I0707 01:13:23.842023 2230 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:23.845514 kubelet[2230]: E0707 01:13:23.845291 2230 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.54:6443/api/v1/nodes\": dial tcp 172.24.4.54:6443: connect: connection refused" node="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:23.976841 kubelet[2230]: E0707 01:13:23.976724 2230 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.24.4.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 7 01:13:24.049750 kubelet[2230]: E0707 01:13:24.049598 2230 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.24.4.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-0-2961e92ed0.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 7 01:13:24.055681 kubelet[2230]: E0707 01:13:24.055580 2230 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.24.4.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 7 01:13:24.171238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1579245735.mount: Deactivated successfully. 
Jul 7 01:13:24.182996 containerd[1462]: time="2025-07-07T01:13:24.182813124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 01:13:24.186629 containerd[1462]: time="2025-07-07T01:13:24.186304673Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 01:13:24.188426 containerd[1462]: time="2025-07-07T01:13:24.188251784Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 01:13:24.190882 containerd[1462]: time="2025-07-07T01:13:24.190785386Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 01:13:24.194414 containerd[1462]: time="2025-07-07T01:13:24.193162084Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 01:13:24.197853 containerd[1462]: time="2025-07-07T01:13:24.197772040Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jul 7 01:13:24.200328 containerd[1462]: time="2025-07-07T01:13:24.200253644Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 01:13:24.202307 containerd[1462]: time="2025-07-07T01:13:24.202242654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 01:13:24.206952 containerd[1462]: time="2025-07-07T01:13:24.206837352Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 694.216804ms" Jul 7 01:13:24.220778 containerd[1462]: time="2025-07-07T01:13:24.220679603Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 689.122139ms" Jul 7 01:13:24.240316 containerd[1462]: time="2025-07-07T01:13:24.240235219Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 697.769256ms" Jul 7 01:13:24.469056 kubelet[2230]: E0707 01:13:24.464573 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-0-2961e92ed0.novalocal?timeout=10s\": dial tcp 172.24.4.54:6443: connect: connection refused" interval="1.6s" Jul 
7 01:13:24.469056 kubelet[2230]: E0707 01:13:24.464765 2230 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.24.4.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 7 01:13:24.529254 containerd[1462]: time="2025-07-07T01:13:24.528755860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:13:24.529254 containerd[1462]: time="2025-07-07T01:13:24.528837904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:13:24.529254 containerd[1462]: time="2025-07-07T01:13:24.528899269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:13:24.529254 containerd[1462]: time="2025-07-07T01:13:24.528993846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:13:24.578710 systemd[1]: Started cri-containerd-ff6b6ca612ce33f2a08e7b5068edaac240189dad7e2a45409c986d9375567d84.scope - libcontainer container ff6b6ca612ce33f2a08e7b5068edaac240189dad7e2a45409c986d9375567d84. Jul 7 01:13:24.584205 containerd[1462]: time="2025-07-07T01:13:24.577985590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:13:24.584205 containerd[1462]: time="2025-07-07T01:13:24.578077613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:13:24.584205 containerd[1462]: time="2025-07-07T01:13:24.578092731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:13:24.584205 containerd[1462]: time="2025-07-07T01:13:24.578216002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:13:24.593650 containerd[1462]: time="2025-07-07T01:13:24.593331832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:13:24.593650 containerd[1462]: time="2025-07-07T01:13:24.593394069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:13:24.593650 containerd[1462]: time="2025-07-07T01:13:24.593414106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:13:24.593650 containerd[1462]: time="2025-07-07T01:13:24.593501049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:13:24.630060 systemd[1]: Started cri-containerd-cee74cdb31d23b8aef020cd66531d5fe943a2ed547a78bf119250ecbd5cab3fb.scope - libcontainer container cee74cdb31d23b8aef020cd66531d5fe943a2ed547a78bf119250ecbd5cab3fb. Jul 7 01:13:24.640825 systemd[1]: Started cri-containerd-d8a1c6d492743ac9e3a24aca0d613e9e0246c2bcf43b8197ae3ba4280445f0c2.scope - libcontainer container d8a1c6d492743ac9e3a24aca0d613e9e0246c2bcf43b8197ae3ba4280445f0c2. 
Jul 7 01:13:24.648990 kubelet[2230]: I0707 01:13:24.648954 2230 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:24.649340 kubelet[2230]: E0707 01:13:24.649310 2230 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.54:6443/api/v1/nodes\": dial tcp 172.24.4.54:6443: connect: connection refused" node="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:24.703881 containerd[1462]: time="2025-07-07T01:13:24.703366893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal,Uid:0b2e41f2a0183ea4ce5d03caf464b0f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff6b6ca612ce33f2a08e7b5068edaac240189dad7e2a45409c986d9375567d84\"" Jul 7 01:13:24.719963 containerd[1462]: time="2025-07-07T01:13:24.719005523Z" level=info msg="CreateContainer within sandbox \"ff6b6ca612ce33f2a08e7b5068edaac240189dad7e2a45409c986d9375567d84\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 01:13:24.721670 containerd[1462]: time="2025-07-07T01:13:24.721557008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal,Uid:f1b00cfc6c85dfe639649a5e83ae72a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8a1c6d492743ac9e3a24aca0d613e9e0246c2bcf43b8197ae3ba4280445f0c2\"" Jul 7 01:13:24.730557 containerd[1462]: time="2025-07-07T01:13:24.730470996Z" level=info msg="CreateContainer within sandbox \"d8a1c6d492743ac9e3a24aca0d613e9e0246c2bcf43b8197ae3ba4280445f0c2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 01:13:24.738068 containerd[1462]: time="2025-07-07T01:13:24.737990108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-4-0-2961e92ed0.novalocal,Uid:502e76a70bd5eb6e3bcf0fcb81811131,Namespace:kube-system,Attempt:0,} returns sandbox id \"cee74cdb31d23b8aef020cd66531d5fe943a2ed547a78bf119250ecbd5cab3fb\"" Jul 7 01:13:24.747014 containerd[1462]: time="2025-07-07T01:13:24.746964289Z" level=info msg="CreateContainer within sandbox \"cee74cdb31d23b8aef020cd66531d5fe943a2ed547a78bf119250ecbd5cab3fb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 01:13:24.758002 containerd[1462]: time="2025-07-07T01:13:24.757948090Z" level=info msg="CreateContainer within sandbox \"ff6b6ca612ce33f2a08e7b5068edaac240189dad7e2a45409c986d9375567d84\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4716a5a8f00dbfd805396ce72004933f6bdaf01e5f0a705acc7e6b1bf1613d66\"" Jul 7 01:13:24.760215 containerd[1462]: time="2025-07-07T01:13:24.759039557Z" level=info msg="StartContainer for \"4716a5a8f00dbfd805396ce72004933f6bdaf01e5f0a705acc7e6b1bf1613d66\"" Jul 7 01:13:24.763891 containerd[1462]: time="2025-07-07T01:13:24.763626229Z" level=info msg="CreateContainer within sandbox \"d8a1c6d492743ac9e3a24aca0d613e9e0246c2bcf43b8197ae3ba4280445f0c2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"16c191eef0dc95c29491cc72a69479dc84b043fd8fc1b915b2486aa3a216ad30\"" Jul 7 01:13:24.765353 containerd[1462]: time="2025-07-07T01:13:24.764279674Z" level=info msg="StartContainer for \"16c191eef0dc95c29491cc72a69479dc84b043fd8fc1b915b2486aa3a216ad30\"" Jul 7 01:13:24.780670 containerd[1462]: time="2025-07-07T01:13:24.780627575Z" level=info msg="CreateContainer within sandbox \"cee74cdb31d23b8aef020cd66531d5fe943a2ed547a78bf119250ecbd5cab3fb\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9e1fe554713a66fa64eab223babd63af6934ee994d90d3a4bb155c9c83eb59a7\"" Jul 7 01:13:24.781667 containerd[1462]: time="2025-07-07T01:13:24.781632369Z" level=info msg="StartContainer for \"9e1fe554713a66fa64eab223babd63af6934ee994d90d3a4bb155c9c83eb59a7\"" Jul 7 01:13:24.794060 systemd[1]: Started cri-containerd-4716a5a8f00dbfd805396ce72004933f6bdaf01e5f0a705acc7e6b1bf1613d66.scope - libcontainer container 4716a5a8f00dbfd805396ce72004933f6bdaf01e5f0a705acc7e6b1bf1613d66. Jul 7 01:13:24.825008 systemd[1]: Started cri-containerd-16c191eef0dc95c29491cc72a69479dc84b043fd8fc1b915b2486aa3a216ad30.scope - libcontainer container 16c191eef0dc95c29491cc72a69479dc84b043fd8fc1b915b2486aa3a216ad30. Jul 7 01:13:24.835036 systemd[1]: Started cri-containerd-9e1fe554713a66fa64eab223babd63af6934ee994d90d3a4bb155c9c83eb59a7.scope - libcontainer container 9e1fe554713a66fa64eab223babd63af6934ee994d90d3a4bb155c9c83eb59a7. Jul 7 01:13:24.883658 containerd[1462]: time="2025-07-07T01:13:24.883607864Z" level=info msg="StartContainer for \"4716a5a8f00dbfd805396ce72004933f6bdaf01e5f0a705acc7e6b1bf1613d66\" returns successfully" Jul 7 01:13:24.912895 containerd[1462]: time="2025-07-07T01:13:24.912818499Z" level=info msg="StartContainer for \"16c191eef0dc95c29491cc72a69479dc84b043fd8fc1b915b2486aa3a216ad30\" returns successfully" Jul 7 01:13:24.913079 containerd[1462]: time="2025-07-07T01:13:24.912929738Z" level=info msg="StartContainer for \"9e1fe554713a66fa64eab223babd63af6934ee994d90d3a4bb155c9c83eb59a7\" returns successfully" Jul 7 01:13:24.961354 kubelet[2230]: E0707 01:13:24.961302 2230 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.24.4.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.54:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 7 01:13:25.088081 kubelet[2230]: E0707 01:13:25.087483 2230 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-0-2961e92ed0.novalocal\" not found" node="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:25.092363 kubelet[2230]: E0707 01:13:25.091926 2230 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-0-2961e92ed0.novalocal\" not found" node="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:25.092830 kubelet[2230]: E0707 01:13:25.092809 2230 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-0-2961e92ed0.novalocal\" not found" node="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:26.098472 kubelet[2230]: E0707 01:13:26.098198 2230 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-0-2961e92ed0.novalocal\" not found" node="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:26.098472 kubelet[2230]: E0707 01:13:26.098198 2230 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-0-2961e92ed0.novalocal\" not found" node="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:26.255711 kubelet[2230]: I0707 01:13:26.255262 2230 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:27.462640 kubelet[2230]: E0707 
01:13:27.462507 2230 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-4-0-2961e92ed0.novalocal\" not found" node="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:27.626042 kubelet[2230]: I0707 01:13:27.625960 2230 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:27.638019 kubelet[2230]: I0707 01:13:27.637956 2230 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:27.687759 kubelet[2230]: E0707 01:13:27.687699 2230 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:27.687759 kubelet[2230]: I0707 01:13:27.687754 2230 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:27.697268 kubelet[2230]: E0707 01:13:27.697225 2230 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:27.697268 kubelet[2230]: I0707 01:13:27.697254 2230 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:27.701682 kubelet[2230]: E0707 01:13:27.701629 2230 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-4-0-2961e92ed0.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:27.984844 kubelet[2230]: I0707 01:13:27.982161 2230 apiserver.go:52] "Watching apiserver" Jul 7 01:13:28.034124 kubelet[2230]: I0707 01:13:28.034050 2230 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 01:13:28.251113 kubelet[2230]: I0707 01:13:28.249358 2230 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:28.256575 kubelet[2230]: E0707 01:13:28.255616 2230 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:29.331410 kubelet[2230]: I0707 01:13:29.331224 2230 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:29.351801 kubelet[2230]: I0707 01:13:29.351586 2230 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 01:13:30.514944 systemd[1]: Reloading requested from client PID 2511 ('systemctl') (unit session-11.scope)... Jul 7 01:13:30.515651 systemd[1]: Reloading... Jul 7 01:13:30.631921 zram_generator::config[2550]: No configuration found. 
Jul 7 01:13:30.784357 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 01:13:30.895732 systemd[1]: Reloading finished in 379 ms. Jul 7 01:13:30.940814 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:13:30.959101 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 01:13:30.959380 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:13:30.959504 systemd[1]: kubelet.service: Consumed 1.297s CPU time, 133.9M memory peak, 0B memory swap peak. Jul 7 01:13:30.963215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:13:31.361217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:13:31.369575 (kubelet)[2614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 01:13:31.493440 kubelet[2614]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 01:13:31.493440 kubelet[2614]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 01:13:31.493440 kubelet[2614]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 01:13:31.493946 kubelet[2614]: I0707 01:13:31.493515 2614 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 01:13:31.514822 kubelet[2614]: I0707 01:13:31.514781 2614 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 7 01:13:31.515433 kubelet[2614]: I0707 01:13:31.514895 2614 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 01:13:31.515433 kubelet[2614]: I0707 01:13:31.515251 2614 server.go:956] "Client rotation is on, will bootstrap in background" Jul 7 01:13:31.519741 kubelet[2614]: I0707 01:13:31.519281 2614 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 7 01:13:31.526353 kubelet[2614]: I0707 01:13:31.526070 2614 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 01:13:31.532505 kubelet[2614]: E0707 01:13:31.532465 2614 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 01:13:31.532505 kubelet[2614]: I0707 01:13:31.532509 2614 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 01:13:31.542360 kubelet[2614]: I0707 01:13:31.541158 2614 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 01:13:31.542360 kubelet[2614]: I0707 01:13:31.541502 2614 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 01:13:31.543793 kubelet[2614]: I0707 01:13:31.541552 2614 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-4-0-2961e92ed0.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 01:13:31.544552 kubelet[2614]: I0707 01:13:31.544527 2614 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 01:13:31.545810 kubelet[2614]: I0707 01:13:31.544664 2614 container_manager_linux.go:303] "Creating device plugin manager" Jul 7 01:13:31.545810 kubelet[2614]: I0707 01:13:31.544787 2614 state_mem.go:36] "Initialized new in-memory state store" Jul 7 01:13:31.545810 kubelet[2614]: I0707 01:13:31.545063 2614 kubelet.go:480] "Attempting to sync node with API server" Jul 7 01:13:31.545810 kubelet[2614]: I0707 01:13:31.545085 2614 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 01:13:31.545810 kubelet[2614]: I0707 01:13:31.545129 2614 kubelet.go:386] "Adding apiserver pod source" Jul 7 01:13:31.545810 kubelet[2614]: I0707 01:13:31.545152 2614 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 01:13:31.550037 kubelet[2614]: I0707 01:13:31.550007 2614 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 01:13:31.550676 kubelet[2614]: I0707 01:13:31.550644 2614 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 7 01:13:31.561891 kubelet[2614]: I0707 01:13:31.559131 2614 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 01:13:31.561891 kubelet[2614]: I0707 01:13:31.559204 2614 server.go:1289] "Started kubelet" Jul 7 01:13:31.567977 kubelet[2614]: I0707 01:13:31.567940 2614 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 01:13:31.579717 kubelet[2614]: E0707 
01:13:31.579674 2614 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 01:13:31.580034 kubelet[2614]: I0707 01:13:31.579985 2614 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 01:13:31.581446 kubelet[2614]: I0707 01:13:31.581403 2614 server.go:317] "Adding debug handlers to kubelet server" Jul 7 01:13:31.591497 kubelet[2614]: I0707 01:13:31.591413 2614 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 01:13:31.591793 kubelet[2614]: I0707 01:13:31.591767 2614 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 01:13:31.593001 kubelet[2614]: I0707 01:13:31.592169 2614 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 01:13:31.601439 kubelet[2614]: I0707 01:13:31.601400 2614 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 01:13:31.604829 kubelet[2614]: I0707 01:13:31.602854 2614 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 01:13:31.604829 kubelet[2614]: I0707 01:13:31.603008 2614 reconciler.go:26] "Reconciler: start to sync state" Jul 7 01:13:31.609591 kubelet[2614]: I0707 01:13:31.609539 2614 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 7 01:13:31.613662 kubelet[2614]: I0707 01:13:31.613569 2614 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 7 01:13:31.614031 kubelet[2614]: I0707 01:13:31.614013 2614 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 7 01:13:31.614194 kubelet[2614]: I0707 01:13:31.614174 2614 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 01:13:31.614314 kubelet[2614]: I0707 01:13:31.614301 2614 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 01:13:31.614493 kubelet[2614]: E0707 01:13:31.614444 2614 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 01:13:31.627916 kubelet[2614]: I0707 01:13:31.625691 2614 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 01:13:31.642463 kubelet[2614]: I0707 01:13:31.642346 2614 factory.go:223] Registration of the containerd container factory successfully Jul 7 01:13:31.642463 kubelet[2614]: I0707 01:13:31.642378 2614 factory.go:223] Registration of the systemd container factory successfully Jul 7 01:13:31.715998 kubelet[2614]: E0707 01:13:31.715946 2614 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 01:13:31.727362 kubelet[2614]: I0707 01:13:31.727339 2614 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 01:13:31.727916 kubelet[2614]: I0707 01:13:31.727523 2614 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 01:13:31.727916 kubelet[2614]: I0707 01:13:31.727557 2614 state_mem.go:36] "Initialized new in-memory state store" Jul 7 01:13:31.727916 kubelet[2614]: I0707 01:13:31.727725 2614 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 01:13:31.727916 kubelet[2614]: I0707 01:13:31.727770 2614 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 01:13:31.727916 kubelet[2614]: I0707 01:13:31.727810 2614 policy_none.go:49] "None policy: Start" Jul 7 01:13:31.727916 kubelet[2614]: I0707 01:13:31.727847 2614 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 01:13:31.728192 kubelet[2614]: I0707 01:13:31.728178 2614 state_mem.go:35] "Initializing new in-memory state store" Jul 7 01:13:31.728480 kubelet[2614]: I0707 01:13:31.728362 2614 state_mem.go:75] "Updated machine memory state" Jul 7 01:13:31.735150 kubelet[2614]: E0707 01:13:31.735114 2614 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 01:13:31.736513 kubelet[2614]: I0707 01:13:31.736055 2614 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 01:13:31.736513 kubelet[2614]: I0707 01:13:31.736085 2614 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 01:13:31.736513 kubelet[2614]: I0707 01:13:31.736403 2614 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 01:13:31.738407 kubelet[2614]: E0707 01:13:31.738372 2614 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 01:13:31.844811 kubelet[2614]: I0707 01:13:31.844731 2614 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:31.875266 kubelet[2614]: I0707 01:13:31.874974 2614 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:31.876063 kubelet[2614]: I0707 01:13:31.875727 2614 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:31.919604 kubelet[2614]: I0707 01:13:31.918271 2614 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:31.919604 kubelet[2614]: I0707 01:13:31.918777 2614 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:31.920081 kubelet[2614]: I0707 01:13:31.920065 2614 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:31.933585 kubelet[2614]: I0707 01:13:31.933536 2614 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 01:13:31.938416 kubelet[2614]: I0707 01:13:31.938281 2614 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 01:13:31.939290 kubelet[2614]: I0707 01:13:31.938849 2614 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 01:13:31.939479 kubelet[2614]: E0707 01:13:31.939404 2614 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-4-0-2961e92ed0.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:32.004802 kubelet[2614]: I0707 01:13:32.004436 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1b00cfc6c85dfe639649a5e83ae72a3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal\" (UID: \"f1b00cfc6c85dfe639649a5e83ae72a3\") " pod="kube-system/kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:32.004802 kubelet[2614]: I0707 01:13:32.004473 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b2e41f2a0183ea4ce5d03caf464b0f9-ca-certs\") pod \"kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal\" (UID: \"0b2e41f2a0183ea4ce5d03caf464b0f9\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:32.004802 kubelet[2614]: I0707 01:13:32.004497 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0b2e41f2a0183ea4ce5d03caf464b0f9-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal\" (UID: \"0b2e41f2a0183ea4ce5d03caf464b0f9\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:32.004802 kubelet[2614]: I0707 01:13:32.004528 2614 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/502e76a70bd5eb6e3bcf0fcb81811131-kubeconfig\") pod \"kube-scheduler-ci-4081-3-4-0-2961e92ed0.novalocal\" (UID: \"502e76a70bd5eb6e3bcf0fcb81811131\") " pod="kube-system/kube-scheduler-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:32.005208 kubelet[2614]: I0707 01:13:32.004554 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1b00cfc6c85dfe639649a5e83ae72a3-ca-certs\") pod \"kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal\" (UID: \"f1b00cfc6c85dfe639649a5e83ae72a3\") " pod="kube-system/kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:32.005208 kubelet[2614]: I0707 01:13:32.004579 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1b00cfc6c85dfe639649a5e83ae72a3-k8s-certs\") pod \"kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal\" (UID: \"f1b00cfc6c85dfe639649a5e83ae72a3\") " pod="kube-system/kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:32.005208 kubelet[2614]: I0707 01:13:32.004606 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b2e41f2a0183ea4ce5d03caf464b0f9-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal\" (UID: \"0b2e41f2a0183ea4ce5d03caf464b0f9\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:32.005208 kubelet[2614]: I0707 01:13:32.004633 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b2e41f2a0183ea4ce5d03caf464b0f9-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal\" (UID: \"0b2e41f2a0183ea4ce5d03caf464b0f9\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:32.005431 kubelet[2614]: I0707 01:13:32.004661 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b2e41f2a0183ea4ce5d03caf464b0f9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal\" (UID: \"0b2e41f2a0183ea4ce5d03caf464b0f9\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:13:32.548311 kubelet[2614]: I0707 01:13:32.547722 2614 apiserver.go:52] "Watching apiserver" Jul 7 01:13:32.603136 kubelet[2614]: I0707 01:13:32.603043 2614 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 01:13:33.192234 kubelet[2614]: I0707 01:13:33.192058 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal" podStartSLOduration=2.19198484 podStartE2EDuration="2.19198484s" podCreationTimestamp="2025-07-07 01:13:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:13:32.93095673 +0000 UTC m=+1.548697159" watchObservedRunningTime="2025-07-07 01:13:33.19198484 +0000 UTC m=+1.809725320" Jul 7 01:13:33.192820 kubelet[2614]: I0707 01:13:33.192333 2614 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-scheduler-ci-4081-3-4-0-2961e92ed0.novalocal" podStartSLOduration=4.192318576 podStartE2EDuration="4.192318576s" podCreationTimestamp="2025-07-07 01:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:13:33.190984403 +0000 UTC m=+1.808724902" watchObservedRunningTime="2025-07-07 01:13:33.192318576 +0000 UTC m=+1.810059056" Jul 7 01:13:33.300696 kubelet[2614]: I0707 01:13:33.300612 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal" podStartSLOduration=2.30059496 podStartE2EDuration="2.30059496s" podCreationTimestamp="2025-07-07 01:13:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:13:33.26103923 +0000 UTC m=+1.878779719" watchObservedRunningTime="2025-07-07 01:13:33.30059496 +0000 UTC m=+1.918335389" Jul 7 01:13:36.328544 kubelet[2614]: I0707 01:13:36.327696 2614 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 01:13:36.333851 kubelet[2614]: I0707 01:13:36.332637 2614 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 01:13:36.334082 containerd[1462]: time="2025-07-07T01:13:36.331575466Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 01:13:37.286063 systemd[1]: Created slice kubepods-besteffort-podebb25ac8_8c28_4ef9_8ece_080fe26d7ebf.slice - libcontainer container kubepods-besteffort-podebb25ac8_8c28_4ef9_8ece_080fe26d7ebf.slice. Jul 7 01:13:37.305374 kubelet[2614]: I0707 01:13:37.305290 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ebb25ac8-8c28-4ef9-8ece-080fe26d7ebf-lib-modules\") pod \"kube-proxy-j29bx\" (UID: \"ebb25ac8-8c28-4ef9-8ece-080fe26d7ebf\") " pod="kube-system/kube-proxy-j29bx" Jul 7 01:13:37.305374 kubelet[2614]: I0707 01:13:37.305361 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ebb25ac8-8c28-4ef9-8ece-080fe26d7ebf-kube-proxy\") pod \"kube-proxy-j29bx\" (UID: \"ebb25ac8-8c28-4ef9-8ece-080fe26d7ebf\") " pod="kube-system/kube-proxy-j29bx" Jul 7 01:13:37.305967 kubelet[2614]: I0707 01:13:37.305383 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2cnf\" (UniqueName: \"kubernetes.io/projected/ebb25ac8-8c28-4ef9-8ece-080fe26d7ebf-kube-api-access-r2cnf\") pod \"kube-proxy-j29bx\" (UID: \"ebb25ac8-8c28-4ef9-8ece-080fe26d7ebf\") " pod="kube-system/kube-proxy-j29bx" Jul 7 01:13:37.305967 kubelet[2614]: I0707 01:13:37.305474 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ebb25ac8-8c28-4ef9-8ece-080fe26d7ebf-xtables-lock\") pod \"kube-proxy-j29bx\" (UID: \"ebb25ac8-8c28-4ef9-8ece-080fe26d7ebf\") " pod="kube-system/kube-proxy-j29bx" Jul 7 01:13:37.485005 systemd[1]: Created slice kubepods-besteffort-pod2980795e_09f7_4095_957d_e01c74f573a0.slice - libcontainer container kubepods-besteffort-pod2980795e_09f7_4095_957d_e01c74f573a0.slice. 
Jul 7 01:13:37.507801 kubelet[2614]: I0707 01:13:37.507711 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2980795e-09f7-4095-957d-e01c74f573a0-var-lib-calico\") pod \"tigera-operator-747864d56d-ksmf8\" (UID: \"2980795e-09f7-4095-957d-e01c74f573a0\") " pod="tigera-operator/tigera-operator-747864d56d-ksmf8" Jul 7 01:13:37.507801 kubelet[2614]: I0707 01:13:37.507758 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pszmc\" (UniqueName: \"kubernetes.io/projected/2980795e-09f7-4095-957d-e01c74f573a0-kube-api-access-pszmc\") pod \"tigera-operator-747864d56d-ksmf8\" (UID: \"2980795e-09f7-4095-957d-e01c74f573a0\") " pod="tigera-operator/tigera-operator-747864d56d-ksmf8" Jul 7 01:13:37.602075 containerd[1462]: time="2025-07-07T01:13:37.601011978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j29bx,Uid:ebb25ac8-8c28-4ef9-8ece-080fe26d7ebf,Namespace:kube-system,Attempt:0,}" Jul 7 01:13:37.692853 containerd[1462]: time="2025-07-07T01:13:37.692499181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:13:37.692853 containerd[1462]: time="2025-07-07T01:13:37.692603777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:13:37.692853 containerd[1462]: time="2025-07-07T01:13:37.692617683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:13:37.692853 containerd[1462]: time="2025-07-07T01:13:37.692705528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:13:37.730330 systemd[1]: Started cri-containerd-840291068fda00c3347fc3e95bfe375fbb352383ac18e0f8ff37bfbe1093b2b3.scope - libcontainer container 840291068fda00c3347fc3e95bfe375fbb352383ac18e0f8ff37bfbe1093b2b3. 
Jul 7 01:13:37.768933 containerd[1462]: time="2025-07-07T01:13:37.768813949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j29bx,Uid:ebb25ac8-8c28-4ef9-8ece-080fe26d7ebf,Namespace:kube-system,Attempt:0,} returns sandbox id \"840291068fda00c3347fc3e95bfe375fbb352383ac18e0f8ff37bfbe1093b2b3\"" Jul 7 01:13:37.779324 containerd[1462]: time="2025-07-07T01:13:37.779174025Z" level=info msg="CreateContainer within sandbox \"840291068fda00c3347fc3e95bfe375fbb352383ac18e0f8ff37bfbe1093b2b3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 01:13:37.795149 containerd[1462]: time="2025-07-07T01:13:37.794768143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-ksmf8,Uid:2980795e-09f7-4095-957d-e01c74f573a0,Namespace:tigera-operator,Attempt:0,}" Jul 7 01:13:37.809611 containerd[1462]: time="2025-07-07T01:13:37.809555886Z" level=info msg="CreateContainer within sandbox \"840291068fda00c3347fc3e95bfe375fbb352383ac18e0f8ff37bfbe1093b2b3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f052e8b2c225ac4849d2e88d32258783623fe81f4af4802b9eebf160b2c21a23\"" Jul 7 01:13:37.811226 containerd[1462]: time="2025-07-07T01:13:37.810988905Z" level=info msg="StartContainer for \"f052e8b2c225ac4849d2e88d32258783623fe81f4af4802b9eebf160b2c21a23\"" Jul 7 01:13:37.854397 containerd[1462]: time="2025-07-07T01:13:37.853686592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:13:37.854397 containerd[1462]: time="2025-07-07T01:13:37.854186470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:13:37.854397 containerd[1462]: time="2025-07-07T01:13:37.854257704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:13:37.855311 containerd[1462]: time="2025-07-07T01:13:37.854749987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:13:37.882796 systemd[1]: Started cri-containerd-f052e8b2c225ac4849d2e88d32258783623fe81f4af4802b9eebf160b2c21a23.scope - libcontainer container f052e8b2c225ac4849d2e88d32258783623fe81f4af4802b9eebf160b2c21a23. Jul 7 01:13:37.903444 systemd[1]: Started cri-containerd-0e854a31e5a1d7ff428680197323e8a22812efa4ad22f9760f2d0a68f599989b.scope - libcontainer container 0e854a31e5a1d7ff428680197323e8a22812efa4ad22f9760f2d0a68f599989b. Jul 7 01:13:37.948604 containerd[1462]: time="2025-07-07T01:13:37.948528200Z" level=info msg="StartContainer for \"f052e8b2c225ac4849d2e88d32258783623fe81f4af4802b9eebf160b2c21a23\" returns successfully" Jul 7 01:13:37.980919 containerd[1462]: time="2025-07-07T01:13:37.980848848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-ksmf8,Uid:2980795e-09f7-4095-957d-e01c74f573a0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0e854a31e5a1d7ff428680197323e8a22812efa4ad22f9760f2d0a68f599989b\"" Jul 7 01:13:37.984836 containerd[1462]: time="2025-07-07T01:13:37.984528902Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 01:13:38.481449 systemd[1]: run-containerd-runc-k8s.io-840291068fda00c3347fc3e95bfe375fbb352383ac18e0f8ff37bfbe1093b2b3-runc.mR5uI2.mount: Deactivated successfully. 
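The records above trace the standard CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox and returns a container id, and StartContainer runs it. A minimal Go sketch of that sequence against the v1 CRI API (k8s.io/cri-api); the socket path is an assumption for this host, and error handling is elided:

```go
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed containerd CRI endpoint on this host.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rtc := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox: the other end of the "returns sandbox id
	//    840291068fda..." record above.
	sb, _ := rtc.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-proxy-j29bx",
				Uid:       "ebb25ac8-8c28-4ef9-8ece-080fe26d7ebf",
				Namespace: "kube-system",
			},
		},
	})

	// 2. CreateContainer within that sandbox: yields the container id
	//    (f052e8b2c225... above).
	cc, _ := rtc.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
		},
	})

	// 3. StartContainer: matched by "StartContainer ... returns
	//    successfully" in the log.
	rtc.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: cc.ContainerId,
	})
}
```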
Jul 7 01:13:38.976744 kubelet[2614]: I0707 01:13:38.976405 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j29bx" podStartSLOduration=1.976269345 podStartE2EDuration="1.976269345s" podCreationTimestamp="2025-07-07 01:13:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:13:38.975967859 +0000 UTC m=+7.593708399" watchObservedRunningTime="2025-07-07 01:13:38.976269345 +0000 UTC m=+7.594009874" Jul 7 01:13:39.678545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2411951593.mount: Deactivated successfully. Jul 7 01:13:41.305931 containerd[1462]: time="2025-07-07T01:13:41.304594931Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:13:41.332734 containerd[1462]: time="2025-07-07T01:13:41.332579571Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 7 01:13:41.390938 containerd[1462]: time="2025-07-07T01:13:41.390788941Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:13:41.450938 containerd[1462]: time="2025-07-07T01:13:41.449818371Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:13:41.456153 containerd[1462]: time="2025-07-07T01:13:41.456057706Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 3.471334539s" Jul 7 01:13:41.456501 containerd[1462]: time="2025-07-07T01:13:41.456357999Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 7 01:13:41.542550 containerd[1462]: time="2025-07-07T01:13:41.542427235Z" level=info msg="CreateContainer within sandbox \"0e854a31e5a1d7ff428680197323e8a22812efa4ad22f9760f2d0a68f599989b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 01:13:41.802759 containerd[1462]: time="2025-07-07T01:13:41.802703483Z" level=info msg="CreateContainer within sandbox \"0e854a31e5a1d7ff428680197323e8a22812efa4ad22f9760f2d0a68f599989b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"64d31f80eee991e48fdc3eecacd6f63cf24321d7d89b9bddc653497cb45e3b11\"" Jul 7 01:13:41.805892 containerd[1462]: time="2025-07-07T01:13:41.804240257Z" level=info msg="StartContainer for \"64d31f80eee991e48fdc3eecacd6f63cf24321d7d89b9bddc653497cb45e3b11\"" Jul 7 01:13:41.882185 systemd[1]: Started cri-containerd-64d31f80eee991e48fdc3eecacd6f63cf24321d7d89b9bddc653497cb45e3b11.scope - libcontainer container 64d31f80eee991e48fdc3eecacd6f63cf24321d7d89b9bddc653497cb45e3b11. 
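For scale: containerd reports the tigera-operator image at 25,052,538 bytes pulled in 3.471334539s, roughly 6.9 MiB/s, which is consistent with the firstStartedPulling/lastFinishedPulling window (01:13:37.98 to 01:13:41.46) in the kubelet SLO record further down. A one-line check:

```go
package main

import "fmt"

func main() {
	const bytes = 25052538      // image size reported by containerd
	const seconds = 3.471334539 // pull duration from the log
	fmt.Printf("%.1f MiB/s\n", bytes/seconds/(1<<20)) // 6.9 MiB/s
}
```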
Jul 7 01:13:42.036935 containerd[1462]: time="2025-07-07T01:13:42.036420846Z" level=info msg="StartContainer for \"64d31f80eee991e48fdc3eecacd6f63cf24321d7d89b9bddc653497cb45e3b11\" returns successfully" Jul 7 01:13:50.282837 sudo[1715]: pam_unix(sudo:session): session closed for user root Jul 7 01:13:50.565382 sshd[1712]: pam_unix(sshd:session): session closed for user core Jul 7 01:13:50.574743 systemd[1]: sshd@8-172.24.4.54:22-172.24.4.1:42590.service: Deactivated successfully. Jul 7 01:13:50.578582 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 01:13:50.581307 systemd[1]: session-11.scope: Consumed 9.099s CPU time, 159.5M memory peak, 0B memory swap peak. Jul 7 01:13:50.583241 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit. Jul 7 01:13:50.588523 systemd-logind[1444]: Removed session 11. Jul 7 01:13:54.835005 kubelet[2614]: I0707 01:13:54.834566 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-ksmf8" podStartSLOduration=14.356944791 podStartE2EDuration="17.834530888s" podCreationTimestamp="2025-07-07 01:13:37 +0000 UTC" firstStartedPulling="2025-07-07 01:13:37.983228953 +0000 UTC m=+6.600969382" lastFinishedPulling="2025-07-07 01:13:41.46081499 +0000 UTC m=+10.078555479" observedRunningTime="2025-07-07 01:13:43.050705252 +0000 UTC m=+11.668445731" watchObservedRunningTime="2025-07-07 01:13:54.834530888 +0000 UTC m=+23.452271327" Jul 7 01:13:54.851700 systemd[1]: Created slice kubepods-besteffort-pod430619fb_2d3c_4b9c_b452_44ae35992eb5.slice - libcontainer container kubepods-besteffort-pod430619fb_2d3c_4b9c_b452_44ae35992eb5.slice. Jul 7 01:13:55.010851 kubelet[2614]: I0707 01:13:55.010785 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/430619fb-2d3c-4b9c-b452-44ae35992eb5-typha-certs\") pod \"calico-typha-75c9d4955f-4ncxn\" (UID: \"430619fb-2d3c-4b9c-b452-44ae35992eb5\") " pod="calico-system/calico-typha-75c9d4955f-4ncxn" Jul 7 01:13:55.011391 kubelet[2614]: I0707 01:13:55.011247 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jz4m\" (UniqueName: \"kubernetes.io/projected/430619fb-2d3c-4b9c-b452-44ae35992eb5-kube-api-access-5jz4m\") pod \"calico-typha-75c9d4955f-4ncxn\" (UID: \"430619fb-2d3c-4b9c-b452-44ae35992eb5\") " pod="calico-system/calico-typha-75c9d4955f-4ncxn" Jul 7 01:13:55.011563 kubelet[2614]: I0707 01:13:55.011420 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/430619fb-2d3c-4b9c-b452-44ae35992eb5-tigera-ca-bundle\") pod \"calico-typha-75c9d4955f-4ncxn\" (UID: \"430619fb-2d3c-4b9c-b452-44ae35992eb5\") " pod="calico-system/calico-typha-75c9d4955f-4ncxn" Jul 7 01:13:55.159112 containerd[1462]: time="2025-07-07T01:13:55.157806570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75c9d4955f-4ncxn,Uid:430619fb-2d3c-4b9c-b452-44ae35992eb5,Namespace:calico-system,Attempt:0,}" Jul 7 01:13:55.228706 containerd[1462]: time="2025-07-07T01:13:55.227120301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:13:55.228706 containerd[1462]: time="2025-07-07T01:13:55.228636595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:13:55.228989 containerd[1462]: time="2025-07-07T01:13:55.228652425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:13:55.228989 containerd[1462]: time="2025-07-07T01:13:55.228975210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:13:55.255216 systemd[1]: Created slice kubepods-besteffort-pod641426c8_2859_4e26_9846_ca898aca89df.slice - libcontainer container kubepods-besteffort-pod641426c8_2859_4e26_9846_ca898aca89df.slice. Jul 7 01:13:55.284157 systemd[1]: Started cri-containerd-0eb5153306fc2b895913e79fcdb7a8e1007fab34a30135a217ca5a83fe9e8fcc.scope - libcontainer container 0eb5153306fc2b895913e79fcdb7a8e1007fab34a30135a217ca5a83fe9e8fcc. Jul 7 01:13:55.313991 kubelet[2614]: I0707 01:13:55.313940 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/641426c8-2859-4e26-9846-ca898aca89df-cni-net-dir\") pod \"calico-node-g49jb\" (UID: \"641426c8-2859-4e26-9846-ca898aca89df\") " pod="calico-system/calico-node-g49jb" Jul 7 01:13:55.314126 kubelet[2614]: I0707 01:13:55.313991 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/641426c8-2859-4e26-9846-ca898aca89df-flexvol-driver-host\") pod \"calico-node-g49jb\" (UID: \"641426c8-2859-4e26-9846-ca898aca89df\") " pod="calico-system/calico-node-g49jb" Jul 7 01:13:55.314126 kubelet[2614]: I0707 01:13:55.314046 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z6bl\" (UniqueName: \"kubernetes.io/projected/641426c8-2859-4e26-9846-ca898aca89df-kube-api-access-5z6bl\") pod \"calico-node-g49jb\" (UID: \"641426c8-2859-4e26-9846-ca898aca89df\") " pod="calico-system/calico-node-g49jb" Jul 7 01:13:55.314126 kubelet[2614]: I0707 01:13:55.314077 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/641426c8-2859-4e26-9846-ca898aca89df-lib-modules\") pod \"calico-node-g49jb\" (UID: \"641426c8-2859-4e26-9846-ca898aca89df\") " pod="calico-system/calico-node-g49jb" Jul 7 01:13:55.314126 kubelet[2614]: I0707 01:13:55.314102 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/641426c8-2859-4e26-9846-ca898aca89df-var-run-calico\") pod \"calico-node-g49jb\" (UID: \"641426c8-2859-4e26-9846-ca898aca89df\") " pod="calico-system/calico-node-g49jb" Jul 7 01:13:55.314126 kubelet[2614]: I0707 01:13:55.314122 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/641426c8-2859-4e26-9846-ca898aca89df-xtables-lock\") pod \"calico-node-g49jb\" (UID: \"641426c8-2859-4e26-9846-ca898aca89df\") " pod="calico-system/calico-node-g49jb" Jul 7 01:13:55.314397 kubelet[2614]: I0707 01:13:55.314141 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/641426c8-2859-4e26-9846-ca898aca89df-tigera-ca-bundle\") pod \"calico-node-g49jb\" (UID: 
\"641426c8-2859-4e26-9846-ca898aca89df\") " pod="calico-system/calico-node-g49jb" Jul 7 01:13:55.314397 kubelet[2614]: I0707 01:13:55.314182 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/641426c8-2859-4e26-9846-ca898aca89df-cni-bin-dir\") pod \"calico-node-g49jb\" (UID: \"641426c8-2859-4e26-9846-ca898aca89df\") " pod="calico-system/calico-node-g49jb" Jul 7 01:13:55.314397 kubelet[2614]: I0707 01:13:55.314209 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/641426c8-2859-4e26-9846-ca898aca89df-cni-log-dir\") pod \"calico-node-g49jb\" (UID: \"641426c8-2859-4e26-9846-ca898aca89df\") " pod="calico-system/calico-node-g49jb" Jul 7 01:13:55.314397 kubelet[2614]: I0707 01:13:55.314228 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/641426c8-2859-4e26-9846-ca898aca89df-node-certs\") pod \"calico-node-g49jb\" (UID: \"641426c8-2859-4e26-9846-ca898aca89df\") " pod="calico-system/calico-node-g49jb" Jul 7 01:13:55.314397 kubelet[2614]: I0707 01:13:55.314271 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/641426c8-2859-4e26-9846-ca898aca89df-policysync\") pod \"calico-node-g49jb\" (UID: \"641426c8-2859-4e26-9846-ca898aca89df\") " pod="calico-system/calico-node-g49jb" Jul 7 01:13:55.314629 kubelet[2614]: I0707 01:13:55.314293 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/641426c8-2859-4e26-9846-ca898aca89df-var-lib-calico\") pod \"calico-node-g49jb\" (UID: \"641426c8-2859-4e26-9846-ca898aca89df\") " pod="calico-system/calico-node-g49jb" Jul 7 01:13:55.367222 containerd[1462]: time="2025-07-07T01:13:55.367071588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75c9d4955f-4ncxn,Uid:430619fb-2d3c-4b9c-b452-44ae35992eb5,Namespace:calico-system,Attempt:0,} returns sandbox id \"0eb5153306fc2b895913e79fcdb7a8e1007fab34a30135a217ca5a83fe9e8fcc\"" Jul 7 01:13:55.374342 containerd[1462]: time="2025-07-07T01:13:55.373978925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 01:13:55.417992 kubelet[2614]: E0707 01:13:55.416441 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.417992 kubelet[2614]: W0707 01:13:55.416480 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.417992 kubelet[2614]: E0707 01:13:55.416542 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:13:55.418767 kubelet[2614]: E0707 01:13:55.418733 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.418767 kubelet[2614]: W0707 01:13:55.418754 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.418993 kubelet[2614]: E0707 01:13:55.418773 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.420099 kubelet[2614]: E0707 01:13:55.420077 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.420099 kubelet[2614]: W0707 01:13:55.420095 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.420308 kubelet[2614]: E0707 01:13:55.420110 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.420442 kubelet[2614]: E0707 01:13:55.420421 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.420442 kubelet[2614]: W0707 01:13:55.420439 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.422669 kubelet[2614]: E0707 01:13:55.420451 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.422669 kubelet[2614]: E0707 01:13:55.420745 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.422669 kubelet[2614]: W0707 01:13:55.420756 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.422669 kubelet[2614]: E0707 01:13:55.420768 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.422669 kubelet[2614]: E0707 01:13:55.420932 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.422669 kubelet[2614]: W0707 01:13:55.420942 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.422669 kubelet[2614]: E0707 01:13:55.420953 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:13:55.422669 kubelet[2614]: E0707 01:13:55.421090 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.422669 kubelet[2614]: W0707 01:13:55.421100 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.422669 kubelet[2614]: E0707 01:13:55.421110 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.423083 kubelet[2614]: E0707 01:13:55.421295 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.423083 kubelet[2614]: W0707 01:13:55.421306 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.423083 kubelet[2614]: E0707 01:13:55.421318 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.423083 kubelet[2614]: E0707 01:13:55.421544 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.423083 kubelet[2614]: W0707 01:13:55.421558 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.423083 kubelet[2614]: E0707 01:13:55.421569 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.423083 kubelet[2614]: E0707 01:13:55.421787 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.423083 kubelet[2614]: W0707 01:13:55.421798 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.423083 kubelet[2614]: E0707 01:13:55.421808 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.423083 kubelet[2614]: E0707 01:13:55.422015 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.423415 kubelet[2614]: W0707 01:13:55.422025 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.423415 kubelet[2614]: E0707 01:13:55.422035 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:13:55.423415 kubelet[2614]: E0707 01:13:55.422241 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.423415 kubelet[2614]: W0707 01:13:55.422251 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.423415 kubelet[2614]: E0707 01:13:55.422261 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.423415 kubelet[2614]: E0707 01:13:55.422406 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.423415 kubelet[2614]: W0707 01:13:55.422416 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.423415 kubelet[2614]: E0707 01:13:55.422426 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.423415 kubelet[2614]: E0707 01:13:55.422595 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.423415 kubelet[2614]: W0707 01:13:55.422604 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.423713 kubelet[2614]: E0707 01:13:55.422615 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.423713 kubelet[2614]: E0707 01:13:55.422770 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.423713 kubelet[2614]: W0707 01:13:55.422780 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.423713 kubelet[2614]: E0707 01:13:55.422789 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.423713 kubelet[2614]: E0707 01:13:55.423351 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.423713 kubelet[2614]: W0707 01:13:55.423363 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.423713 kubelet[2614]: E0707 01:13:55.423374 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:13:55.423713 kubelet[2614]: E0707 01:13:55.423531 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.423713 kubelet[2614]: W0707 01:13:55.423541 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.423713 kubelet[2614]: E0707 01:13:55.423551 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.426323 kubelet[2614]: E0707 01:13:55.423795 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.426323 kubelet[2614]: W0707 01:13:55.423807 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.426323 kubelet[2614]: E0707 01:13:55.423836 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.426323 kubelet[2614]: E0707 01:13:55.424074 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.426323 kubelet[2614]: W0707 01:13:55.424085 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.426323 kubelet[2614]: E0707 01:13:55.424096 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.426323 kubelet[2614]: E0707 01:13:55.424252 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.426323 kubelet[2614]: W0707 01:13:55.424262 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.426323 kubelet[2614]: E0707 01:13:55.424272 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.426323 kubelet[2614]: E0707 01:13:55.424445 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.426662 kubelet[2614]: W0707 01:13:55.424455 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.426662 kubelet[2614]: E0707 01:13:55.424465 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:13:55.427958 kubelet[2614]: E0707 01:13:55.427721 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.427958 kubelet[2614]: W0707 01:13:55.427750 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.427958 kubelet[2614]: E0707 01:13:55.427775 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.428470 kubelet[2614]: E0707 01:13:55.428456 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.429581 kubelet[2614]: W0707 01:13:55.428538 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.429581 kubelet[2614]: E0707 01:13:55.428556 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.431073 kubelet[2614]: E0707 01:13:55.431025 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.431316 kubelet[2614]: W0707 01:13:55.431247 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.431797 kubelet[2614]: E0707 01:13:55.431596 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.432982 kubelet[2614]: E0707 01:13:55.432950 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.432982 kubelet[2614]: W0707 01:13:55.432976 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.433168 kubelet[2614]: E0707 01:13:55.432998 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.433226 kubelet[2614]: E0707 01:13:55.433205 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.433226 kubelet[2614]: W0707 01:13:55.433221 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.433342 kubelet[2614]: E0707 01:13:55.433232 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:13:55.433488 kubelet[2614]: E0707 01:13:55.433462 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.433488 kubelet[2614]: W0707 01:13:55.433475 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.433488 kubelet[2614]: E0707 01:13:55.433485 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.433818 kubelet[2614]: E0707 01:13:55.433673 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.433818 kubelet[2614]: W0707 01:13:55.433683 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.433818 kubelet[2614]: E0707 01:13:55.433692 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.434381 kubelet[2614]: E0707 01:13:55.433871 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.434381 kubelet[2614]: W0707 01:13:55.433881 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.434381 kubelet[2614]: E0707 01:13:55.433892 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.434381 kubelet[2614]: E0707 01:13:55.434063 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.434381 kubelet[2614]: W0707 01:13:55.434073 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.434381 kubelet[2614]: E0707 01:13:55.434082 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.434381 kubelet[2614]: E0707 01:13:55.434226 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.434381 kubelet[2614]: W0707 01:13:55.434235 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.434381 kubelet[2614]: E0707 01:13:55.434245 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:13:55.436150 kubelet[2614]: E0707 01:13:55.434398 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.436150 kubelet[2614]: W0707 01:13:55.434408 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.436150 kubelet[2614]: E0707 01:13:55.434430 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.436150 kubelet[2614]: E0707 01:13:55.434956 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.436150 kubelet[2614]: W0707 01:13:55.434967 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.436150 kubelet[2614]: E0707 01:13:55.434978 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.436150 kubelet[2614]: E0707 01:13:55.435239 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.436150 kubelet[2614]: W0707 01:13:55.435249 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.436150 kubelet[2614]: E0707 01:13:55.435260 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.438065 kubelet[2614]: E0707 01:13:55.436428 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.438065 kubelet[2614]: W0707 01:13:55.436666 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.438065 kubelet[2614]: E0707 01:13:55.436691 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.438065 kubelet[2614]: E0707 01:13:55.437028 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.438065 kubelet[2614]: W0707 01:13:55.437043 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.438065 kubelet[2614]: E0707 01:13:55.437055 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:13:55.438065 kubelet[2614]: E0707 01:13:55.437253 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.438065 kubelet[2614]: W0707 01:13:55.437264 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.438065 kubelet[2614]: E0707 01:13:55.437276 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.438065 kubelet[2614]: E0707 01:13:55.437493 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.438435 kubelet[2614]: W0707 01:13:55.437504 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.438435 kubelet[2614]: E0707 01:13:55.437513 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.439201 kubelet[2614]: E0707 01:13:55.438826 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.439201 kubelet[2614]: W0707 01:13:55.438910 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.439201 kubelet[2614]: E0707 01:13:55.438952 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.456094 kubelet[2614]: E0707 01:13:55.455251 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bz688" podUID="9b5c20f3-010e-455a-af88-ed3ca60a5bc4" Jul 7 01:13:55.461556 kubelet[2614]: E0707 01:13:55.461528 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.462018 kubelet[2614]: W0707 01:13:55.461850 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.462018 kubelet[2614]: E0707 01:13:55.461888 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:13:55.515648 kubelet[2614]: E0707 01:13:55.515130 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.515648 kubelet[2614]: W0707 01:13:55.515153 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.515648 kubelet[2614]: E0707 01:13:55.515195 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.519945 kubelet[2614]: E0707 01:13:55.519478 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.519945 kubelet[2614]: W0707 01:13:55.519799 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.519945 kubelet[2614]: E0707 01:13:55.519823 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.523236 kubelet[2614]: E0707 01:13:55.523027 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.523236 kubelet[2614]: W0707 01:13:55.523072 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.523236 kubelet[2614]: E0707 01:13:55.523094 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.525917 kubelet[2614]: E0707 01:13:55.523739 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.525917 kubelet[2614]: W0707 01:13:55.523753 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.525917 kubelet[2614]: E0707 01:13:55.523779 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.525917 kubelet[2614]: E0707 01:13:55.525445 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.525917 kubelet[2614]: W0707 01:13:55.525463 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.525917 kubelet[2614]: E0707 01:13:55.525507 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:13:55.525917 kubelet[2614]: E0707 01:13:55.525745 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.525917 kubelet[2614]: W0707 01:13:55.525756 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.525917 kubelet[2614]: E0707 01:13:55.525767 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.526275 kubelet[2614]: E0707 01:13:55.525993 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.526275 kubelet[2614]: W0707 01:13:55.526004 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.526275 kubelet[2614]: E0707 01:13:55.526015 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.526275 kubelet[2614]: E0707 01:13:55.526227 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.526275 kubelet[2614]: W0707 01:13:55.526238 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.526275 kubelet[2614]: E0707 01:13:55.526250 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.527577 kubelet[2614]: E0707 01:13:55.527557 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.527826 kubelet[2614]: W0707 01:13:55.527807 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.528059 kubelet[2614]: E0707 01:13:55.528039 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.530146 kubelet[2614]: E0707 01:13:55.530039 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.530485 kubelet[2614]: W0707 01:13:55.530240 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.530485 kubelet[2614]: E0707 01:13:55.530264 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:13:55.530804 kubelet[2614]: E0707 01:13:55.530701 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.531180 kubelet[2614]: W0707 01:13:55.530995 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.531180 kubelet[2614]: E0707 01:13:55.531014 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.531653 kubelet[2614]: E0707 01:13:55.531550 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.532146 kubelet[2614]: W0707 01:13:55.532089 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.532195 kubelet[2614]: E0707 01:13:55.532147 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.532873 kubelet[2614]: E0707 01:13:55.532675 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.532873 kubelet[2614]: W0707 01:13:55.532692 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.532873 kubelet[2614]: E0707 01:13:55.532704 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.532873 kubelet[2614]: E0707 01:13:55.532849 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.533324 kubelet[2614]: W0707 01:13:55.532883 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.533324 kubelet[2614]: E0707 01:13:55.532895 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.533324 kubelet[2614]: E0707 01:13:55.533032 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.533324 kubelet[2614]: W0707 01:13:55.533041 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.533324 kubelet[2614]: E0707 01:13:55.533050 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:13:55.533324 kubelet[2614]: E0707 01:13:55.533186 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.533324 kubelet[2614]: W0707 01:13:55.533196 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.533324 kubelet[2614]: E0707 01:13:55.533204 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.533538 kubelet[2614]: E0707 01:13:55.533356 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.533538 kubelet[2614]: W0707 01:13:55.533365 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.533538 kubelet[2614]: E0707 01:13:55.533375 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.533538 kubelet[2614]: E0707 01:13:55.533513 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.533538 kubelet[2614]: W0707 01:13:55.533522 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.533538 kubelet[2614]: E0707 01:13:55.533531 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.533699 kubelet[2614]: E0707 01:13:55.533683 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.533699 kubelet[2614]: W0707 01:13:55.533692 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.533751 kubelet[2614]: E0707 01:13:55.533701 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:13:55.535276 kubelet[2614]: E0707 01:13:55.533848 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:13:55.535276 kubelet[2614]: W0707 01:13:55.533883 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:13:55.535276 kubelet[2614]: E0707 01:13:55.533894 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
[FlexVolume probe errors identical to those above, timestamps 01:13:55.534121 through 01:13:55.535175, interleave with the following volume-attach messages; duplicates omitted]
Jul 7 01:13:55.535276 kubelet[2614]: I0707 01:13:55.534163 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9b5c20f3-010e-455a-af88-ed3ca60a5bc4-registration-dir\") pod \"csi-node-driver-bz688\" (UID: \"9b5c20f3-010e-455a-af88-ed3ca60a5bc4\") " pod="calico-system/csi-node-driver-bz688"
Jul 7 01:13:55.535620 kubelet[2614]: I0707 01:13:55.534358 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9b5c20f3-010e-455a-af88-ed3ca60a5bc4-varrun\") pod \"csi-node-driver-bz688\" (UID: \"9b5c20f3-010e-455a-af88-ed3ca60a5bc4\") " pod="calico-system/csi-node-driver-bz688"
Jul 7 01:13:55.535620 kubelet[2614]: I0707 01:13:55.534556 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbqn4\" (UniqueName: \"kubernetes.io/projected/9b5c20f3-010e-455a-af88-ed3ca60a5bc4-kube-api-access-nbqn4\") pod \"csi-node-driver-bz688\" (UID: \"9b5c20f3-010e-455a-af88-ed3ca60a5bc4\") " pod="calico-system/csi-node-driver-bz688"
Jul 7 01:13:55.535852 kubelet[2614]: I0707 01:13:55.534748 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9b5c20f3-010e-455a-af88-ed3ca60a5bc4-socket-dir\") pod \"csi-node-driver-bz688\" (UID: \"9b5c20f3-010e-455a-af88-ed3ca60a5bc4\") " pod="calico-system/csi-node-driver-bz688"
Jul 7 01:13:55.535852 kubelet[2614]: I0707 01:13:55.534963 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b5c20f3-010e-455a-af88-ed3ca60a5bc4-kubelet-dir\") pod \"csi-node-driver-bz688\" (UID: \"9b5c20f3-010e-455a-af88-ed3ca60a5bc4\") " pod="calico-system/csi-node-driver-bz688"
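The hostPath volumes being attached for csi-node-driver-bz688 (registration-dir, socket-dir, kubelet-dir, varrun) are the mounts a CSI node plugin typically needs: the kubelet watches its plugin registration directory for UNIX sockets and registers whichever drivers appear there. A sketch of checking for that registration from the node; the path is an assumption, since these messages show only volume UniqueNames, and the default kubelet root /var/lib/kubelet is used here:

    #!/usr/bin/env python3
    # List plugin registration sockets the kubelet would discover.
    # Path is an assumption: the default kubelet root /var/lib/kubelet.
    import pathlib

    reg_dir = pathlib.Path("/var/lib/kubelet/plugins_registry")
    if reg_dir.is_dir():
        for sock in sorted(reg_dir.glob("*.sock")):
            print("registration socket:", sock.name)
    else:
        print("no plugin registry yet at", reg_dir)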
[FlexVolume probe errors repeat from 01:13:55.535327 through 01:13:55.536832; duplicates omitted]
Jul 7 01:13:55.563964 containerd[1462]: time="2025-07-07T01:13:55.563913515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g49jb,Uid:641426c8-2859-4e26-9846-ca898aca89df,Namespace:calico-system,Attempt:0,}"
Jul 7 01:13:55.623338 containerd[1462]: time="2025-07-07T01:13:55.623196493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 01:13:55.623798 containerd[1462]: time="2025-07-07T01:13:55.623261675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 01:13:55.623798 containerd[1462]: time="2025-07-07T01:13:55.623686773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 01:13:55.624640 containerd[1462]: time="2025-07-07T01:13:55.624572354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
[FlexVolume probe errors repeat from 01:13:55.636147 through 01:13:55.639803; duplicates omitted]
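The containerd lines above show the runc v2 shim loading its plugins as the calico-node-g49jb sandbox is created. To confirm the sandbox from the node, one could query the CRI endpoint with crictl; a sketch, assuming crictl is installed and configured for containerd's default socket:

    #!/usr/bin/env python3
    # Ask the CRI runtime (containerd) for the calico-node pod sandbox.
    # Assumes crictl is present and pointed at /run/containerd/containerd.sock.
    import subprocess

    out = subprocess.run(
        ["crictl", "pods", "--name", "calico-node-g49jb"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)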
[FlexVolume probe errors repeat from 01:13:55.640988 through 01:13:55.657573; duplicates omitted]
Jul 7 01:13:55.658059 systemd[1]: Started cri-containerd-1dbe9797a0d8059c0d703c2382b2a82c9909d0ca1e55b29fb34e06030ecac119.scope - libcontainer container 1dbe9797a0d8059c0d703c2382b2a82c9909d0ca1e55b29fb34e06030ecac119.
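systemd tracks each container in a transient scope unit named cri-containerd-<container-id>.scope, so the container's processes live in their own cgroup. A sketch of listing those processes directly from the cgroup filesystem; it assumes a unified cgroup-v2 hierarchy with the scope placed under system.slice, which may differ by configuration:

    #!/usr/bin/env python3
    # Print the PIDs tracked by a cri-containerd scope's cgroup.
    # Assumes cgroup v2 at /sys/fs/cgroup and the scope under system.slice.
    import pathlib

    cid = "1dbe9797a0d8059c0d703c2382b2a82c9909d0ca1e55b29fb34e06030ecac119"
    procs = pathlib.Path(
        f"/sys/fs/cgroup/system.slice/cri-containerd-{cid}.scope/cgroup.procs")
    for pid in procs.read_text().split():
        print("container pid:", pid)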
[FlexVolume probe errors repeat from 01:13:55.658310 through 01:13:55.688213; duplicates omitted]
Jul 7 01:13:55.767974 containerd[1462]: time="2025-07-07T01:13:55.767918398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g49jb,Uid:641426c8-2859-4e26-9846-ca898aca89df,Namespace:calico-system,Attempt:0,} returns sandbox id \"1dbe9797a0d8059c0d703c2382b2a82c9909d0ca1e55b29fb34e06030ecac119\""
Jul 7 01:13:57.618888 kubelet[2614]: E0707 01:13:57.617545 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bz688" podUID="9b5c20f3-010e-455a-af88-ed3ca60a5bc4"
Jul 7 01:13:57.723992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount41688580.mount: Deactivated successfully.
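The "cni plugin not initialized" error is expected ordering, not a separate fault: the kubelet reports NetworkReady=false until a CNI network configuration exists, and it is the calico-node container just started that will eventually install one. A sketch of the check an operator might run while waiting, assuming the conventional CNI configuration directory /etc/cni/net.d:

    #!/usr/bin/env python3
    # Report whether any CNI network config has been installed yet.
    # Assumes the conventional kubelet CNI conf dir /etc/cni/net.d.
    import pathlib

    cni_dir = pathlib.Path("/etc/cni/net.d")
    confs = sorted(p.name for p in cni_dir.glob("*.conf*")) if cni_dir.exists() else []

    if confs:
        print("CNI configured:", ", ".join(confs))
    else:
        print("no CNI config yet; NetworkReady will stay false")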
Jul 7 01:13:59.615279 kubelet[2614]: E0707 01:13:59.615217 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bz688" podUID="9b5c20f3-010e-455a-af88-ed3ca60a5bc4" Jul 7 01:13:59.630382 containerd[1462]: time="2025-07-07T01:13:59.630299295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:13:59.633648 containerd[1462]: time="2025-07-07T01:13:59.632740974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 7 01:13:59.634390 containerd[1462]: time="2025-07-07T01:13:59.634347037Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:13:59.638047 containerd[1462]: time="2025-07-07T01:13:59.638015919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:13:59.639647 containerd[1462]: time="2025-07-07T01:13:59.639603566Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 4.265565511s" Jul 7 01:13:59.639647 containerd[1462]: time="2025-07-07T01:13:59.639638622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 7 01:13:59.642535 containerd[1462]: time="2025-07-07T01:13:59.640798668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 01:13:59.672367 containerd[1462]: time="2025-07-07T01:13:59.672327906Z" level=info msg="CreateContainer within sandbox \"0eb5153306fc2b895913e79fcdb7a8e1007fab34a30135a217ca5a83fe9e8fcc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 01:13:59.694541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1145795532.mount: Deactivated successfully. Jul 7 01:13:59.696671 containerd[1462]: time="2025-07-07T01:13:59.696515173Z" level=info msg="CreateContainer within sandbox \"0eb5153306fc2b895913e79fcdb7a8e1007fab34a30135a217ca5a83fe9e8fcc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"34c392b5475071a885281be71caab8a5750a5966bd7db1cbc2730039b12a24ed\"" Jul 7 01:13:59.697409 containerd[1462]: time="2025-07-07T01:13:59.697332105Z" level=info msg="StartContainer for \"34c392b5475071a885281be71caab8a5750a5966bd7db1cbc2730039b12a24ed\"" Jul 7 01:13:59.750065 systemd[1]: Started cri-containerd-34c392b5475071a885281be71caab8a5750a5966bd7db1cbc2730039b12a24ed.scope - libcontainer container 34c392b5475071a885281be71caab8a5750a5966bd7db1cbc2730039b12a24ed. 
Jul 7 01:13:59.806789 containerd[1462]: time="2025-07-07T01:13:59.806738714Z" level=info msg="StartContainer for \"34c392b5475071a885281be71caab8a5750a5966bd7db1cbc2730039b12a24ed\" returns successfully"
Jul 7 01:14:00.166356 kubelet[2614]: I0707 01:14:00.166247 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75c9d4955f-4ncxn" podStartSLOduration=1.8967674639999998 podStartE2EDuration="6.166229193s" podCreationTimestamp="2025-07-07 01:13:54 +0000 UTC" firstStartedPulling="2025-07-07 01:13:55.371177179 +0000 UTC m=+23.988917618" lastFinishedPulling="2025-07-07 01:13:59.640638918 +0000 UTC m=+28.258379347" observedRunningTime="2025-07-07 01:14:00.165648264 +0000 UTC m=+28.783388723" watchObservedRunningTime="2025-07-07 01:14:00.166229193 +0000 UTC m=+28.783969632"
[FlexVolume probe errors repeat from 01:14:00.174098 through 01:14:00.174977; duplicates omitted]
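The startup-latency numbers in this message are internally consistent: podStartE2EDuration is the watch-observed running time minus podCreationTimestamp, and podStartSLOduration is that same interval with the image-pull window (firstStartedPulling through lastFinishedPulling) excluded. A quick recomputation from the logged timestamps, trimmed to microseconds, so the results match to within rounding:

    #!/usr/bin/env python3
    # Recompute the kubelet's pod startup latencies from the logged timestamps.
    from datetime import datetime, timezone

    def ts(s):
        # log timestamps trimmed to microseconds for strptime
        return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

    created = datetime(2025, 7, 7, 1, 13, 54, tzinfo=timezone.utc)
    running = ts("2025-07-07 01:14:00.166229")      # watchObservedRunningTime
    pull_start = ts("2025-07-07 01:13:55.371177")   # firstStartedPulling
    pull_end = ts("2025-07-07 01:13:59.640638")     # lastFinishedPulling

    e2e = (running - created).total_seconds()
    slo = e2e - (pull_end - pull_start).total_seconds()
    print(f"podStartE2EDuration ~ {e2e:.6f}s")  # logged: 6.166229193s
    print(f"podStartSLOduration ~ {slo:.6f}s")  # logged: 1.8967674639999998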
[FlexVolume probe errors repeat from 01:14:00.175222 through 01:14:00.265898; duplicates omitted]
Jul 7 01:14:01.084968 kubelet[2614]: I0707 01:14:01.083535 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Error: unexpected end of JSON input" Jul 7 01:14:01.087519 kubelet[2614]: E0707 01:14:01.087373 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.087519 kubelet[2614]: W0707 01:14:01.087399 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.087519 kubelet[2614]: E0707 01:14:01.087424 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.088913 kubelet[2614]: E0707 01:14:01.088553 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.088913 kubelet[2614]: W0707 01:14:01.088586 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.088913 kubelet[2614]: E0707 01:14:01.088621 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.089783 kubelet[2614]: E0707 01:14:01.089255 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.089783 kubelet[2614]: W0707 01:14:01.089283 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.089783 kubelet[2614]: E0707 01:14:01.089309 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.090442 kubelet[2614]: E0707 01:14:01.090190 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.090442 kubelet[2614]: W0707 01:14:01.090222 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.090442 kubelet[2614]: E0707 01:14:01.090249 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.091178 kubelet[2614]: E0707 01:14:01.090926 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.091178 kubelet[2614]: W0707 01:14:01.090960 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.091178 kubelet[2614]: E0707 01:14:01.090985 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:14:01.091627 kubelet[2614]: E0707 01:14:01.091594 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.092125 kubelet[2614]: W0707 01:14:01.091813 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.092125 kubelet[2614]: E0707 01:14:01.091856 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.092473 kubelet[2614]: E0707 01:14:01.092441 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.092639 kubelet[2614]: W0707 01:14:01.092611 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.092814 kubelet[2614]: E0707 01:14:01.092783 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.093749 kubelet[2614]: E0707 01:14:01.093435 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.093749 kubelet[2614]: W0707 01:14:01.093500 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.093749 kubelet[2614]: E0707 01:14:01.093527 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.094984 kubelet[2614]: E0707 01:14:01.094680 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.094984 kubelet[2614]: W0707 01:14:01.094713 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.094984 kubelet[2614]: E0707 01:14:01.094739 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.095480 kubelet[2614]: E0707 01:14:01.095446 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.095647 kubelet[2614]: W0707 01:14:01.095618 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.096088 kubelet[2614]: E0707 01:14:01.095827 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:14:01.096420 kubelet[2614]: E0707 01:14:01.096365 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.096420 kubelet[2614]: W0707 01:14:01.096402 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.096640 kubelet[2614]: E0707 01:14:01.096429 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.096851 kubelet[2614]: E0707 01:14:01.096814 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.096851 kubelet[2614]: W0707 01:14:01.096845 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.097123 kubelet[2614]: E0707 01:14:01.096919 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.097361 kubelet[2614]: E0707 01:14:01.097302 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.097361 kubelet[2614]: W0707 01:14:01.097336 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.097361 kubelet[2614]: E0707 01:14:01.097359 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.097777 kubelet[2614]: E0707 01:14:01.097757 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.097960 kubelet[2614]: W0707 01:14:01.097781 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.097960 kubelet[2614]: E0707 01:14:01.097804 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.169214 kubelet[2614]: E0707 01:14:01.169041 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.169214 kubelet[2614]: W0707 01:14:01.169089 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.169214 kubelet[2614]: E0707 01:14:01.169130 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:14:01.170790 kubelet[2614]: E0707 01:14:01.169768 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.170790 kubelet[2614]: W0707 01:14:01.169795 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.170790 kubelet[2614]: E0707 01:14:01.169820 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.170790 kubelet[2614]: E0707 01:14:01.170344 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.170790 kubelet[2614]: W0707 01:14:01.170370 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.170790 kubelet[2614]: E0707 01:14:01.170396 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.173797 kubelet[2614]: E0707 01:14:01.170844 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.173797 kubelet[2614]: W0707 01:14:01.170908 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.173797 kubelet[2614]: E0707 01:14:01.170935 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.173797 kubelet[2614]: E0707 01:14:01.171342 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.173797 kubelet[2614]: W0707 01:14:01.171367 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.173797 kubelet[2614]: E0707 01:14:01.171390 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.173797 kubelet[2614]: E0707 01:14:01.171851 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.173797 kubelet[2614]: W0707 01:14:01.171926 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.173797 kubelet[2614]: E0707 01:14:01.171953 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:14:01.173797 kubelet[2614]: E0707 01:14:01.172401 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.174716 kubelet[2614]: W0707 01:14:01.172427 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.174716 kubelet[2614]: E0707 01:14:01.172454 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.174716 kubelet[2614]: E0707 01:14:01.172822 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.174716 kubelet[2614]: W0707 01:14:01.172845 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.174716 kubelet[2614]: E0707 01:14:01.172928 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.174716 kubelet[2614]: E0707 01:14:01.173335 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.174716 kubelet[2614]: W0707 01:14:01.173357 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.174716 kubelet[2614]: E0707 01:14:01.173380 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.182363 kubelet[2614]: E0707 01:14:01.181997 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.182363 kubelet[2614]: W0707 01:14:01.182079 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.182363 kubelet[2614]: E0707 01:14:01.182144 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.185060 kubelet[2614]: E0707 01:14:01.183073 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.185060 kubelet[2614]: W0707 01:14:01.183102 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.185060 kubelet[2614]: E0707 01:14:01.183145 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:14:01.186965 kubelet[2614]: E0707 01:14:01.186354 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.186965 kubelet[2614]: W0707 01:14:01.186409 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.186965 kubelet[2614]: E0707 01:14:01.186440 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.197804 kubelet[2614]: E0707 01:14:01.197123 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.197804 kubelet[2614]: W0707 01:14:01.197197 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.197804 kubelet[2614]: E0707 01:14:01.197240 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.199225 kubelet[2614]: E0707 01:14:01.198971 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.201244 kubelet[2614]: W0707 01:14:01.200084 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.201244 kubelet[2614]: E0707 01:14:01.200152 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.203648 kubelet[2614]: E0707 01:14:01.203163 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.205140 kubelet[2614]: W0707 01:14:01.203838 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.205140 kubelet[2614]: E0707 01:14:01.203964 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.209327 kubelet[2614]: E0707 01:14:01.207767 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.209327 kubelet[2614]: W0707 01:14:01.207828 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.209327 kubelet[2614]: E0707 01:14:01.207981 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:14:01.214082 kubelet[2614]: E0707 01:14:01.213693 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.214082 kubelet[2614]: W0707 01:14:01.213751 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.214082 kubelet[2614]: E0707 01:14:01.213786 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.214596 kubelet[2614]: E0707 01:14:01.214313 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:14:01.214596 kubelet[2614]: W0707 01:14:01.214339 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:14:01.214596 kubelet[2614]: E0707 01:14:01.214364 2614 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:14:01.617056 kubelet[2614]: E0707 01:14:01.616974 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bz688" podUID="9b5c20f3-010e-455a-af88-ed3ca60a5bc4" Jul 7 01:14:01.773952 containerd[1462]: time="2025-07-07T01:14:01.773196570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:01.775305 containerd[1462]: time="2025-07-07T01:14:01.775270380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 7 01:14:01.776790 containerd[1462]: time="2025-07-07T01:14:01.776765164Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:01.779794 containerd[1462]: time="2025-07-07T01:14:01.779733551Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:01.780805 containerd[1462]: time="2025-07-07T01:14:01.780679014Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 2.139846202s" Jul 7 01:14:01.780805 containerd[1462]: time="2025-07-07T01:14:01.780725652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 7 01:14:01.791404 containerd[1462]: 
time="2025-07-07T01:14:01.791343337Z" level=info msg="CreateContainer within sandbox \"1dbe9797a0d8059c0d703c2382b2a82c9909d0ca1e55b29fb34e06030ecac119\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 01:14:01.821968 containerd[1462]: time="2025-07-07T01:14:01.821915219Z" level=info msg="CreateContainer within sandbox \"1dbe9797a0d8059c0d703c2382b2a82c9909d0ca1e55b29fb34e06030ecac119\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"38966d773289a403fbf6191ce7c86eb070dd86febab7a0e04eabe1bad4f95365\"" Jul 7 01:14:01.824093 containerd[1462]: time="2025-07-07T01:14:01.824048570Z" level=info msg="StartContainer for \"38966d773289a403fbf6191ce7c86eb070dd86febab7a0e04eabe1bad4f95365\"" Jul 7 01:14:01.934231 systemd[1]: Started cri-containerd-38966d773289a403fbf6191ce7c86eb070dd86febab7a0e04eabe1bad4f95365.scope - libcontainer container 38966d773289a403fbf6191ce7c86eb070dd86febab7a0e04eabe1bad4f95365. Jul 7 01:14:01.974517 containerd[1462]: time="2025-07-07T01:14:01.974127023Z" level=info msg="StartContainer for \"38966d773289a403fbf6191ce7c86eb070dd86febab7a0e04eabe1bad4f95365\" returns successfully" Jul 7 01:14:01.988772 systemd[1]: cri-containerd-38966d773289a403fbf6191ce7c86eb070dd86febab7a0e04eabe1bad4f95365.scope: Deactivated successfully. Jul 7 01:14:02.022706 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38966d773289a403fbf6191ce7c86eb070dd86febab7a0e04eabe1bad4f95365-rootfs.mount: Deactivated successfully. Jul 7 01:14:02.765832 containerd[1462]: time="2025-07-07T01:14:02.765158926Z" level=info msg="shim disconnected" id=38966d773289a403fbf6191ce7c86eb070dd86febab7a0e04eabe1bad4f95365 namespace=k8s.io Jul 7 01:14:02.765832 containerd[1462]: time="2025-07-07T01:14:02.765420987Z" level=warning msg="cleaning up after shim disconnected" id=38966d773289a403fbf6191ce7c86eb070dd86febab7a0e04eabe1bad4f95365 namespace=k8s.io Jul 7 01:14:02.765832 containerd[1462]: time="2025-07-07T01:14:02.765459499Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 01:14:02.832964 containerd[1462]: time="2025-07-07T01:14:02.831443361Z" level=warning msg="cleanup warnings time=\"2025-07-07T01:14:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 7 01:14:03.108649 containerd[1462]: time="2025-07-07T01:14:03.108229669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 01:14:03.618893 kubelet[2614]: E0707 01:14:03.618307 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bz688" podUID="9b5c20f3-010e-455a-af88-ed3ca60a5bc4" Jul 7 01:14:05.571072 kubelet[2614]: I0707 01:14:05.569983 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 01:14:05.615919 kubelet[2614]: E0707 01:14:05.615501 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bz688" podUID="9b5c20f3-010e-455a-af88-ed3ca60a5bc4" Jul 7 01:14:07.616311 kubelet[2614]: E0707 01:14:07.614778 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: 
container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bz688" podUID="9b5c20f3-010e-455a-af88-ed3ca60a5bc4" Jul 7 01:14:09.498096 containerd[1462]: time="2025-07-07T01:14:09.497985935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:09.502564 containerd[1462]: time="2025-07-07T01:14:09.501469699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 7 01:14:09.503931 containerd[1462]: time="2025-07-07T01:14:09.503822993Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:09.515034 containerd[1462]: time="2025-07-07T01:14:09.514969169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:09.517001 containerd[1462]: time="2025-07-07T01:14:09.516949192Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 6.408635667s" Jul 7 01:14:09.517001 containerd[1462]: time="2025-07-07T01:14:09.516986202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 7 01:14:09.532137 containerd[1462]: time="2025-07-07T01:14:09.532056387Z" level=info msg="CreateContainer within sandbox \"1dbe9797a0d8059c0d703c2382b2a82c9909d0ca1e55b29fb34e06030ecac119\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 01:14:09.566761 containerd[1462]: time="2025-07-07T01:14:09.566487626Z" level=info msg="CreateContainer within sandbox \"1dbe9797a0d8059c0d703c2382b2a82c9909d0ca1e55b29fb34e06030ecac119\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4880df4f69b7ca61fc7732584984f0560f6a6ec41dcf56b3b7a48ed5ef77e33d\"" Jul 7 01:14:09.569829 containerd[1462]: time="2025-07-07T01:14:09.569119993Z" level=info msg="StartContainer for \"4880df4f69b7ca61fc7732584984f0560f6a6ec41dcf56b3b7a48ed5ef77e33d\"" Jul 7 01:14:09.616334 kubelet[2614]: E0707 01:14:09.616282 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bz688" podUID="9b5c20f3-010e-455a-af88-ed3ca60a5bc4" Jul 7 01:14:09.648044 systemd[1]: run-containerd-runc-k8s.io-4880df4f69b7ca61fc7732584984f0560f6a6ec41dcf56b3b7a48ed5ef77e33d-runc.ZRmtyQ.mount: Deactivated successfully. Jul 7 01:14:09.660052 systemd[1]: Started cri-containerd-4880df4f69b7ca61fc7732584984f0560f6a6ec41dcf56b3b7a48ed5ef77e33d.scope - libcontainer container 4880df4f69b7ca61fc7732584984f0560f6a6ec41dcf56b3b7a48ed5ef77e33d. 
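[Editor's note: the nodeagent~uds storm condensed above is kubelet's FlexVolume prober re-scanning /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ while Calico's driver binary is not yet in place; the pod2daemon-flexvol image pulled at 01:14:01 and the flexvol-driver container started right after are what install it. Each probe execs the driver with "init", gets no stdout, and kubelet's JSON decode of an empty string fails with "unexpected end of JSON input". Below is a minimal sketch, in Go, of the init handshake a FlexVolume driver is expected to answer; the DriverStatus field set follows the documented FlexVolume convention (status/message/capabilities) and is illustrative, not the actual Calico driver.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// DriverStatus is the JSON shape a FlexVolume driver must print on stdout
// after every call; printing nothing at all is what produces
// driver-call.go's "unexpected end of JSON input".
type DriverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"` // reported in response to "init"
}

func main() {
	cmd := ""
	if len(os.Args) > 1 {
		cmd = os.Args[1]
	}
	switch cmd {
	case "init":
		out, _ := json.Marshal(DriverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		fmt.Println(string(out))
	default:
		// Unimplemented calls must still answer with valid JSON.
		out, _ := json.Marshal(DriverStatus{Status: "Not supported"})
		fmt.Println(string(out))
		os.Exit(1)
	}
}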
Jul 7 01:14:09.729871 containerd[1462]: time="2025-07-07T01:14:09.718932679Z" level=info msg="StartContainer for \"4880df4f69b7ca61fc7732584984f0560f6a6ec41dcf56b3b7a48ed5ef77e33d\" returns successfully" Jul 7 01:14:11.617946 kubelet[2614]: E0707 01:14:11.617124 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bz688" podUID="9b5c20f3-010e-455a-af88-ed3ca60a5bc4" Jul 7 01:14:12.400340 systemd[1]: cri-containerd-4880df4f69b7ca61fc7732584984f0560f6a6ec41dcf56b3b7a48ed5ef77e33d.scope: Deactivated successfully. Jul 7 01:14:12.402371 systemd[1]: cri-containerd-4880df4f69b7ca61fc7732584984f0560f6a6ec41dcf56b3b7a48ed5ef77e33d.scope: Consumed 2.110s CPU time. Jul 7 01:14:12.421500 kubelet[2614]: I0707 01:14:12.421360 2614 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 01:14:12.507239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4880df4f69b7ca61fc7732584984f0560f6a6ec41dcf56b3b7a48ed5ef77e33d-rootfs.mount: Deactivated successfully. Jul 7 01:14:13.238727 kubelet[2614]: I0707 01:14:13.238603 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aeb2e7b-c332-4d91-8ab8-ad0544c36686-config-volume\") pod \"coredns-674b8bbfcf-79jhw\" (UID: \"7aeb2e7b-c332-4d91-8ab8-ad0544c36686\") " pod="kube-system/coredns-674b8bbfcf-79jhw" Jul 7 01:14:13.239646 kubelet[2614]: I0707 01:14:13.238739 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhxr2\" (UniqueName: \"kubernetes.io/projected/7aeb2e7b-c332-4d91-8ab8-ad0544c36686-kube-api-access-zhxr2\") pod \"coredns-674b8bbfcf-79jhw\" (UID: \"7aeb2e7b-c332-4d91-8ab8-ad0544c36686\") " pod="kube-system/coredns-674b8bbfcf-79jhw" Jul 7 01:14:13.342038 kubelet[2614]: E0707 01:14:13.340383 2614 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered Jul 7 01:14:13.342038 kubelet[2614]: E0707 01:14:13.341115 2614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7aeb2e7b-c332-4d91-8ab8-ad0544c36686-config-volume podName:7aeb2e7b-c332-4d91-8ab8-ad0544c36686 nodeName:}" failed. No retries permitted until 2025-07-07 01:14:13.840545335 +0000 UTC m=+42.458285814 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7aeb2e7b-c332-4d91-8ab8-ad0544c36686-config-volume") pod "coredns-674b8bbfcf-79jhw" (UID: "7aeb2e7b-c332-4d91-8ab8-ad0544c36686") : object "kube-system"/"coredns" not registered Jul 7 01:14:13.351632 containerd[1462]: time="2025-07-07T01:14:13.351490835Z" level=info msg="shim disconnected" id=4880df4f69b7ca61fc7732584984f0560f6a6ec41dcf56b3b7a48ed5ef77e33d namespace=k8s.io Jul 7 01:14:13.351632 containerd[1462]: time="2025-07-07T01:14:13.351625056Z" level=warning msg="cleaning up after shim disconnected" id=4880df4f69b7ca61fc7732584984f0560f6a6ec41dcf56b3b7a48ed5ef77e33d namespace=k8s.io Jul 7 01:14:13.357536 containerd[1462]: time="2025-07-07T01:14:13.351651476Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 01:14:13.431349 systemd[1]: Created slice kubepods-burstable-pod7aeb2e7b_c332_4d91_8ab8_ad0544c36686.slice - libcontainer container kubepods-burstable-pod7aeb2e7b_c332_4d91_8ab8_ad0544c36686.slice. Jul 7 01:14:13.457964 systemd[1]: Created slice kubepods-besteffort-pod2314dc80_e996_40d7_ac0d_8b41b48a019a.slice - libcontainer container kubepods-besteffort-pod2314dc80_e996_40d7_ac0d_8b41b48a019a.slice. Jul 7 01:14:13.469232 systemd[1]: Created slice kubepods-besteffort-pod9b5c20f3_010e_455a_af88_ed3ca60a5bc4.slice - libcontainer container kubepods-besteffort-pod9b5c20f3_010e_455a_af88_ed3ca60a5bc4.slice. Jul 7 01:14:13.478699 containerd[1462]: time="2025-07-07T01:14:13.478620665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bz688,Uid:9b5c20f3-010e-455a-af88-ed3ca60a5bc4,Namespace:calico-system,Attempt:0,}" Jul 7 01:14:13.488673 systemd[1]: Created slice kubepods-besteffort-podb4af0965_443f_43ce_a1ac_716ddc78ed1f.slice - libcontainer container kubepods-besteffort-podb4af0965_443f_43ce_a1ac_716ddc78ed1f.slice. Jul 7 01:14:13.500548 systemd[1]: Created slice kubepods-besteffort-poddbefde5d_9c7b_4c5e_8e53_28982fa26375.slice - libcontainer container kubepods-besteffort-poddbefde5d_9c7b_4c5e_8e53_28982fa26375.slice. Jul 7 01:14:13.509411 systemd[1]: Created slice kubepods-besteffort-pod01d5654a_06ca_4bae_ada4_ae75fded948d.slice - libcontainer container kubepods-besteffort-pod01d5654a_06ca_4bae_ada4_ae75fded948d.slice. 
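[Editor's note: the nestedpendingoperations.go record above, "No retries permitted until ... (durationBeforeRetry 500ms)", is kubelet's volume-manager backoff: the coredns ConfigMap is not yet visible to the node's informer, so MountVolume.SetUp fails and is retried with an increasing delay. A small illustrative sketch of such a doubling schedule follows; the 500ms initial delay is taken from the log, while the doubling factor and the roughly two-minute ceiling are assumptions of this sketch, not values read from kubelet here.

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond // matches the logged durationBeforeRetry
	maxDelay := 2 * time.Minute     // assumed ceiling, for illustration only
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d failed; next retry permitted after %v\n", attempt, delay)
		delay *= 2 // back off harder each time the operation keeps failing
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}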
Jul 7 01:14:13.542639 kubelet[2614]: I0707 01:14:13.541683 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01d5654a-06ca-4bae-ada4-ae75fded948d-tigera-ca-bundle\") pod \"calico-kube-controllers-67d8445464-5nr6m\" (UID: \"01d5654a-06ca-4bae-ada4-ae75fded948d\") " pod="calico-system/calico-kube-controllers-67d8445464-5nr6m" Jul 7 01:14:13.542639 kubelet[2614]: I0707 01:14:13.541739 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgcl7\" (UniqueName: \"kubernetes.io/projected/01d5654a-06ca-4bae-ada4-ae75fded948d-kube-api-access-xgcl7\") pod \"calico-kube-controllers-67d8445464-5nr6m\" (UID: \"01d5654a-06ca-4bae-ada4-ae75fded948d\") " pod="calico-system/calico-kube-controllers-67d8445464-5nr6m" Jul 7 01:14:13.542639 kubelet[2614]: I0707 01:14:13.541807 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4af0965-443f-43ce-a1ac-716ddc78ed1f-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-fn6sw\" (UID: \"b4af0965-443f-43ce-a1ac-716ddc78ed1f\") " pod="calico-system/goldmane-768f4c5c69-fn6sw" Jul 7 01:14:13.542639 kubelet[2614]: I0707 01:14:13.541833 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b4af0965-443f-43ce-a1ac-716ddc78ed1f-goldmane-key-pair\") pod \"goldmane-768f4c5c69-fn6sw\" (UID: \"b4af0965-443f-43ce-a1ac-716ddc78ed1f\") " pod="calico-system/goldmane-768f4c5c69-fn6sw" Jul 7 01:14:13.542639 kubelet[2614]: I0707 01:14:13.541897 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksscg\" (UniqueName: \"kubernetes.io/projected/b4af0965-443f-43ce-a1ac-716ddc78ed1f-kube-api-access-ksscg\") pod \"goldmane-768f4c5c69-fn6sw\" (UID: \"b4af0965-443f-43ce-a1ac-716ddc78ed1f\") " pod="calico-system/goldmane-768f4c5c69-fn6sw" Jul 7 01:14:13.543053 kubelet[2614]: I0707 01:14:13.541946 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr5f5\" (UniqueName: \"kubernetes.io/projected/2314dc80-e996-40d7-ac0d-8b41b48a019a-kube-api-access-cr5f5\") pod \"calico-apiserver-797f4f9b9c-srqgn\" (UID: \"2314dc80-e996-40d7-ac0d-8b41b48a019a\") " pod="calico-apiserver/calico-apiserver-797f4f9b9c-srqgn" Jul 7 01:14:13.543053 kubelet[2614]: I0707 01:14:13.541994 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4e752a49-252a-4f7c-8db2-273076e42d2e-whisker-backend-key-pair\") pod \"whisker-7958f96868-f9lg9\" (UID: \"4e752a49-252a-4f7c-8db2-273076e42d2e\") " pod="calico-system/whisker-7958f96868-f9lg9" Jul 7 01:14:13.543053 kubelet[2614]: I0707 01:14:13.542022 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e752a49-252a-4f7c-8db2-273076e42d2e-whisker-ca-bundle\") pod \"whisker-7958f96868-f9lg9\" (UID: \"4e752a49-252a-4f7c-8db2-273076e42d2e\") " pod="calico-system/whisker-7958f96868-f9lg9" Jul 7 01:14:13.543053 kubelet[2614]: I0707 01:14:13.542085 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b4af0965-443f-43ce-a1ac-716ddc78ed1f-config\") pod \"goldmane-768f4c5c69-fn6sw\" (UID: \"b4af0965-443f-43ce-a1ac-716ddc78ed1f\") " pod="calico-system/goldmane-768f4c5c69-fn6sw" Jul 7 01:14:13.543053 kubelet[2614]: I0707 01:14:13.542112 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dbefde5d-9c7b-4c5e-8e53-28982fa26375-calico-apiserver-certs\") pod \"calico-apiserver-797f4f9b9c-szk6r\" (UID: \"dbefde5d-9c7b-4c5e-8e53-28982fa26375\") " pod="calico-apiserver/calico-apiserver-797f4f9b9c-szk6r" Jul 7 01:14:13.543208 kubelet[2614]: I0707 01:14:13.542167 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2314dc80-e996-40d7-ac0d-8b41b48a019a-calico-apiserver-certs\") pod \"calico-apiserver-797f4f9b9c-srqgn\" (UID: \"2314dc80-e996-40d7-ac0d-8b41b48a019a\") " pod="calico-apiserver/calico-apiserver-797f4f9b9c-srqgn" Jul 7 01:14:13.543208 kubelet[2614]: I0707 01:14:13.542253 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59gvr\" (UniqueName: \"kubernetes.io/projected/dbefde5d-9c7b-4c5e-8e53-28982fa26375-kube-api-access-59gvr\") pod \"calico-apiserver-797f4f9b9c-szk6r\" (UID: \"dbefde5d-9c7b-4c5e-8e53-28982fa26375\") " pod="calico-apiserver/calico-apiserver-797f4f9b9c-szk6r" Jul 7 01:14:13.543208 kubelet[2614]: I0707 01:14:13.542281 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdjlp\" (UniqueName: \"kubernetes.io/projected/4e752a49-252a-4f7c-8db2-273076e42d2e-kube-api-access-xdjlp\") pod \"whisker-7958f96868-f9lg9\" (UID: \"4e752a49-252a-4f7c-8db2-273076e42d2e\") " pod="calico-system/whisker-7958f96868-f9lg9" Jul 7 01:14:13.554023 systemd[1]: Created slice kubepods-besteffort-pod4e752a49_252a_4f7c_8db2_273076e42d2e.slice - libcontainer container kubepods-besteffort-pod4e752a49_252a_4f7c_8db2_273076e42d2e.slice. Jul 7 01:14:13.569077 systemd[1]: Created slice kubepods-burstable-pod444d0803_585d_498b_a49e_969f9bbea4fc.slice - libcontainer container kubepods-burstable-pod444d0803_585d_498b_a49e_969f9bbea4fc.slice. 
Jul 7 01:14:13.643907 kubelet[2614]: I0707 01:14:13.642676 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/444d0803-585d-498b-a49e-969f9bbea4fc-config-volume\") pod \"coredns-674b8bbfcf-br5th\" (UID: \"444d0803-585d-498b-a49e-969f9bbea4fc\") " pod="kube-system/coredns-674b8bbfcf-br5th" Jul 7 01:14:13.643907 kubelet[2614]: I0707 01:14:13.643539 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgszs\" (UniqueName: \"kubernetes.io/projected/444d0803-585d-498b-a49e-969f9bbea4fc-kube-api-access-jgszs\") pod \"coredns-674b8bbfcf-br5th\" (UID: \"444d0803-585d-498b-a49e-969f9bbea4fc\") " pod="kube-system/coredns-674b8bbfcf-br5th" Jul 7 01:14:13.709881 containerd[1462]: time="2025-07-07T01:14:13.709353278Z" level=error msg="Failed to destroy network for sandbox \"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:13.710880 containerd[1462]: time="2025-07-07T01:14:13.710527140Z" level=error msg="encountered an error cleaning up failed sandbox \"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:13.710880 containerd[1462]: time="2025-07-07T01:14:13.710622609Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bz688,Uid:9b5c20f3-010e-455a-af88-ed3ca60a5bc4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:13.711885 kubelet[2614]: E0707 01:14:13.711244 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:13.711885 kubelet[2614]: E0707 01:14:13.711382 2614 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bz688" Jul 7 01:14:13.711885 kubelet[2614]: E0707 01:14:13.711430 2614 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-bz688" Jul 7 01:14:13.712035 kubelet[2614]: E0707 01:14:13.711524 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bz688_calico-system(9b5c20f3-010e-455a-af88-ed3ca60a5bc4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bz688_calico-system(9b5c20f3-010e-455a-af88-ed3ca60a5bc4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bz688" podUID="9b5c20f3-010e-455a-af88-ed3ca60a5bc4" Jul 7 01:14:13.766685 containerd[1462]: time="2025-07-07T01:14:13.765679000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797f4f9b9c-srqgn,Uid:2314dc80-e996-40d7-ac0d-8b41b48a019a,Namespace:calico-apiserver,Attempt:0,}" Jul 7 01:14:13.799442 containerd[1462]: time="2025-07-07T01:14:13.799390428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-fn6sw,Uid:b4af0965-443f-43ce-a1ac-716ddc78ed1f,Namespace:calico-system,Attempt:0,}" Jul 7 01:14:13.806431 containerd[1462]: time="2025-07-07T01:14:13.806355130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797f4f9b9c-szk6r,Uid:dbefde5d-9c7b-4c5e-8e53-28982fa26375,Namespace:calico-apiserver,Attempt:0,}" Jul 7 01:14:13.846917 containerd[1462]: time="2025-07-07T01:14:13.846569105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d8445464-5nr6m,Uid:01d5654a-06ca-4bae-ada4-ae75fded948d,Namespace:calico-system,Attempt:0,}" Jul 7 01:14:13.866075 containerd[1462]: time="2025-07-07T01:14:13.866032399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7958f96868-f9lg9,Uid:4e752a49-252a-4f7c-8db2-273076e42d2e,Namespace:calico-system,Attempt:0,}" Jul 7 01:14:13.873724 containerd[1462]: time="2025-07-07T01:14:13.873669584Z" level=error msg="Failed to destroy network for sandbox \"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:13.875645 containerd[1462]: time="2025-07-07T01:14:13.874904660Z" level=error msg="encountered an error cleaning up failed sandbox \"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:13.875645 containerd[1462]: time="2025-07-07T01:14:13.874975954Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797f4f9b9c-srqgn,Uid:2314dc80-e996-40d7-ac0d-8b41b48a019a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:13.875849 kubelet[2614]: E0707 01:14:13.875273 2614 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:13.875849 kubelet[2614]: E0707 01:14:13.875316 2614 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-797f4f9b9c-srqgn" Jul 7 01:14:13.875849 kubelet[2614]: E0707 01:14:13.875338 2614 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-797f4f9b9c-srqgn" Jul 7 01:14:13.875994 kubelet[2614]: E0707 01:14:13.875393 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-797f4f9b9c-srqgn_calico-apiserver(2314dc80-e996-40d7-ac0d-8b41b48a019a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-797f4f9b9c-srqgn_calico-apiserver(2314dc80-e996-40d7-ac0d-8b41b48a019a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-797f4f9b9c-srqgn" podUID="2314dc80-e996-40d7-ac0d-8b41b48a019a" Jul 7 01:14:13.889586 containerd[1462]: time="2025-07-07T01:14:13.889505575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-br5th,Uid:444d0803-585d-498b-a49e-969f9bbea4fc,Namespace:kube-system,Attempt:0,}" Jul 7 01:14:14.030923 containerd[1462]: time="2025-07-07T01:14:14.030734589Z" level=error msg="Failed to destroy network for sandbox \"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.031266 containerd[1462]: time="2025-07-07T01:14:14.031090627Z" level=error msg="encountered an error cleaning up failed sandbox \"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.031266 containerd[1462]: time="2025-07-07T01:14:14.031211473Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-fn6sw,Uid:b4af0965-443f-43ce-a1ac-716ddc78ed1f,Namespace:calico-system,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.032056 kubelet[2614]: E0707 01:14:14.031472 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.032056 kubelet[2614]: E0707 01:14:14.031547 2614 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-fn6sw" Jul 7 01:14:14.032056 kubelet[2614]: E0707 01:14:14.031573 2614 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-fn6sw" Jul 7 01:14:14.032226 kubelet[2614]: E0707 01:14:14.031628 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-fn6sw_calico-system(b4af0965-443f-43ce-a1ac-716ddc78ed1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-fn6sw_calico-system(b4af0965-443f-43ce-a1ac-716ddc78ed1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-fn6sw" podUID="b4af0965-443f-43ce-a1ac-716ddc78ed1f" Jul 7 01:14:14.041369 containerd[1462]: time="2025-07-07T01:14:14.041118444Z" level=error msg="Failed to destroy network for sandbox \"c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.042184 containerd[1462]: time="2025-07-07T01:14:14.041987844Z" level=error msg="encountered an error cleaning up failed sandbox \"c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.042439 containerd[1462]: time="2025-07-07T01:14:14.042164386Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-797f4f9b9c-szk6r,Uid:dbefde5d-9c7b-4c5e-8e53-28982fa26375,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.042972 kubelet[2614]: E0707 01:14:14.042921 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.043055 kubelet[2614]: E0707 01:14:14.043014 2614 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-797f4f9b9c-szk6r" Jul 7 01:14:14.043091 kubelet[2614]: E0707 01:14:14.043077 2614 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-797f4f9b9c-szk6r" Jul 7 01:14:14.043393 kubelet[2614]: E0707 01:14:14.043176 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-797f4f9b9c-szk6r_calico-apiserver(dbefde5d-9c7b-4c5e-8e53-28982fa26375)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-797f4f9b9c-szk6r_calico-apiserver(dbefde5d-9c7b-4c5e-8e53-28982fa26375)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-797f4f9b9c-szk6r" podUID="dbefde5d-9c7b-4c5e-8e53-28982fa26375" Jul 7 01:14:14.054410 containerd[1462]: time="2025-07-07T01:14:14.054278436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-79jhw,Uid:7aeb2e7b-c332-4d91-8ab8-ad0544c36686,Namespace:kube-system,Attempt:0,}" Jul 7 01:14:14.078168 containerd[1462]: time="2025-07-07T01:14:14.077978547Z" level=error msg="Failed to destroy network for sandbox \"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.078707 containerd[1462]: time="2025-07-07T01:14:14.078501728Z" level=error msg="encountered an error cleaning up failed sandbox 
\"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.078707 containerd[1462]: time="2025-07-07T01:14:14.078567101Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d8445464-5nr6m,Uid:01d5654a-06ca-4bae-ada4-ae75fded948d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.079135 kubelet[2614]: E0707 01:14:14.078850 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.081091 kubelet[2614]: E0707 01:14:14.079272 2614 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67d8445464-5nr6m" Jul 7 01:14:14.081091 kubelet[2614]: E0707 01:14:14.079306 2614 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67d8445464-5nr6m" Jul 7 01:14:14.081091 kubelet[2614]: E0707 01:14:14.079380 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67d8445464-5nr6m_calico-system(01d5654a-06ca-4bae-ada4-ae75fded948d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67d8445464-5nr6m_calico-system(01d5654a-06ca-4bae-ada4-ae75fded948d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67d8445464-5nr6m" podUID="01d5654a-06ca-4bae-ada4-ae75fded948d" Jul 7 01:14:14.118456 containerd[1462]: time="2025-07-07T01:14:14.117989460Z" level=error msg="Failed to destroy network for sandbox \"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 7 01:14:14.118618 containerd[1462]: time="2025-07-07T01:14:14.118529964Z" level=error msg="Failed to destroy network for sandbox \"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.118785 containerd[1462]: time="2025-07-07T01:14:14.118740239Z" level=error msg="encountered an error cleaning up failed sandbox \"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.119046 containerd[1462]: time="2025-07-07T01:14:14.118890811Z" level=error msg="encountered an error cleaning up failed sandbox \"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.119046 containerd[1462]: time="2025-07-07T01:14:14.118921949Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7958f96868-f9lg9,Uid:4e752a49-252a-4f7c-8db2-273076e42d2e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.119046 containerd[1462]: time="2025-07-07T01:14:14.118961694Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-br5th,Uid:444d0803-585d-498b-a49e-969f9bbea4fc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.119807 kubelet[2614]: E0707 01:14:14.119195 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.119807 kubelet[2614]: E0707 01:14:14.119264 2614 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-br5th" Jul 7 01:14:14.119807 kubelet[2614]: E0707 01:14:14.119289 2614 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-br5th" Jul 7 01:14:14.120007 kubelet[2614]: E0707 01:14:14.119345 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-br5th_kube-system(444d0803-585d-498b-a49e-969f9bbea4fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-br5th_kube-system(444d0803-585d-498b-a49e-969f9bbea4fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-br5th" podUID="444d0803-585d-498b-a49e-969f9bbea4fc" Jul 7 01:14:14.120007 kubelet[2614]: E0707 01:14:14.119644 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.120007 kubelet[2614]: E0707 01:14:14.119676 2614 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7958f96868-f9lg9" Jul 7 01:14:14.120235 kubelet[2614]: E0707 01:14:14.119699 2614 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7958f96868-f9lg9" Jul 7 01:14:14.122001 kubelet[2614]: E0707 01:14:14.121301 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7958f96868-f9lg9_calico-system(4e752a49-252a-4f7c-8db2-273076e42d2e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7958f96868-f9lg9_calico-system(4e752a49-252a-4f7c-8db2-273076e42d2e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7958f96868-f9lg9" podUID="4e752a49-252a-4f7c-8db2-273076e42d2e" Jul 7 01:14:14.163276 containerd[1462]: time="2025-07-07T01:14:14.163151634Z" level=error msg="Failed to destroy network for sandbox \"960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.163804 containerd[1462]: time="2025-07-07T01:14:14.163657533Z" level=error msg="encountered an error cleaning up failed sandbox \"960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.163804 containerd[1462]: time="2025-07-07T01:14:14.163710222Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-79jhw,Uid:7aeb2e7b-c332-4d91-8ab8-ad0544c36686,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.164005 kubelet[2614]: E0707 01:14:14.163949 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.164078 kubelet[2614]: E0707 01:14:14.164013 2614 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-79jhw" Jul 7 01:14:14.164078 kubelet[2614]: E0707 01:14:14.164036 2614 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-79jhw" Jul 7 01:14:14.165079 kubelet[2614]: E0707 01:14:14.164130 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-79jhw_kube-system(7aeb2e7b-c332-4d91-8ab8-ad0544c36686)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-79jhw_kube-system(7aeb2e7b-c332-4d91-8ab8-ad0544c36686)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-79jhw" podUID="7aeb2e7b-c332-4d91-8ab8-ad0544c36686" Jul 7 01:14:14.342163 kubelet[2614]: I0707 01:14:14.341976 2614 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" Jul 7 01:14:14.349419 containerd[1462]: time="2025-07-07T01:14:14.344547563Z" level=info msg="StopPodSandbox for \"c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7\"" Jul 7 01:14:14.349419 containerd[1462]: time="2025-07-07T01:14:14.345010081Z" level=info msg="Ensure that sandbox c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7 in task-service has been cleanup successfully" Jul 7 01:14:14.383912 kubelet[2614]: I0707 01:14:14.379637 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" Jul 7 01:14:14.384169 containerd[1462]: time="2025-07-07T01:14:14.381428726Z" level=info msg="StopPodSandbox for \"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\"" Jul 7 01:14:14.384169 containerd[1462]: time="2025-07-07T01:14:14.383657185Z" level=info msg="Ensure that sandbox 3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4 in task-service has been cleanup successfully" Jul 7 01:14:14.389994 kubelet[2614]: I0707 01:14:14.389188 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" Jul 7 01:14:14.394279 containerd[1462]: time="2025-07-07T01:14:14.394183719Z" level=info msg="StopPodSandbox for \"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\"" Jul 7 01:14:14.403815 containerd[1462]: time="2025-07-07T01:14:14.403747306Z" level=info msg="Ensure that sandbox b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699 in task-service has been cleanup successfully" Jul 7 01:14:14.436534 containerd[1462]: time="2025-07-07T01:14:14.435573377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 01:14:14.447024 kubelet[2614]: I0707 01:14:14.445031 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" Jul 7 01:14:14.451292 containerd[1462]: time="2025-07-07T01:14:14.450453235Z" level=info msg="StopPodSandbox for \"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\"" Jul 7 01:14:14.451557 containerd[1462]: time="2025-07-07T01:14:14.451523763Z" level=info msg="Ensure that sandbox 1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209 in task-service has been cleanup successfully" Jul 7 01:14:14.457929 kubelet[2614]: I0707 01:14:14.457777 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" Jul 7 01:14:14.458730 containerd[1462]: time="2025-07-07T01:14:14.458690495Z" level=info msg="StopPodSandbox for \"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\"" Jul 7 01:14:14.459951 containerd[1462]: time="2025-07-07T01:14:14.459914771Z" level=info msg="Ensure that sandbox 60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2 in task-service has been cleanup successfully" Jul 7 01:14:14.466957 kubelet[2614]: I0707 01:14:14.466759 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" Jul 7 01:14:14.469133 containerd[1462]: time="2025-07-07T01:14:14.468680522Z" level=info msg="StopPodSandbox for \"960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a\"" Jul 7 01:14:14.470842 
containerd[1462]: time="2025-07-07T01:14:14.470794567Z" level=info msg="Ensure that sandbox 960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a in task-service has been cleanup successfully" Jul 7 01:14:14.473738 kubelet[2614]: I0707 01:14:14.473460 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" Jul 7 01:14:14.475221 containerd[1462]: time="2025-07-07T01:14:14.475171156Z" level=info msg="StopPodSandbox for \"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\"" Jul 7 01:14:14.478279 containerd[1462]: time="2025-07-07T01:14:14.478244169Z" level=info msg="Ensure that sandbox 8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318 in task-service has been cleanup successfully" Jul 7 01:14:14.490217 kubelet[2614]: I0707 01:14:14.490176 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" Jul 7 01:14:14.493435 containerd[1462]: time="2025-07-07T01:14:14.492437369Z" level=info msg="StopPodSandbox for \"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\"" Jul 7 01:14:14.493435 containerd[1462]: time="2025-07-07T01:14:14.492678411Z" level=info msg="Ensure that sandbox 199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4 in task-service has been cleanup successfully" Jul 7 01:14:14.548074 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2-shm.mount: Deactivated successfully. Jul 7 01:14:14.570156 containerd[1462]: time="2025-07-07T01:14:14.570079713Z" level=error msg="StopPodSandbox for \"c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7\" failed" error="failed to destroy network for sandbox \"c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.570454 kubelet[2614]: E0707 01:14:14.570343 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" Jul 7 01:14:14.571301 kubelet[2614]: E0707 01:14:14.571118 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7"} Jul 7 01:14:14.571301 kubelet[2614]: E0707 01:14:14.571219 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dbefde5d-9c7b-4c5e-8e53-28982fa26375\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 01:14:14.571301 kubelet[2614]: E0707 01:14:14.571283 2614 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"KillPodSandbox\" for \"dbefde5d-9c7b-4c5e-8e53-28982fa26375\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-797f4f9b9c-szk6r" podUID="dbefde5d-9c7b-4c5e-8e53-28982fa26375" Jul 7 01:14:14.577270 containerd[1462]: time="2025-07-07T01:14:14.576908580Z" level=error msg="StopPodSandbox for \"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\" failed" error="failed to destroy network for sandbox \"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.577407 kubelet[2614]: E0707 01:14:14.577247 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" Jul 7 01:14:14.577407 kubelet[2614]: E0707 01:14:14.577323 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699"} Jul 7 01:14:14.577407 kubelet[2614]: E0707 01:14:14.577362 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2314dc80-e996-40d7-ac0d-8b41b48a019a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 01:14:14.577407 kubelet[2614]: E0707 01:14:14.577395 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2314dc80-e996-40d7-ac0d-8b41b48a019a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-797f4f9b9c-srqgn" podUID="2314dc80-e996-40d7-ac0d-8b41b48a019a" Jul 7 01:14:14.600320 containerd[1462]: time="2025-07-07T01:14:14.597932441Z" level=error msg="StopPodSandbox for \"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\" failed" error="failed to destroy network for sandbox \"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.600452 kubelet[2614]: E0707 01:14:14.598321 2614 
log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" Jul 7 01:14:14.600452 kubelet[2614]: E0707 01:14:14.598386 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4"} Jul 7 01:14:14.600452 kubelet[2614]: E0707 01:14:14.598431 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4e752a49-252a-4f7c-8db2-273076e42d2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 01:14:14.600452 kubelet[2614]: E0707 01:14:14.598474 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4e752a49-252a-4f7c-8db2-273076e42d2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7958f96868-f9lg9" podUID="4e752a49-252a-4f7c-8db2-273076e42d2e" Jul 7 01:14:14.606552 containerd[1462]: time="2025-07-07T01:14:14.606473150Z" level=error msg="StopPodSandbox for \"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\" failed" error="failed to destroy network for sandbox \"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.607384 kubelet[2614]: E0707 01:14:14.606820 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" Jul 7 01:14:14.607384 kubelet[2614]: E0707 01:14:14.606912 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209"} Jul 7 01:14:14.607384 kubelet[2614]: E0707 01:14:14.606954 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"01d5654a-06ca-4bae-ada4-ae75fded948d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 01:14:14.607384 kubelet[2614]: E0707 01:14:14.606985 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"01d5654a-06ca-4bae-ada4-ae75fded948d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67d8445464-5nr6m" podUID="01d5654a-06ca-4bae-ada4-ae75fded948d" Jul 7 01:14:14.633420 containerd[1462]: time="2025-07-07T01:14:14.633328348Z" level=error msg="StopPodSandbox for \"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\" failed" error="failed to destroy network for sandbox \"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.633963 kubelet[2614]: E0707 01:14:14.633854 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" Jul 7 01:14:14.634055 kubelet[2614]: E0707 01:14:14.634025 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318"} Jul 7 01:14:14.634146 kubelet[2614]: E0707 01:14:14.634119 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"444d0803-585d-498b-a49e-969f9bbea4fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 01:14:14.634225 kubelet[2614]: E0707 01:14:14.634194 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"444d0803-585d-498b-a49e-969f9bbea4fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-br5th" podUID="444d0803-585d-498b-a49e-969f9bbea4fc" Jul 7 01:14:14.639083 containerd[1462]: time="2025-07-07T01:14:14.639001660Z" level=error msg="StopPodSandbox for \"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\" failed" error="failed to destroy network for sandbox \"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.639589 kubelet[2614]: E0707 01:14:14.639422 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" Jul 7 01:14:14.639589 kubelet[2614]: E0707 01:14:14.639479 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4"} Jul 7 01:14:14.639589 kubelet[2614]: E0707 01:14:14.639524 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b4af0965-443f-43ce-a1ac-716ddc78ed1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 01:14:14.639589 kubelet[2614]: E0707 01:14:14.639555 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b4af0965-443f-43ce-a1ac-716ddc78ed1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-fn6sw" podUID="b4af0965-443f-43ce-a1ac-716ddc78ed1f" Jul 7 01:14:14.640961 containerd[1462]: time="2025-07-07T01:14:14.640733007Z" level=error msg="StopPodSandbox for \"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\" failed" error="failed to destroy network for sandbox \"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.641366 kubelet[2614]: E0707 01:14:14.641070 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" Jul 7 01:14:14.641428 kubelet[2614]: E0707 01:14:14.641263 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2"} Jul 7 01:14:14.641459 kubelet[2614]: E0707 01:14:14.641422 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"9b5c20f3-010e-455a-af88-ed3ca60a5bc4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 01:14:14.641745 kubelet[2614]: E0707 01:14:14.641705 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9b5c20f3-010e-455a-af88-ed3ca60a5bc4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bz688" podUID="9b5c20f3-010e-455a-af88-ed3ca60a5bc4" Jul 7 01:14:14.647100 containerd[1462]: time="2025-07-07T01:14:14.647050916Z" level=error msg="StopPodSandbox for \"960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a\" failed" error="failed to destroy network for sandbox \"960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:14.647337 kubelet[2614]: E0707 01:14:14.647295 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" Jul 7 01:14:14.647396 kubelet[2614]: E0707 01:14:14.647351 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a"} Jul 7 01:14:14.647426 kubelet[2614]: E0707 01:14:14.647389 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7aeb2e7b-c332-4d91-8ab8-ad0544c36686\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 01:14:14.647501 kubelet[2614]: E0707 01:14:14.647416 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7aeb2e7b-c332-4d91-8ab8-ad0544c36686\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-79jhw" podUID="7aeb2e7b-c332-4d91-8ab8-ad0544c36686" Jul 7 01:14:25.621027 containerd[1462]: 
time="2025-07-07T01:14:25.618703532Z" level=info msg="StopPodSandbox for \"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\"" Jul 7 01:14:25.621027 containerd[1462]: time="2025-07-07T01:14:25.618994498Z" level=info msg="StopPodSandbox for \"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\"" Jul 7 01:14:25.815176 containerd[1462]: time="2025-07-07T01:14:25.815108054Z" level=error msg="StopPodSandbox for \"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\" failed" error="failed to destroy network for sandbox \"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:25.817144 containerd[1462]: time="2025-07-07T01:14:25.817069022Z" level=error msg="StopPodSandbox for \"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\" failed" error="failed to destroy network for sandbox \"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:25.818168 kubelet[2614]: E0707 01:14:25.817761 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" Jul 7 01:14:25.818168 kubelet[2614]: E0707 01:14:25.818013 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209"} Jul 7 01:14:25.818995 kubelet[2614]: E0707 01:14:25.818425 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" Jul 7 01:14:25.818995 kubelet[2614]: E0707 01:14:25.818508 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4"} Jul 7 01:14:25.818995 kubelet[2614]: E0707 01:14:25.818561 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4e752a49-252a-4f7c-8db2-273076e42d2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 01:14:25.818995 kubelet[2614]: E0707 01:14:25.818625 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"4e752a49-252a-4f7c-8db2-273076e42d2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7958f96868-f9lg9" podUID="4e752a49-252a-4f7c-8db2-273076e42d2e" Jul 7 01:14:25.820170 kubelet[2614]: E0707 01:14:25.818134 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"01d5654a-06ca-4bae-ada4-ae75fded948d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 01:14:25.820170 kubelet[2614]: E0707 01:14:25.819228 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"01d5654a-06ca-4bae-ada4-ae75fded948d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67d8445464-5nr6m" podUID="01d5654a-06ca-4bae-ada4-ae75fded948d" Jul 7 01:14:26.619795 containerd[1462]: time="2025-07-07T01:14:26.619316940Z" level=info msg="StopPodSandbox for \"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\"" Jul 7 01:14:26.756732 containerd[1462]: time="2025-07-07T01:14:26.756684978Z" level=error msg="StopPodSandbox for \"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\" failed" error="failed to destroy network for sandbox \"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:26.817167 kubelet[2614]: E0707 01:14:26.814524 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" Jul 7 01:14:26.817167 kubelet[2614]: E0707 01:14:26.814614 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318"} Jul 7 01:14:26.817167 kubelet[2614]: E0707 01:14:26.814672 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"444d0803-585d-498b-a49e-969f9bbea4fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\\\": plugin type=\\\"calico\\\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 01:14:26.817167 kubelet[2614]: E0707 01:14:26.814705 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"444d0803-585d-498b-a49e-969f9bbea4fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-br5th" podUID="444d0803-585d-498b-a49e-969f9bbea4fc" Jul 7 01:14:27.620233 containerd[1462]: time="2025-07-07T01:14:27.619634753Z" level=info msg="StopPodSandbox for \"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\"" Jul 7 01:14:27.625041 containerd[1462]: time="2025-07-07T01:14:27.624787246Z" level=info msg="StopPodSandbox for \"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\"" Jul 7 01:14:27.636291 containerd[1462]: time="2025-07-07T01:14:27.635909877Z" level=info msg="StopPodSandbox for \"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\"" Jul 7 01:14:27.698886 containerd[1462]: time="2025-07-07T01:14:27.698628267Z" level=error msg="StopPodSandbox for \"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\" failed" error="failed to destroy network for sandbox \"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:27.700875 kubelet[2614]: E0707 01:14:27.700725 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" Jul 7 01:14:27.702197 kubelet[2614]: E0707 01:14:27.700925 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2"} Jul 7 01:14:27.702197 kubelet[2614]: E0707 01:14:27.701001 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9b5c20f3-010e-455a-af88-ed3ca60a5bc4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 01:14:27.702197 kubelet[2614]: E0707 01:14:27.701055 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9b5c20f3-010e-455a-af88-ed3ca60a5bc4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bz688" podUID="9b5c20f3-010e-455a-af88-ed3ca60a5bc4" Jul 7 01:14:27.737685 containerd[1462]: time="2025-07-07T01:14:27.737190522Z" level=error msg="StopPodSandbox for \"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\" failed" error="failed to destroy network for sandbox \"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:27.738414 kubelet[2614]: E0707 01:14:27.738212 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" Jul 7 01:14:27.738414 kubelet[2614]: E0707 01:14:27.738268 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699"} Jul 7 01:14:27.738414 kubelet[2614]: E0707 01:14:27.738323 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2314dc80-e996-40d7-ac0d-8b41b48a019a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 01:14:27.738414 kubelet[2614]: E0707 01:14:27.738365 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2314dc80-e996-40d7-ac0d-8b41b48a019a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-797f4f9b9c-srqgn" podUID="2314dc80-e996-40d7-ac0d-8b41b48a019a" Jul 7 01:14:27.755931 containerd[1462]: time="2025-07-07T01:14:27.755674148Z" level=error msg="StopPodSandbox for \"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\" failed" error="failed to destroy network for sandbox \"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:14:27.758898 kubelet[2614]: E0707 01:14:27.758663 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" Jul 7 01:14:27.758898 kubelet[2614]: E0707 01:14:27.758760 2614 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4"} Jul 7 01:14:27.758898 kubelet[2614]: E0707 01:14:27.758801 2614 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b4af0965-443f-43ce-a1ac-716ddc78ed1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 01:14:27.758898 kubelet[2614]: E0707 01:14:27.758838 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b4af0965-443f-43ce-a1ac-716ddc78ed1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-fn6sw" podUID="b4af0965-443f-43ce-a1ac-716ddc78ed1f" Jul 7 01:14:27.858090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2978670434.mount: Deactivated successfully. Jul 7 01:14:27.928167 containerd[1462]: time="2025-07-07T01:14:27.927025472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:27.929831 containerd[1462]: time="2025-07-07T01:14:27.929747627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 01:14:27.931457 containerd[1462]: time="2025-07-07T01:14:27.931429862Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:27.935185 containerd[1462]: time="2025-07-07T01:14:27.935115465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:27.936071 containerd[1462]: time="2025-07-07T01:14:27.936027816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 13.492670108s" Jul 7 01:14:27.936209 containerd[1462]: time="2025-07-07T01:14:27.936187626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 01:14:27.996028 containerd[1462]: time="2025-07-07T01:14:27.995946715Z" level=info msg="CreateContainer within sandbox \"1dbe9797a0d8059c0d703c2382b2a82c9909d0ca1e55b29fb34e06030ecac119\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 01:14:28.039540 containerd[1462]: time="2025-07-07T01:14:28.039411875Z" level=info msg="CreateContainer within sandbox \"1dbe9797a0d8059c0d703c2382b2a82c9909d0ca1e55b29fb34e06030ecac119\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"91b64dbd46bf334023bef43663a42c2c7fd7cd16b42fe9071e0b57d90f24475c\"" Jul 7 01:14:28.041956 containerd[1462]: time="2025-07-07T01:14:28.040968946Z" level=info msg="StartContainer for \"91b64dbd46bf334023bef43663a42c2c7fd7cd16b42fe9071e0b57d90f24475c\"" Jul 7 01:14:28.110047 systemd[1]: Started cri-containerd-91b64dbd46bf334023bef43663a42c2c7fd7cd16b42fe9071e0b57d90f24475c.scope - libcontainer container 91b64dbd46bf334023bef43663a42c2c7fd7cd16b42fe9071e0b57d90f24475c. Jul 7 01:14:28.160943 containerd[1462]: time="2025-07-07T01:14:28.160879150Z" level=info msg="StartContainer for \"91b64dbd46bf334023bef43663a42c2c7fd7cd16b42fe9071e0b57d90f24475c\" returns successfully" Jul 7 01:14:28.332497 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 01:14:28.333043 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jul 7 01:14:28.503125 containerd[1462]: time="2025-07-07T01:14:28.503072452Z" level=info msg="StopPodSandbox for \"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\"" Jul 7 01:14:28.597600 kubelet[2614]: I0707 01:14:28.596983 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g49jb" podStartSLOduration=1.430280806 podStartE2EDuration="33.59665063s" podCreationTimestamp="2025-07-07 01:13:55 +0000 UTC" firstStartedPulling="2025-07-07 01:13:55.771028442 +0000 UTC m=+24.388768871" lastFinishedPulling="2025-07-07 01:14:27.937398256 +0000 UTC m=+56.555138695" observedRunningTime="2025-07-07 01:14:28.595644924 +0000 UTC m=+57.213385363" watchObservedRunningTime="2025-07-07 01:14:28.59665063 +0000 UTC m=+57.214391069" Jul 7 01:14:28.620028 containerd[1462]: time="2025-07-07T01:14:28.618157347Z" level=info msg="StopPodSandbox for \"960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a\"" Jul 7 01:14:28.912012 containerd[1462]: 2025-07-07 01:14:28.755 [INFO][4000] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" Jul 7 01:14:28.912012 containerd[1462]: 2025-07-07 01:14:28.757 [INFO][4000] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" iface="eth0" netns="/var/run/netns/cni-5c7c1ca3-c16c-1101-87a3-9d3874957fae" Jul 7 01:14:28.912012 containerd[1462]: 2025-07-07 01:14:28.758 [INFO][4000] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" iface="eth0" netns="/var/run/netns/cni-5c7c1ca3-c16c-1101-87a3-9d3874957fae" Jul 7 01:14:28.912012 containerd[1462]: 2025-07-07 01:14:28.760 [INFO][4000] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" iface="eth0" netns="/var/run/netns/cni-5c7c1ca3-c16c-1101-87a3-9d3874957fae" Jul 7 01:14:28.912012 containerd[1462]: 2025-07-07 01:14:28.760 [INFO][4000] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" Jul 7 01:14:28.912012 containerd[1462]: 2025-07-07 01:14:28.760 [INFO][4000] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" Jul 7 01:14:28.912012 containerd[1462]: 2025-07-07 01:14:28.881 [INFO][4015] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" HandleID="k8s-pod-network.960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--79jhw-eth0" Jul 7 01:14:28.912012 containerd[1462]: 2025-07-07 01:14:28.881 [INFO][4015] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:14:28.912012 containerd[1462]: 2025-07-07 01:14:28.882 [INFO][4015] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:14:28.912012 containerd[1462]: 2025-07-07 01:14:28.898 [WARNING][4015] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" HandleID="k8s-pod-network.960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--79jhw-eth0" Jul 7 01:14:28.912012 containerd[1462]: 2025-07-07 01:14:28.898 [INFO][4015] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" HandleID="k8s-pod-network.960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--79jhw-eth0" Jul 7 01:14:28.912012 containerd[1462]: 2025-07-07 01:14:28.901 [INFO][4015] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:14:28.912012 containerd[1462]: 2025-07-07 01:14:28.910 [INFO][4000] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" Jul 7 01:14:28.914936 containerd[1462]: time="2025-07-07T01:14:28.913083077Z" level=info msg="TearDown network for sandbox \"960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a\" successfully" Jul 7 01:14:28.914936 containerd[1462]: time="2025-07-07T01:14:28.913122080Z" level=info msg="StopPodSandbox for \"960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a\" returns successfully" Jul 7 01:14:28.916880 containerd[1462]: time="2025-07-07T01:14:28.915626828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-79jhw,Uid:7aeb2e7b-c332-4d91-8ab8-ad0544c36686,Namespace:kube-system,Attempt:1,}" Jul 7 01:14:28.916258 systemd[1]: run-netns-cni\x2d5c7c1ca3\x2dc16c\x2d1101\x2d87a3\x2d9d3874957fae.mount: Deactivated successfully. Jul 7 01:14:28.930526 containerd[1462]: 2025-07-07 01:14:28.759 [INFO][3969] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" Jul 7 01:14:28.930526 containerd[1462]: 2025-07-07 01:14:28.759 [INFO][3969] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" iface="eth0" netns="/var/run/netns/cni-f1806b9a-7171-13c7-fe69-29e2db9c4e13" Jul 7 01:14:28.930526 containerd[1462]: 2025-07-07 01:14:28.759 [INFO][3969] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" iface="eth0" netns="/var/run/netns/cni-f1806b9a-7171-13c7-fe69-29e2db9c4e13" Jul 7 01:14:28.930526 containerd[1462]: 2025-07-07 01:14:28.760 [INFO][3969] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" iface="eth0" netns="/var/run/netns/cni-f1806b9a-7171-13c7-fe69-29e2db9c4e13" Jul 7 01:14:28.930526 containerd[1462]: 2025-07-07 01:14:28.760 [INFO][3969] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" Jul 7 01:14:28.930526 containerd[1462]: 2025-07-07 01:14:28.760 [INFO][3969] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" Jul 7 01:14:28.930526 containerd[1462]: 2025-07-07 01:14:28.896 [INFO][4014] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" HandleID="k8s-pod-network.3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-whisker--7958f96868--f9lg9-eth0" Jul 7 01:14:28.930526 containerd[1462]: 2025-07-07 01:14:28.897 [INFO][4014] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:14:28.930526 containerd[1462]: 2025-07-07 01:14:28.901 [INFO][4014] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:14:28.930526 containerd[1462]: 2025-07-07 01:14:28.919 [WARNING][4014] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" HandleID="k8s-pod-network.3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-whisker--7958f96868--f9lg9-eth0" Jul 7 01:14:28.930526 containerd[1462]: 2025-07-07 01:14:28.919 [INFO][4014] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" HandleID="k8s-pod-network.3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-whisker--7958f96868--f9lg9-eth0" Jul 7 01:14:28.930526 containerd[1462]: 2025-07-07 01:14:28.924 [INFO][4014] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:14:28.930526 containerd[1462]: 2025-07-07 01:14:28.926 [INFO][3969] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" Jul 7 01:14:28.933018 containerd[1462]: time="2025-07-07T01:14:28.932932434Z" level=info msg="TearDown network for sandbox \"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\" successfully" Jul 7 01:14:28.933018 containerd[1462]: time="2025-07-07T01:14:28.932998268Z" level=info msg="StopPodSandbox for \"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\" returns successfully" Jul 7 01:14:28.933740 systemd[1]: run-netns-cni\x2df1806b9a\x2d7171\x2d13c7\x2dfe69\x2d29e2db9c4e13.mount: Deactivated successfully. 
Jul 7 01:14:28.970772 kubelet[2614]: I0707 01:14:28.970175 2614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdjlp\" (UniqueName: \"kubernetes.io/projected/4e752a49-252a-4f7c-8db2-273076e42d2e-kube-api-access-xdjlp\") pod \"4e752a49-252a-4f7c-8db2-273076e42d2e\" (UID: \"4e752a49-252a-4f7c-8db2-273076e42d2e\") " Jul 7 01:14:28.970772 kubelet[2614]: I0707 01:14:28.970253 2614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4e752a49-252a-4f7c-8db2-273076e42d2e-whisker-backend-key-pair\") pod \"4e752a49-252a-4f7c-8db2-273076e42d2e\" (UID: \"4e752a49-252a-4f7c-8db2-273076e42d2e\") " Jul 7 01:14:28.970772 kubelet[2614]: I0707 01:14:28.970280 2614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e752a49-252a-4f7c-8db2-273076e42d2e-whisker-ca-bundle\") pod \"4e752a49-252a-4f7c-8db2-273076e42d2e\" (UID: \"4e752a49-252a-4f7c-8db2-273076e42d2e\") " Jul 7 01:14:28.974980 kubelet[2614]: I0707 01:14:28.974304 2614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e752a49-252a-4f7c-8db2-273076e42d2e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4e752a49-252a-4f7c-8db2-273076e42d2e" (UID: "4e752a49-252a-4f7c-8db2-273076e42d2e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 01:14:28.981242 kubelet[2614]: I0707 01:14:28.981190 2614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e752a49-252a-4f7c-8db2-273076e42d2e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4e752a49-252a-4f7c-8db2-273076e42d2e" (UID: "4e752a49-252a-4f7c-8db2-273076e42d2e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 01:14:28.986316 kubelet[2614]: I0707 01:14:28.986235 2614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e752a49-252a-4f7c-8db2-273076e42d2e-kube-api-access-xdjlp" (OuterVolumeSpecName: "kube-api-access-xdjlp") pod "4e752a49-252a-4f7c-8db2-273076e42d2e" (UID: "4e752a49-252a-4f7c-8db2-273076e42d2e"). InnerVolumeSpecName "kube-api-access-xdjlp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 01:14:29.070876 kubelet[2614]: I0707 01:14:29.070782 2614 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xdjlp\" (UniqueName: \"kubernetes.io/projected/4e752a49-252a-4f7c-8db2-273076e42d2e-kube-api-access-xdjlp\") on node \"ci-4081-3-4-0-2961e92ed0.novalocal\" DevicePath \"\"" Jul 7 01:14:29.070876 kubelet[2614]: I0707 01:14:29.070821 2614 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4e752a49-252a-4f7c-8db2-273076e42d2e-whisker-backend-key-pair\") on node \"ci-4081-3-4-0-2961e92ed0.novalocal\" DevicePath \"\"" Jul 7 01:14:29.070876 kubelet[2614]: I0707 01:14:29.070833 2614 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e752a49-252a-4f7c-8db2-273076e42d2e-whisker-ca-bundle\") on node \"ci-4081-3-4-0-2961e92ed0.novalocal\" DevicePath \"\"" Jul 7 01:14:29.191622 systemd-networkd[1370]: calicd32ff24919: Link UP Jul 7 01:14:29.194826 systemd-networkd[1370]: calicd32ff24919: Gained carrier Jul 7 01:14:29.212503 containerd[1462]: 2025-07-07 01:14:28.997 [INFO][4031] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 01:14:29.212503 containerd[1462]: 2025-07-07 01:14:29.012 [INFO][4031] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--79jhw-eth0 coredns-674b8bbfcf- kube-system 7aeb2e7b-c332-4d91-8ab8-ad0544c36686 926 0 2025-07-07 01:13:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-4-0-2961e92ed0.novalocal coredns-674b8bbfcf-79jhw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicd32ff24919 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480" Namespace="kube-system" Pod="coredns-674b8bbfcf-79jhw" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--79jhw-" Jul 7 01:14:29.212503 containerd[1462]: 2025-07-07 01:14:29.013 [INFO][4031] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480" Namespace="kube-system" Pod="coredns-674b8bbfcf-79jhw" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--79jhw-eth0" Jul 7 01:14:29.212503 containerd[1462]: 2025-07-07 01:14:29.059 [INFO][4044] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480" HandleID="k8s-pod-network.ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--79jhw-eth0" Jul 7 01:14:29.212503 containerd[1462]: 2025-07-07 01:14:29.059 [INFO][4044] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480" HandleID="k8s-pod-network.ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--79jhw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5790), Attrs:map[string]string{"namespace":"kube-system", 
"node":"ci-4081-3-4-0-2961e92ed0.novalocal", "pod":"coredns-674b8bbfcf-79jhw", "timestamp":"2025-07-07 01:14:29.059115023 +0000 UTC"}, Hostname:"ci-4081-3-4-0-2961e92ed0.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 01:14:29.212503 containerd[1462]: 2025-07-07 01:14:29.059 [INFO][4044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:14:29.212503 containerd[1462]: 2025-07-07 01:14:29.059 [INFO][4044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:14:29.212503 containerd[1462]: 2025-07-07 01:14:29.059 [INFO][4044] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-0-2961e92ed0.novalocal' Jul 7 01:14:29.212503 containerd[1462]: 2025-07-07 01:14:29.125 [INFO][4044] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480" host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:29.212503 containerd[1462]: 2025-07-07 01:14:29.136 [INFO][4044] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:29.212503 containerd[1462]: 2025-07-07 01:14:29.147 [INFO][4044] ipam/ipam.go 511: Trying affinity for 192.168.99.128/26 host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:29.212503 containerd[1462]: 2025-07-07 01:14:29.150 [INFO][4044] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.128/26 host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:29.212503 containerd[1462]: 2025-07-07 01:14:29.152 [INFO][4044] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.128/26 host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:29.212503 containerd[1462]: 2025-07-07 01:14:29.152 [INFO][4044] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.128/26 handle="k8s-pod-network.ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480" host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:29.212503 containerd[1462]: 2025-07-07 01:14:29.155 [INFO][4044] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480 Jul 7 01:14:29.212503 containerd[1462]: 2025-07-07 01:14:29.161 [INFO][4044] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.128/26 handle="k8s-pod-network.ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480" host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:29.212503 containerd[1462]: 2025-07-07 01:14:29.170 [INFO][4044] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.129/26] block=192.168.99.128/26 handle="k8s-pod-network.ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480" host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:29.212503 containerd[1462]: 2025-07-07 01:14:29.170 [INFO][4044] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.129/26] handle="k8s-pod-network.ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480" host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:29.212503 containerd[1462]: 2025-07-07 01:14:29.170 [INFO][4044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 01:14:29.212503 containerd[1462]: 2025-07-07 01:14:29.170 [INFO][4044] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.129/26] IPv6=[] ContainerID="ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480" HandleID="k8s-pod-network.ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--79jhw-eth0" Jul 7 01:14:29.214939 containerd[1462]: 2025-07-07 01:14:29.173 [INFO][4031] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480" Namespace="kube-system" Pod="coredns-674b8bbfcf-79jhw" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--79jhw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--79jhw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7aeb2e7b-c332-4d91-8ab8-ad0544c36686", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"", Pod:"coredns-674b8bbfcf-79jhw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd32ff24919", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:14:29.214939 containerd[1462]: 2025-07-07 01:14:29.173 [INFO][4031] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.129/32] ContainerID="ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480" Namespace="kube-system" Pod="coredns-674b8bbfcf-79jhw" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--79jhw-eth0" Jul 7 01:14:29.214939 containerd[1462]: 2025-07-07 01:14:29.173 [INFO][4031] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicd32ff24919 ContainerID="ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480" Namespace="kube-system" Pod="coredns-674b8bbfcf-79jhw" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--79jhw-eth0" Jul 7 01:14:29.214939 containerd[1462]: 2025-07-07 01:14:29.187 [INFO][4031] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480" Namespace="kube-system" Pod="coredns-674b8bbfcf-79jhw" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--79jhw-eth0" Jul 7 01:14:29.214939 containerd[1462]: 2025-07-07 01:14:29.187 [INFO][4031] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480" Namespace="kube-system" Pod="coredns-674b8bbfcf-79jhw" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--79jhw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--79jhw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7aeb2e7b-c332-4d91-8ab8-ad0544c36686", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480", Pod:"coredns-674b8bbfcf-79jhw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd32ff24919", MAC:"46:2d:be:a8:49:ba", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:14:29.214939 containerd[1462]: 2025-07-07 01:14:29.209 [INFO][4031] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480" Namespace="kube-system" Pod="coredns-674b8bbfcf-79jhw" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--79jhw-eth0" Jul 7 01:14:29.236654 containerd[1462]: time="2025-07-07T01:14:29.236521252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:14:29.236654 containerd[1462]: time="2025-07-07T01:14:29.236605460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:14:29.236654 containerd[1462]: time="2025-07-07T01:14:29.236628243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:14:29.238189 containerd[1462]: time="2025-07-07T01:14:29.236713853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:14:29.258124 systemd[1]: Started cri-containerd-ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480.scope - libcontainer container ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480. Jul 7 01:14:29.301634 containerd[1462]: time="2025-07-07T01:14:29.301589079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-79jhw,Uid:7aeb2e7b-c332-4d91-8ab8-ad0544c36686,Namespace:kube-system,Attempt:1,} returns sandbox id \"ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480\"" Jul 7 01:14:29.313103 containerd[1462]: time="2025-07-07T01:14:29.313041107Z" level=info msg="CreateContainer within sandbox \"ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 01:14:29.334799 containerd[1462]: time="2025-07-07T01:14:29.334760693Z" level=info msg="CreateContainer within sandbox \"ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"88a04c3367c33b2e62023fe188c42078763f68b79a30acc382d862da34043aaa\"" Jul 7 01:14:29.335626 containerd[1462]: time="2025-07-07T01:14:29.335589868Z" level=info msg="StartContainer for \"88a04c3367c33b2e62023fe188c42078763f68b79a30acc382d862da34043aaa\"" Jul 7 01:14:29.361013 systemd[1]: Started cri-containerd-88a04c3367c33b2e62023fe188c42078763f68b79a30acc382d862da34043aaa.scope - libcontainer container 88a04c3367c33b2e62023fe188c42078763f68b79a30acc382d862da34043aaa. Jul 7 01:14:29.390681 containerd[1462]: time="2025-07-07T01:14:29.390635357Z" level=info msg="StartContainer for \"88a04c3367c33b2e62023fe188c42078763f68b79a30acc382d862da34043aaa\" returns successfully" Jul 7 01:14:29.595300 systemd[1]: Removed slice kubepods-besteffort-pod4e752a49_252a_4f7c_8db2_273076e42d2e.slice - libcontainer container kubepods-besteffort-pod4e752a49_252a_4f7c_8db2_273076e42d2e.slice. Jul 7 01:14:29.627902 containerd[1462]: time="2025-07-07T01:14:29.625773253Z" level=info msg="StopPodSandbox for \"c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7\"" Jul 7 01:14:29.668313 kubelet[2614]: I0707 01:14:29.666844 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-79jhw" podStartSLOduration=52.666819065 podStartE2EDuration="52.666819065s" podCreationTimestamp="2025-07-07 01:13:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:14:29.666733916 +0000 UTC m=+58.284474355" watchObservedRunningTime="2025-07-07 01:14:29.666819065 +0000 UTC m=+58.284559504" Jul 7 01:14:29.804204 containerd[1462]: 2025-07-07 01:14:29.731 [INFO][4170] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" Jul 7 01:14:29.804204 containerd[1462]: 2025-07-07 01:14:29.731 [INFO][4170] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" iface="eth0" netns="/var/run/netns/cni-3703c03e-e77f-c54a-3899-1c8eae7ec51c" Jul 7 01:14:29.804204 containerd[1462]: 2025-07-07 01:14:29.731 [INFO][4170] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" iface="eth0" netns="/var/run/netns/cni-3703c03e-e77f-c54a-3899-1c8eae7ec51c" Jul 7 01:14:29.804204 containerd[1462]: 2025-07-07 01:14:29.731 [INFO][4170] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" iface="eth0" netns="/var/run/netns/cni-3703c03e-e77f-c54a-3899-1c8eae7ec51c" Jul 7 01:14:29.804204 containerd[1462]: 2025-07-07 01:14:29.732 [INFO][4170] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" Jul 7 01:14:29.804204 containerd[1462]: 2025-07-07 01:14:29.732 [INFO][4170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" Jul 7 01:14:29.804204 containerd[1462]: 2025-07-07 01:14:29.771 [INFO][4184] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" HandleID="k8s-pod-network.c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--szk6r-eth0" Jul 7 01:14:29.804204 containerd[1462]: 2025-07-07 01:14:29.773 [INFO][4184] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:14:29.804204 containerd[1462]: 2025-07-07 01:14:29.773 [INFO][4184] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:14:29.804204 containerd[1462]: 2025-07-07 01:14:29.791 [WARNING][4184] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" HandleID="k8s-pod-network.c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--szk6r-eth0" Jul 7 01:14:29.804204 containerd[1462]: 2025-07-07 01:14:29.793 [INFO][4184] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" HandleID="k8s-pod-network.c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--szk6r-eth0" Jul 7 01:14:29.804204 containerd[1462]: 2025-07-07 01:14:29.796 [INFO][4184] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:14:29.804204 containerd[1462]: 2025-07-07 01:14:29.801 [INFO][4170] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" Jul 7 01:14:29.808018 containerd[1462]: time="2025-07-07T01:14:29.805470540Z" level=info msg="TearDown network for sandbox \"c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7\" successfully" Jul 7 01:14:29.808018 containerd[1462]: time="2025-07-07T01:14:29.805557583Z" level=info msg="StopPodSandbox for \"c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7\" returns successfully" Jul 7 01:14:29.808018 containerd[1462]: time="2025-07-07T01:14:29.807456455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797f4f9b9c-szk6r,Uid:dbefde5d-9c7b-4c5e-8e53-28982fa26375,Namespace:calico-apiserver,Attempt:1,}" Jul 7 01:14:29.809755 systemd[1]: Created slice kubepods-besteffort-pod28973294_e67c_4bc4_ae97_0b668cabdc6d.slice - libcontainer container kubepods-besteffort-pod28973294_e67c_4bc4_ae97_0b668cabdc6d.slice. Jul 7 01:14:29.877206 systemd[1]: run-netns-cni\x2d3703c03e\x2de77f\x2dc54a\x2d3899\x2d1c8eae7ec51c.mount: Deactivated successfully. Jul 7 01:14:29.877361 systemd[1]: var-lib-kubelet-pods-4e752a49\x2d252a\x2d4f7c\x2d8db2\x2d273076e42d2e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxdjlp.mount: Deactivated successfully. Jul 7 01:14:29.877476 systemd[1]: var-lib-kubelet-pods-4e752a49\x2d252a\x2d4f7c\x2d8db2\x2d273076e42d2e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 7 01:14:29.880193 kubelet[2614]: I0707 01:14:29.879141 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/28973294-e67c-4bc4-ae97-0b668cabdc6d-whisker-backend-key-pair\") pod \"whisker-9589d579b-t8m2k\" (UID: \"28973294-e67c-4bc4-ae97-0b668cabdc6d\") " pod="calico-system/whisker-9589d579b-t8m2k" Jul 7 01:14:29.880193 kubelet[2614]: I0707 01:14:29.879195 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28973294-e67c-4bc4-ae97-0b668cabdc6d-whisker-ca-bundle\") pod \"whisker-9589d579b-t8m2k\" (UID: \"28973294-e67c-4bc4-ae97-0b668cabdc6d\") " pod="calico-system/whisker-9589d579b-t8m2k" Jul 7 01:14:29.880193 kubelet[2614]: I0707 01:14:29.879217 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rbxx\" (UniqueName: \"kubernetes.io/projected/28973294-e67c-4bc4-ae97-0b668cabdc6d-kube-api-access-4rbxx\") pod \"whisker-9589d579b-t8m2k\" (UID: \"28973294-e67c-4bc4-ae97-0b668cabdc6d\") " pod="calico-system/whisker-9589d579b-t8m2k" Jul 7 01:14:30.041663 systemd-networkd[1370]: cali9d588af08ff: Link UP Jul 7 01:14:30.043194 systemd-networkd[1370]: cali9d588af08ff: Gained carrier Jul 7 01:14:30.060311 containerd[1462]: 2025-07-07 01:14:29.887 [INFO][4193] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 01:14:30.060311 containerd[1462]: 2025-07-07 01:14:29.902 [INFO][4193] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--szk6r-eth0 calico-apiserver-797f4f9b9c- calico-apiserver dbefde5d-9c7b-4c5e-8e53-28982fa26375 954 0 2025-07-07 01:13:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:797f4f9b9c projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-4-0-2961e92ed0.novalocal calico-apiserver-797f4f9b9c-szk6r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9d588af08ff [] [] }} ContainerID="f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5" Namespace="calico-apiserver" Pod="calico-apiserver-797f4f9b9c-szk6r" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--szk6r-" Jul 7 01:14:30.060311 containerd[1462]: 2025-07-07 01:14:29.903 [INFO][4193] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5" Namespace="calico-apiserver" Pod="calico-apiserver-797f4f9b9c-szk6r" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--szk6r-eth0" Jul 7 01:14:30.060311 containerd[1462]: 2025-07-07 01:14:29.946 [INFO][4206] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5" HandleID="k8s-pod-network.f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--szk6r-eth0" Jul 7 01:14:30.060311 containerd[1462]: 2025-07-07 01:14:29.946 [INFO][4206] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5" HandleID="k8s-pod-network.f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--szk6r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-4-0-2961e92ed0.novalocal", "pod":"calico-apiserver-797f4f9b9c-szk6r", "timestamp":"2025-07-07 01:14:29.946116755 +0000 UTC"}, Hostname:"ci-4081-3-4-0-2961e92ed0.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 01:14:30.060311 containerd[1462]: 2025-07-07 01:14:29.946 [INFO][4206] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:14:30.060311 containerd[1462]: 2025-07-07 01:14:29.946 [INFO][4206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 01:14:30.060311 containerd[1462]: 2025-07-07 01:14:29.946 [INFO][4206] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-0-2961e92ed0.novalocal' Jul 7 01:14:30.060311 containerd[1462]: 2025-07-07 01:14:29.960 [INFO][4206] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5" host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:30.060311 containerd[1462]: 2025-07-07 01:14:29.969 [INFO][4206] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:30.060311 containerd[1462]: 2025-07-07 01:14:29.978 [INFO][4206] ipam/ipam.go 511: Trying affinity for 192.168.99.128/26 host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:30.060311 containerd[1462]: 2025-07-07 01:14:29.989 [INFO][4206] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.128/26 host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:30.060311 containerd[1462]: 2025-07-07 01:14:29.993 [INFO][4206] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.128/26 host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:30.060311 containerd[1462]: 2025-07-07 01:14:29.995 [INFO][4206] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.128/26 handle="k8s-pod-network.f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5" host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:30.060311 containerd[1462]: 2025-07-07 01:14:29.999 [INFO][4206] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5 Jul 7 01:14:30.060311 containerd[1462]: 2025-07-07 01:14:30.011 [INFO][4206] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.128/26 handle="k8s-pod-network.f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5" host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:30.060311 containerd[1462]: 2025-07-07 01:14:30.031 [INFO][4206] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.130/26] block=192.168.99.128/26 handle="k8s-pod-network.f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5" host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:30.060311 containerd[1462]: 2025-07-07 01:14:30.031 [INFO][4206] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.130/26] handle="k8s-pod-network.f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5" host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:30.060311 containerd[1462]: 2025-07-07 01:14:30.031 [INFO][4206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 01:14:30.060311 containerd[1462]: 2025-07-07 01:14:30.031 [INFO][4206] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.130/26] IPv6=[] ContainerID="f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5" HandleID="k8s-pod-network.f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--szk6r-eth0" Jul 7 01:14:30.062215 containerd[1462]: 2025-07-07 01:14:30.036 [INFO][4193] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5" Namespace="calico-apiserver" Pod="calico-apiserver-797f4f9b9c-szk6r" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--szk6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--szk6r-eth0", GenerateName:"calico-apiserver-797f4f9b9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbefde5d-9c7b-4c5e-8e53-28982fa26375", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797f4f9b9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"", Pod:"calico-apiserver-797f4f9b9c-szk6r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d588af08ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:14:30.062215 containerd[1462]: 2025-07-07 01:14:30.036 [INFO][4193] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.130/32] ContainerID="f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5" Namespace="calico-apiserver" Pod="calico-apiserver-797f4f9b9c-szk6r" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--szk6r-eth0" Jul 7 01:14:30.062215 containerd[1462]: 2025-07-07 01:14:30.036 [INFO][4193] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d588af08ff ContainerID="f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5" Namespace="calico-apiserver" Pod="calico-apiserver-797f4f9b9c-szk6r" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--szk6r-eth0" Jul 7 01:14:30.062215 containerd[1462]: 2025-07-07 01:14:30.040 [INFO][4193] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5" Namespace="calico-apiserver" Pod="calico-apiserver-797f4f9b9c-szk6r" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--szk6r-eth0" 
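The &v3.WorkloadEndpoint dumps are verbose, but the ADD flow only fills in a handful of fields between "Populated endpoint" and "Added Mac, interface name, and active container ID". A trimmed view for the apiserver pod, with field names and values copied from the dumps and everything else omitted (the real projectcalico.org/v3 type carries far more); the ContainerID and MAC shown are the ones the "Added Mac" entry that follows writes back to the datastore.

package main

import "fmt"

// Simplified stand-in for the spec portion of v3.WorkloadEndpoint as it
// appears in this log; not the actual projectcalico API type.
type WorkloadEndpointSpec struct {
	Node          string
	ContainerID   string   // empty at "Populated endpoint", set by "Added Mac..."
	Pod           string
	Endpoint      string   // "eth0" inside the pod netns
	IPNetworks    []string // always /32 here; allocation blocks are /26
	InterfaceName string   // host-side veth
	MAC           string   // empty until the veth exists
}

func main() {
	ep := WorkloadEndpointSpec{
		Node:          "ci-4081-3-4-0-2961e92ed0.novalocal",
		ContainerID:   "f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5",
		Pod:           "calico-apiserver-797f4f9b9c-szk6r",
		Endpoint:      "eth0",
		IPNetworks:    []string{"192.168.99.130/32"},
		InterfaceName: "cali9d588af08ff",
		MAC:           "2a:c8:a5:10:fc:a8",
	}
	fmt.Printf("%+v\n", ep)
}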
Jul 7 01:14:30.062215 containerd[1462]: 2025-07-07 01:14:30.041 [INFO][4193] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5" Namespace="calico-apiserver" Pod="calico-apiserver-797f4f9b9c-szk6r" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--szk6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--szk6r-eth0", GenerateName:"calico-apiserver-797f4f9b9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbefde5d-9c7b-4c5e-8e53-28982fa26375", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797f4f9b9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5", Pod:"calico-apiserver-797f4f9b9c-szk6r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d588af08ff", MAC:"2a:c8:a5:10:fc:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:14:30.062215 containerd[1462]: 2025-07-07 01:14:30.055 [INFO][4193] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5" Namespace="calico-apiserver" Pod="calico-apiserver-797f4f9b9c-szk6r" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--szk6r-eth0" Jul 7 01:14:30.092997 containerd[1462]: time="2025-07-07T01:14:30.092499188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:14:30.092997 containerd[1462]: time="2025-07-07T01:14:30.092657405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:14:30.092997 containerd[1462]: time="2025-07-07T01:14:30.092672303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:14:30.092997 containerd[1462]: time="2025-07-07T01:14:30.092926790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:14:30.124153 containerd[1462]: time="2025-07-07T01:14:30.123160374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9589d579b-t8m2k,Uid:28973294-e67c-4bc4-ae97-0b668cabdc6d,Namespace:calico-system,Attempt:0,}" Jul 7 01:14:30.135194 systemd[1]: Started cri-containerd-f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5.scope - libcontainer container f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5. Jul 7 01:14:30.261348 containerd[1462]: time="2025-07-07T01:14:30.260890930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797f4f9b9c-szk6r,Uid:dbefde5d-9c7b-4c5e-8e53-28982fa26375,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5\"" Jul 7 01:14:30.269157 containerd[1462]: time="2025-07-07T01:14:30.268983489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 01:14:30.389061 systemd-networkd[1370]: calibdc5bb176c9: Link UP Jul 7 01:14:30.391825 systemd-networkd[1370]: calibdc5bb176c9: Gained carrier Jul 7 01:14:30.431887 containerd[1462]: 2025-07-07 01:14:30.194 [INFO][4254] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 01:14:30.431887 containerd[1462]: 2025-07-07 01:14:30.215 [INFO][4254] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--0--2961e92ed0.novalocal-k8s-whisker--9589d579b--t8m2k-eth0 whisker-9589d579b- calico-system 28973294-e67c-4bc4-ae97-0b668cabdc6d 966 0 2025-07-07 01:14:29 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:9589d579b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-4-0-2961e92ed0.novalocal whisker-9589d579b-t8m2k eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibdc5bb176c9 [] [] }} ContainerID="8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a" Namespace="calico-system" Pod="whisker-9589d579b-t8m2k" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-whisker--9589d579b--t8m2k-" Jul 7 01:14:30.431887 containerd[1462]: 2025-07-07 01:14:30.215 [INFO][4254] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a" Namespace="calico-system" Pod="whisker-9589d579b-t8m2k" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-whisker--9589d579b--t8m2k-eth0" Jul 7 01:14:30.431887 containerd[1462]: 2025-07-07 01:14:30.302 [INFO][4288] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a" HandleID="k8s-pod-network.8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-whisker--9589d579b--t8m2k-eth0" Jul 7 01:14:30.431887 containerd[1462]: 2025-07-07 01:14:30.303 [INFO][4288] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a" HandleID="k8s-pod-network.8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-whisker--9589d579b--t8m2k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5760), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4081-3-4-0-2961e92ed0.novalocal", "pod":"whisker-9589d579b-t8m2k", "timestamp":"2025-07-07 01:14:30.302973687 +0000 UTC"}, Hostname:"ci-4081-3-4-0-2961e92ed0.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 01:14:30.431887 containerd[1462]: 2025-07-07 01:14:30.303 [INFO][4288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:14:30.431887 containerd[1462]: 2025-07-07 01:14:30.303 [INFO][4288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:14:30.431887 containerd[1462]: 2025-07-07 01:14:30.303 [INFO][4288] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-0-2961e92ed0.novalocal' Jul 7 01:14:30.431887 containerd[1462]: 2025-07-07 01:14:30.320 [INFO][4288] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a" host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:30.431887 containerd[1462]: 2025-07-07 01:14:30.328 [INFO][4288] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:30.431887 containerd[1462]: 2025-07-07 01:14:30.342 [INFO][4288] ipam/ipam.go 511: Trying affinity for 192.168.99.128/26 host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:30.431887 containerd[1462]: 2025-07-07 01:14:30.345 [INFO][4288] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.128/26 host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:30.431887 containerd[1462]: 2025-07-07 01:14:30.349 [INFO][4288] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.128/26 host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:30.431887 containerd[1462]: 2025-07-07 01:14:30.350 [INFO][4288] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.128/26 handle="k8s-pod-network.8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a" host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:30.431887 containerd[1462]: 2025-07-07 01:14:30.352 [INFO][4288] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a Jul 7 01:14:30.431887 containerd[1462]: 2025-07-07 01:14:30.358 [INFO][4288] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.128/26 handle="k8s-pod-network.8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a" host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:30.431887 containerd[1462]: 2025-07-07 01:14:30.370 [INFO][4288] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.131/26] block=192.168.99.128/26 handle="k8s-pod-network.8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a" host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:30.431887 containerd[1462]: 2025-07-07 01:14:30.370 [INFO][4288] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.131/26] handle="k8s-pod-network.8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a" host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:30.431887 containerd[1462]: 2025-07-07 01:14:30.370 [INFO][4288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 01:14:30.431887 containerd[1462]: 2025-07-07 01:14:30.370 [INFO][4288] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.131/26] IPv6=[] ContainerID="8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a" HandleID="k8s-pod-network.8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-whisker--9589d579b--t8m2k-eth0" Jul 7 01:14:30.432698 containerd[1462]: 2025-07-07 01:14:30.373 [INFO][4254] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a" Namespace="calico-system" Pod="whisker-9589d579b-t8m2k" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-whisker--9589d579b--t8m2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-whisker--9589d579b--t8m2k-eth0", GenerateName:"whisker-9589d579b-", Namespace:"calico-system", SelfLink:"", UID:"28973294-e67c-4bc4-ae97-0b668cabdc6d", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 14, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9589d579b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"", Pod:"whisker-9589d579b-t8m2k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.99.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibdc5bb176c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:14:30.432698 containerd[1462]: 2025-07-07 01:14:30.375 [INFO][4254] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.131/32] ContainerID="8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a" Namespace="calico-system" Pod="whisker-9589d579b-t8m2k" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-whisker--9589d579b--t8m2k-eth0" Jul 7 01:14:30.432698 containerd[1462]: 2025-07-07 01:14:30.375 [INFO][4254] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibdc5bb176c9 ContainerID="8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a" Namespace="calico-system" Pod="whisker-9589d579b-t8m2k" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-whisker--9589d579b--t8m2k-eth0" Jul 7 01:14:30.432698 containerd[1462]: 2025-07-07 01:14:30.394 [INFO][4254] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a" Namespace="calico-system" Pod="whisker-9589d579b-t8m2k" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-whisker--9589d579b--t8m2k-eth0" Jul 7 01:14:30.432698 containerd[1462]: 2025-07-07 01:14:30.395 [INFO][4254] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a" Namespace="calico-system" Pod="whisker-9589d579b-t8m2k" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-whisker--9589d579b--t8m2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-whisker--9589d579b--t8m2k-eth0", GenerateName:"whisker-9589d579b-", Namespace:"calico-system", SelfLink:"", UID:"28973294-e67c-4bc4-ae97-0b668cabdc6d", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 14, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9589d579b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a", Pod:"whisker-9589d579b-t8m2k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.99.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibdc5bb176c9", MAC:"ba:a6:35:a7:f9:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:14:30.432698 containerd[1462]: 2025-07-07 01:14:30.422 [INFO][4254] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a" Namespace="calico-system" Pod="whisker-9589d579b-t8m2k" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-whisker--9589d579b--t8m2k-eth0" Jul 7 01:14:30.842457 containerd[1462]: time="2025-07-07T01:14:30.842194127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:14:30.842619 containerd[1462]: time="2025-07-07T01:14:30.842332907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:14:30.844607 containerd[1462]: time="2025-07-07T01:14:30.842427475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:14:30.845356 containerd[1462]: time="2025-07-07T01:14:30.845309410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:14:30.879657 systemd[1]: run-containerd-runc-k8s.io-f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5-runc.erlSb8.mount: Deactivated successfully. Jul 7 01:14:30.905251 systemd[1]: Started cri-containerd-8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a.scope - libcontainer container 8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a. 
Jul 7 01:14:30.943973 kernel: bpftool[4443]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 7 01:14:30.999246 containerd[1462]: time="2025-07-07T01:14:30.999129522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9589d579b-t8m2k,Uid:28973294-e67c-4bc4-ae97-0b668cabdc6d,Namespace:calico-system,Attempt:0,} returns sandbox id \"8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a\"" Jul 7 01:14:31.088616 systemd-networkd[1370]: calicd32ff24919: Gained IPv6LL Jul 7 01:14:31.336788 systemd-networkd[1370]: vxlan.calico: Link UP Jul 7 01:14:31.336805 systemd-networkd[1370]: vxlan.calico: Gained carrier Jul 7 01:14:31.664458 systemd-networkd[1370]: cali9d588af08ff: Gained IPv6LL Jul 7 01:14:31.712894 kubelet[2614]: I0707 01:14:31.712123 2614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e752a49-252a-4f7c-8db2-273076e42d2e" path="/var/lib/kubelet/pods/4e752a49-252a-4f7c-8db2-273076e42d2e/volumes" Jul 7 01:14:31.727380 containerd[1462]: time="2025-07-07T01:14:31.725020155Z" level=info msg="StopPodSandbox for \"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\"" Jul 7 01:14:31.880924 containerd[1462]: 2025-07-07 01:14:31.808 [WARNING][4510] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-whisker--7958f96868--f9lg9-eth0" Jul 7 01:14:31.880924 containerd[1462]: 2025-07-07 01:14:31.809 [INFO][4510] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" Jul 7 01:14:31.880924 containerd[1462]: 2025-07-07 01:14:31.809 [INFO][4510] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" iface="eth0" netns="" Jul 7 01:14:31.880924 containerd[1462]: 2025-07-07 01:14:31.809 [INFO][4510] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" Jul 7 01:14:31.880924 containerd[1462]: 2025-07-07 01:14:31.809 [INFO][4510] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" Jul 7 01:14:31.880924 containerd[1462]: 2025-07-07 01:14:31.859 [INFO][4533] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" HandleID="k8s-pod-network.3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-whisker--7958f96868--f9lg9-eth0" Jul 7 01:14:31.880924 containerd[1462]: 2025-07-07 01:14:31.859 [INFO][4533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:14:31.880924 containerd[1462]: 2025-07-07 01:14:31.859 [INFO][4533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:14:31.880924 containerd[1462]: 2025-07-07 01:14:31.870 [WARNING][4533] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" HandleID="k8s-pod-network.3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-whisker--7958f96868--f9lg9-eth0" Jul 7 01:14:31.880924 containerd[1462]: 2025-07-07 01:14:31.871 [INFO][4533] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" HandleID="k8s-pod-network.3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-whisker--7958f96868--f9lg9-eth0" Jul 7 01:14:31.880924 containerd[1462]: 2025-07-07 01:14:31.874 [INFO][4533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:14:31.880924 containerd[1462]: 2025-07-07 01:14:31.878 [INFO][4510] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" Jul 7 01:14:31.881971 containerd[1462]: time="2025-07-07T01:14:31.881943297Z" level=info msg="TearDown network for sandbox \"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\" successfully" Jul 7 01:14:31.882082 containerd[1462]: time="2025-07-07T01:14:31.882064204Z" level=info msg="StopPodSandbox for \"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\" returns successfully" Jul 7 01:14:31.883392 containerd[1462]: time="2025-07-07T01:14:31.883359313Z" level=info msg="RemovePodSandbox for \"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\"" Jul 7 01:14:31.883787 containerd[1462]: time="2025-07-07T01:14:31.883510547Z" level=info msg="Forcibly stopping sandbox \"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\"" Jul 7 01:14:31.981070 containerd[1462]: 2025-07-07 01:14:31.928 [WARNING][4556] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-whisker--7958f96868--f9lg9-eth0" Jul 7 01:14:31.981070 containerd[1462]: 2025-07-07 01:14:31.928 [INFO][4556] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" Jul 7 01:14:31.981070 containerd[1462]: 2025-07-07 01:14:31.928 [INFO][4556] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" iface="eth0" netns="" Jul 7 01:14:31.981070 containerd[1462]: 2025-07-07 01:14:31.928 [INFO][4556] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" Jul 7 01:14:31.981070 containerd[1462]: 2025-07-07 01:14:31.928 [INFO][4556] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" Jul 7 01:14:31.981070 containerd[1462]: 2025-07-07 01:14:31.963 [INFO][4564] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" HandleID="k8s-pod-network.3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-whisker--7958f96868--f9lg9-eth0" Jul 7 01:14:31.981070 containerd[1462]: 2025-07-07 01:14:31.963 [INFO][4564] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:14:31.981070 containerd[1462]: 2025-07-07 01:14:31.963 [INFO][4564] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:14:31.981070 containerd[1462]: 2025-07-07 01:14:31.974 [WARNING][4564] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" HandleID="k8s-pod-network.3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-whisker--7958f96868--f9lg9-eth0" Jul 7 01:14:31.981070 containerd[1462]: 2025-07-07 01:14:31.974 [INFO][4564] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" HandleID="k8s-pod-network.3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-whisker--7958f96868--f9lg9-eth0" Jul 7 01:14:31.981070 containerd[1462]: 2025-07-07 01:14:31.977 [INFO][4564] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:14:31.981070 containerd[1462]: 2025-07-07 01:14:31.979 [INFO][4556] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4" Jul 7 01:14:31.981070 containerd[1462]: time="2025-07-07T01:14:31.981045235Z" level=info msg="TearDown network for sandbox \"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\" successfully" Jul 7 01:14:31.992090 containerd[1462]: time="2025-07-07T01:14:31.992014708Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 01:14:31.992184 containerd[1462]: time="2025-07-07T01:14:31.992124825Z" level=info msg="RemovePodSandbox \"3c668bae3a320a772201c53c9dadc6fca0c0e8ac566cb2bbabc6ee70ff5735b4\" returns successfully" Jul 7 01:14:31.992954 containerd[1462]: time="2025-07-07T01:14:31.992883497Z" level=info msg="StopPodSandbox for \"c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7\"" Jul 7 01:14:32.106302 containerd[1462]: 2025-07-07 01:14:32.040 [WARNING][4578] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--szk6r-eth0", GenerateName:"calico-apiserver-797f4f9b9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbefde5d-9c7b-4c5e-8e53-28982fa26375", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797f4f9b9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5", Pod:"calico-apiserver-797f4f9b9c-szk6r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d588af08ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:14:32.106302 containerd[1462]: 2025-07-07 01:14:32.041 [INFO][4578] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" Jul 7 01:14:32.106302 containerd[1462]: 2025-07-07 01:14:32.041 [INFO][4578] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" iface="eth0" netns="" Jul 7 01:14:32.106302 containerd[1462]: 2025-07-07 01:14:32.041 [INFO][4578] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" Jul 7 01:14:32.106302 containerd[1462]: 2025-07-07 01:14:32.041 [INFO][4578] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" Jul 7 01:14:32.106302 containerd[1462]: 2025-07-07 01:14:32.086 [INFO][4585] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" HandleID="k8s-pod-network.c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--szk6r-eth0" Jul 7 01:14:32.106302 containerd[1462]: 2025-07-07 01:14:32.086 [INFO][4585] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:14:32.106302 containerd[1462]: 2025-07-07 01:14:32.087 [INFO][4585] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:14:32.106302 containerd[1462]: 2025-07-07 01:14:32.099 [WARNING][4585] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" HandleID="k8s-pod-network.c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--szk6r-eth0" Jul 7 01:14:32.106302 containerd[1462]: 2025-07-07 01:14:32.099 [INFO][4585] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" HandleID="k8s-pod-network.c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--szk6r-eth0" Jul 7 01:14:32.106302 containerd[1462]: 2025-07-07 01:14:32.102 [INFO][4585] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:14:32.106302 containerd[1462]: 2025-07-07 01:14:32.104 [INFO][4578] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" Jul 7 01:14:32.107199 containerd[1462]: time="2025-07-07T01:14:32.106279153Z" level=info msg="TearDown network for sandbox \"c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7\" successfully" Jul 7 01:14:32.107199 containerd[1462]: time="2025-07-07T01:14:32.106639158Z" level=info msg="StopPodSandbox for \"c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7\" returns successfully" Jul 7 01:14:32.108601 containerd[1462]: time="2025-07-07T01:14:32.108028484Z" level=info msg="RemovePodSandbox for \"c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7\"" Jul 7 01:14:32.108601 containerd[1462]: time="2025-07-07T01:14:32.108077756Z" level=info msg="Forcibly stopping sandbox \"c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7\"" Jul 7 01:14:32.214795 containerd[1462]: 2025-07-07 01:14:32.167 [WARNING][4600] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--szk6r-eth0", GenerateName:"calico-apiserver-797f4f9b9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbefde5d-9c7b-4c5e-8e53-28982fa26375", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797f4f9b9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5", Pod:"calico-apiserver-797f4f9b9c-szk6r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d588af08ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:14:32.214795 containerd[1462]: 2025-07-07 01:14:32.167 [INFO][4600] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" Jul 7 01:14:32.214795 containerd[1462]: 2025-07-07 01:14:32.167 [INFO][4600] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" iface="eth0" netns="" Jul 7 01:14:32.214795 containerd[1462]: 2025-07-07 01:14:32.167 [INFO][4600] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" Jul 7 01:14:32.214795 containerd[1462]: 2025-07-07 01:14:32.167 [INFO][4600] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" Jul 7 01:14:32.214795 containerd[1462]: 2025-07-07 01:14:32.198 [INFO][4607] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" HandleID="k8s-pod-network.c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--szk6r-eth0" Jul 7 01:14:32.214795 containerd[1462]: 2025-07-07 01:14:32.198 [INFO][4607] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:14:32.214795 containerd[1462]: 2025-07-07 01:14:32.198 [INFO][4607] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:14:32.214795 containerd[1462]: 2025-07-07 01:14:32.209 [WARNING][4607] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" HandleID="k8s-pod-network.c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--szk6r-eth0" Jul 7 01:14:32.214795 containerd[1462]: 2025-07-07 01:14:32.209 [INFO][4607] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" HandleID="k8s-pod-network.c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--szk6r-eth0" Jul 7 01:14:32.214795 containerd[1462]: 2025-07-07 01:14:32.211 [INFO][4607] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:14:32.214795 containerd[1462]: 2025-07-07 01:14:32.213 [INFO][4600] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7" Jul 7 01:14:32.216425 containerd[1462]: time="2025-07-07T01:14:32.214959044Z" level=info msg="TearDown network for sandbox \"c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7\" successfully" Jul 7 01:14:32.218727 containerd[1462]: time="2025-07-07T01:14:32.218692757Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 01:14:32.218832 containerd[1462]: time="2025-07-07T01:14:32.218759042Z" level=info msg="RemovePodSandbox \"c245eafc6d5a541c9ef214fe496839ecbb79544ce402d62f180ae8b4f1d582d7\" returns successfully" Jul 7 01:14:32.219327 containerd[1462]: time="2025-07-07T01:14:32.219295358Z" level=info msg="StopPodSandbox for \"960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a\"" Jul 7 01:14:32.304013 systemd-networkd[1370]: calibdc5bb176c9: Gained IPv6LL Jul 7 01:14:32.324556 containerd[1462]: 2025-07-07 01:14:32.275 [WARNING][4621] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--79jhw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7aeb2e7b-c332-4d91-8ab8-ad0544c36686", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480", Pod:"coredns-674b8bbfcf-79jhw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd32ff24919", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:14:32.324556 containerd[1462]: 2025-07-07 01:14:32.275 [INFO][4621] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" Jul 7 01:14:32.324556 containerd[1462]: 2025-07-07 01:14:32.275 [INFO][4621] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" iface="eth0" netns="" Jul 7 01:14:32.324556 containerd[1462]: 2025-07-07 01:14:32.275 [INFO][4621] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" Jul 7 01:14:32.324556 containerd[1462]: 2025-07-07 01:14:32.275 [INFO][4621] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" Jul 7 01:14:32.324556 containerd[1462]: 2025-07-07 01:14:32.309 [INFO][4628] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" HandleID="k8s-pod-network.960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--79jhw-eth0" Jul 7 01:14:32.324556 containerd[1462]: 2025-07-07 01:14:32.309 [INFO][4628] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:14:32.324556 containerd[1462]: 2025-07-07 01:14:32.309 [INFO][4628] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 01:14:32.324556 containerd[1462]: 2025-07-07 01:14:32.319 [WARNING][4628] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" HandleID="k8s-pod-network.960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--79jhw-eth0" Jul 7 01:14:32.324556 containerd[1462]: 2025-07-07 01:14:32.319 [INFO][4628] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" HandleID="k8s-pod-network.960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--79jhw-eth0" Jul 7 01:14:32.324556 containerd[1462]: 2025-07-07 01:14:32.321 [INFO][4628] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:14:32.324556 containerd[1462]: 2025-07-07 01:14:32.323 [INFO][4621] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" Jul 7 01:14:32.325837 containerd[1462]: time="2025-07-07T01:14:32.324709462Z" level=info msg="TearDown network for sandbox \"960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a\" successfully" Jul 7 01:14:32.325837 containerd[1462]: time="2025-07-07T01:14:32.324738967Z" level=info msg="StopPodSandbox for \"960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a\" returns successfully" Jul 7 01:14:32.325837 containerd[1462]: time="2025-07-07T01:14:32.325409295Z" level=info msg="RemovePodSandbox for \"960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a\"" Jul 7 01:14:32.325837 containerd[1462]: time="2025-07-07T01:14:32.325444501Z" level=info msg="Forcibly stopping sandbox \"960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a\"" Jul 7 01:14:32.408656 containerd[1462]: 2025-07-07 01:14:32.369 [WARNING][4642] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--79jhw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7aeb2e7b-c332-4d91-8ab8-ad0544c36686", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"ac63cec24e195f196ec821fea34473c3f94dded7b0fa0e6a9daaceb5d11af480", Pod:"coredns-674b8bbfcf-79jhw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd32ff24919", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:14:32.408656 containerd[1462]: 2025-07-07 01:14:32.369 [INFO][4642] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" Jul 7 01:14:32.408656 containerd[1462]: 2025-07-07 01:14:32.369 [INFO][4642] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" iface="eth0" netns="" Jul 7 01:14:32.408656 containerd[1462]: 2025-07-07 01:14:32.369 [INFO][4642] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" Jul 7 01:14:32.408656 containerd[1462]: 2025-07-07 01:14:32.369 [INFO][4642] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" Jul 7 01:14:32.408656 containerd[1462]: 2025-07-07 01:14:32.395 [INFO][4650] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" HandleID="k8s-pod-network.960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--79jhw-eth0" Jul 7 01:14:32.408656 containerd[1462]: 2025-07-07 01:14:32.395 [INFO][4650] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:14:32.408656 containerd[1462]: 2025-07-07 01:14:32.395 [INFO][4650] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 01:14:32.408656 containerd[1462]: 2025-07-07 01:14:32.403 [WARNING][4650] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" HandleID="k8s-pod-network.960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--79jhw-eth0" Jul 7 01:14:32.408656 containerd[1462]: 2025-07-07 01:14:32.403 [INFO][4650] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" HandleID="k8s-pod-network.960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--79jhw-eth0" Jul 7 01:14:32.408656 containerd[1462]: 2025-07-07 01:14:32.405 [INFO][4650] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:14:32.408656 containerd[1462]: 2025-07-07 01:14:32.407 [INFO][4642] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a" Jul 7 01:14:32.409340 containerd[1462]: time="2025-07-07T01:14:32.408706855Z" level=info msg="TearDown network for sandbox \"960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a\" successfully" Jul 7 01:14:32.413600 containerd[1462]: time="2025-07-07T01:14:32.413546855Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 01:14:32.413688 containerd[1462]: time="2025-07-07T01:14:32.413617097Z" level=info msg="RemovePodSandbox \"960a09f4edbe70348a8b3a1e3a21c9165cf21bd8d2a2f6e652960351267eae1a\" returns successfully" Jul 7 01:14:33.136015 systemd-networkd[1370]: vxlan.calico: Gained IPv6LL Jul 7 01:14:35.464754 containerd[1462]: time="2025-07-07T01:14:35.463597591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:35.465269 containerd[1462]: time="2025-07-07T01:14:35.465042717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 7 01:14:35.466388 containerd[1462]: time="2025-07-07T01:14:35.466353672Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:35.470407 containerd[1462]: time="2025-07-07T01:14:35.470360735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:35.471877 containerd[1462]: time="2025-07-07T01:14:35.471821481Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 5.202768912s" Jul 7 01:14:35.471945 containerd[1462]: time="2025-07-07T01:14:35.471877126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference 
\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 01:14:35.480324 containerd[1462]: time="2025-07-07T01:14:35.480283459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 01:14:35.520101 containerd[1462]: time="2025-07-07T01:14:35.520047916Z" level=info msg="CreateContainer within sandbox \"f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 01:14:35.552341 containerd[1462]: time="2025-07-07T01:14:35.552300028Z" level=info msg="CreateContainer within sandbox \"f67b4b2763248ae5d76ffaacc8974e3bb20cf1831abe6001402eae517e8754f5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a4c63eb90c90f400d95f5886d25da9778306898256ab5cef4e4ccaf0de4a0a2e\"" Jul 7 01:14:35.556332 containerd[1462]: time="2025-07-07T01:14:35.556308034Z" level=info msg="StartContainer for \"a4c63eb90c90f400d95f5886d25da9778306898256ab5cef4e4ccaf0de4a0a2e\"" Jul 7 01:14:35.617678 systemd[1]: Started cri-containerd-a4c63eb90c90f400d95f5886d25da9778306898256ab5cef4e4ccaf0de4a0a2e.scope - libcontainer container a4c63eb90c90f400d95f5886d25da9778306898256ab5cef4e4ccaf0de4a0a2e. Jul 7 01:14:35.698706 containerd[1462]: time="2025-07-07T01:14:35.698551728Z" level=info msg="StartContainer for \"a4c63eb90c90f400d95f5886d25da9778306898256ab5cef4e4ccaf0de4a0a2e\" returns successfully" Jul 7 01:14:36.625910 containerd[1462]: time="2025-07-07T01:14:36.625782113Z" level=info msg="StopPodSandbox for \"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\"" Jul 7 01:14:36.718891 kubelet[2614]: I0707 01:14:36.717926 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-797f4f9b9c-szk6r" podStartSLOduration=40.502662532 podStartE2EDuration="45.7146923s" podCreationTimestamp="2025-07-07 01:13:51 +0000 UTC" firstStartedPulling="2025-07-07 01:14:30.267941604 +0000 UTC m=+58.885682034" lastFinishedPulling="2025-07-07 01:14:35.479971373 +0000 UTC m=+64.097711802" observedRunningTime="2025-07-07 01:14:36.713823407 +0000 UTC m=+65.331563846" watchObservedRunningTime="2025-07-07 01:14:36.7146923 +0000 UTC m=+65.332432729" Jul 7 01:14:36.885374 containerd[1462]: 2025-07-07 01:14:36.768 [INFO][4711] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" Jul 7 01:14:36.885374 containerd[1462]: 2025-07-07 01:14:36.768 [INFO][4711] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" iface="eth0" netns="/var/run/netns/cni-fd22a1d4-430e-53f9-526d-9e008c787ca2" Jul 7 01:14:36.885374 containerd[1462]: 2025-07-07 01:14:36.769 [INFO][4711] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" iface="eth0" netns="/var/run/netns/cni-fd22a1d4-430e-53f9-526d-9e008c787ca2" Jul 7 01:14:36.885374 containerd[1462]: 2025-07-07 01:14:36.770 [INFO][4711] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" iface="eth0" netns="/var/run/netns/cni-fd22a1d4-430e-53f9-526d-9e008c787ca2" Jul 7 01:14:36.885374 containerd[1462]: 2025-07-07 01:14:36.770 [INFO][4711] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" Jul 7 01:14:36.885374 containerd[1462]: 2025-07-07 01:14:36.770 [INFO][4711] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" Jul 7 01:14:36.885374 containerd[1462]: 2025-07-07 01:14:36.847 [INFO][4720] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" HandleID="k8s-pod-network.1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--kube--controllers--67d8445464--5nr6m-eth0" Jul 7 01:14:36.885374 containerd[1462]: 2025-07-07 01:14:36.847 [INFO][4720] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:14:36.885374 containerd[1462]: 2025-07-07 01:14:36.847 [INFO][4720] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:14:36.885374 containerd[1462]: 2025-07-07 01:14:36.876 [WARNING][4720] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" HandleID="k8s-pod-network.1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--kube--controllers--67d8445464--5nr6m-eth0" Jul 7 01:14:36.885374 containerd[1462]: 2025-07-07 01:14:36.876 [INFO][4720] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" HandleID="k8s-pod-network.1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--kube--controllers--67d8445464--5nr6m-eth0" Jul 7 01:14:36.885374 containerd[1462]: 2025-07-07 01:14:36.879 [INFO][4720] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:14:36.885374 containerd[1462]: 2025-07-07 01:14:36.881 [INFO][4711] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" Jul 7 01:14:36.885374 containerd[1462]: time="2025-07-07T01:14:36.884430633Z" level=info msg="TearDown network for sandbox \"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\" successfully" Jul 7 01:14:36.885374 containerd[1462]: time="2025-07-07T01:14:36.884459508Z" level=info msg="StopPodSandbox for \"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\" returns successfully" Jul 7 01:14:36.890699 containerd[1462]: time="2025-07-07T01:14:36.889081787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d8445464-5nr6m,Uid:01d5654a-06ca-4bae-ada4-ae75fded948d,Namespace:calico-system,Attempt:1,}" Jul 7 01:14:36.891721 systemd[1]: run-netns-cni\x2dfd22a1d4\x2d430e\x2d53f9\x2d526d\x2d9e008c787ca2.mount: Deactivated successfully. 
Jul 7 01:14:37.120433 systemd-networkd[1370]: calidc97365cd2f: Link UP Jul 7 01:14:37.120709 systemd-networkd[1370]: calidc97365cd2f: Gained carrier Jul 7 01:14:37.149227 containerd[1462]: 2025-07-07 01:14:36.992 [INFO][4736] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--kube--controllers--67d8445464--5nr6m-eth0 calico-kube-controllers-67d8445464- calico-system 01d5654a-06ca-4bae-ada4-ae75fded948d 995 0 2025-07-07 01:13:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67d8445464 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-4-0-2961e92ed0.novalocal calico-kube-controllers-67d8445464-5nr6m eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidc97365cd2f [] [] }} ContainerID="d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e" Namespace="calico-system" Pod="calico-kube-controllers-67d8445464-5nr6m" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--kube--controllers--67d8445464--5nr6m-" Jul 7 01:14:37.149227 containerd[1462]: 2025-07-07 01:14:36.994 [INFO][4736] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e" Namespace="calico-system" Pod="calico-kube-controllers-67d8445464-5nr6m" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--kube--controllers--67d8445464--5nr6m-eth0" Jul 7 01:14:37.149227 containerd[1462]: 2025-07-07 01:14:37.034 [INFO][4747] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e" HandleID="k8s-pod-network.d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--kube--controllers--67d8445464--5nr6m-eth0" Jul 7 01:14:37.149227 containerd[1462]: 2025-07-07 01:14:37.035 [INFO][4747] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e" HandleID="k8s-pod-network.d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--kube--controllers--67d8445464--5nr6m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f2b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-0-2961e92ed0.novalocal", "pod":"calico-kube-controllers-67d8445464-5nr6m", "timestamp":"2025-07-07 01:14:37.034637206 +0000 UTC"}, Hostname:"ci-4081-3-4-0-2961e92ed0.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 01:14:37.149227 containerd[1462]: 2025-07-07 01:14:37.035 [INFO][4747] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:14:37.149227 containerd[1462]: 2025-07-07 01:14:37.035 [INFO][4747] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 01:14:37.149227 containerd[1462]: 2025-07-07 01:14:37.036 [INFO][4747] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-0-2961e92ed0.novalocal' Jul 7 01:14:37.149227 containerd[1462]: 2025-07-07 01:14:37.053 [INFO][4747] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e" host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:37.149227 containerd[1462]: 2025-07-07 01:14:37.064 [INFO][4747] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:37.149227 containerd[1462]: 2025-07-07 01:14:37.073 [INFO][4747] ipam/ipam.go 511: Trying affinity for 192.168.99.128/26 host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:37.149227 containerd[1462]: 2025-07-07 01:14:37.076 [INFO][4747] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.128/26 host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:37.149227 containerd[1462]: 2025-07-07 01:14:37.082 [INFO][4747] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.128/26 host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:37.149227 containerd[1462]: 2025-07-07 01:14:37.083 [INFO][4747] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.128/26 handle="k8s-pod-network.d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e" host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:37.149227 containerd[1462]: 2025-07-07 01:14:37.086 [INFO][4747] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e Jul 7 01:14:37.149227 containerd[1462]: 2025-07-07 01:14:37.098 [INFO][4747] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.128/26 handle="k8s-pod-network.d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e" host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:37.149227 containerd[1462]: 2025-07-07 01:14:37.111 [INFO][4747] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.132/26] block=192.168.99.128/26 handle="k8s-pod-network.d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e" host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:37.149227 containerd[1462]: 2025-07-07 01:14:37.112 [INFO][4747] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.132/26] handle="k8s-pod-network.d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e" host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:37.149227 containerd[1462]: 2025-07-07 01:14:37.112 [INFO][4747] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 01:14:37.149227 containerd[1462]: 2025-07-07 01:14:37.112 [INFO][4747] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.132/26] IPv6=[] ContainerID="d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e" HandleID="k8s-pod-network.d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--kube--controllers--67d8445464--5nr6m-eth0" Jul 7 01:14:37.152021 containerd[1462]: 2025-07-07 01:14:37.115 [INFO][4736] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e" Namespace="calico-system" Pod="calico-kube-controllers-67d8445464-5nr6m" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--kube--controllers--67d8445464--5nr6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--kube--controllers--67d8445464--5nr6m-eth0", GenerateName:"calico-kube-controllers-67d8445464-", Namespace:"calico-system", SelfLink:"", UID:"01d5654a-06ca-4bae-ada4-ae75fded948d", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67d8445464", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"", Pod:"calico-kube-controllers-67d8445464-5nr6m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidc97365cd2f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:14:37.152021 containerd[1462]: 2025-07-07 01:14:37.116 [INFO][4736] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.132/32] ContainerID="d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e" Namespace="calico-system" Pod="calico-kube-controllers-67d8445464-5nr6m" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--kube--controllers--67d8445464--5nr6m-eth0" Jul 7 01:14:37.152021 containerd[1462]: 2025-07-07 01:14:37.116 [INFO][4736] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc97365cd2f ContainerID="d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e" Namespace="calico-system" Pod="calico-kube-controllers-67d8445464-5nr6m" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--kube--controllers--67d8445464--5nr6m-eth0" Jul 7 01:14:37.152021 containerd[1462]: 2025-07-07 01:14:37.121 [INFO][4736] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e" Namespace="calico-system" Pod="calico-kube-controllers-67d8445464-5nr6m" 
WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--kube--controllers--67d8445464--5nr6m-eth0" Jul 7 01:14:37.152021 containerd[1462]: 2025-07-07 01:14:37.125 [INFO][4736] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e" Namespace="calico-system" Pod="calico-kube-controllers-67d8445464-5nr6m" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--kube--controllers--67d8445464--5nr6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--kube--controllers--67d8445464--5nr6m-eth0", GenerateName:"calico-kube-controllers-67d8445464-", Namespace:"calico-system", SelfLink:"", UID:"01d5654a-06ca-4bae-ada4-ae75fded948d", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67d8445464", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e", Pod:"calico-kube-controllers-67d8445464-5nr6m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidc97365cd2f", MAC:"7e:aa:84:5b:69:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:14:37.152021 containerd[1462]: 2025-07-07 01:14:37.139 [INFO][4736] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e" Namespace="calico-system" Pod="calico-kube-controllers-67d8445464-5nr6m" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--kube--controllers--67d8445464--5nr6m-eth0" Jul 7 01:14:37.189874 containerd[1462]: time="2025-07-07T01:14:37.188748263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:14:37.190586 containerd[1462]: time="2025-07-07T01:14:37.190128999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:14:37.190586 containerd[1462]: time="2025-07-07T01:14:37.190165567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:14:37.190586 containerd[1462]: time="2025-07-07T01:14:37.190281476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:14:37.231237 systemd[1]: Started cri-containerd-d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e.scope - libcontainer container d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e. Jul 7 01:14:37.323118 containerd[1462]: time="2025-07-07T01:14:37.322316406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d8445464-5nr6m,Uid:01d5654a-06ca-4bae-ada4-ae75fded948d,Namespace:calico-system,Attempt:1,} returns sandbox id \"d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e\"" Jul 7 01:14:37.807224 containerd[1462]: time="2025-07-07T01:14:37.807178547Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:37.810959 containerd[1462]: time="2025-07-07T01:14:37.810911965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 01:14:37.818880 containerd[1462]: time="2025-07-07T01:14:37.818153738Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:37.825194 containerd[1462]: time="2025-07-07T01:14:37.825126895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:37.827006 containerd[1462]: time="2025-07-07T01:14:37.826956264Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 2.346635865s" Jul 7 01:14:37.827132 containerd[1462]: time="2025-07-07T01:14:37.827008342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 01:14:37.828892 containerd[1462]: time="2025-07-07T01:14:37.828096327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 01:14:37.838495 containerd[1462]: time="2025-07-07T01:14:37.837876270Z" level=info msg="CreateContainer within sandbox \"8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 01:14:37.940553 containerd[1462]: time="2025-07-07T01:14:37.937802014Z" level=info msg="CreateContainer within sandbox \"8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"022c9d0daaa1efe61f608f610b3ee79528d9e87296b2afcd5c95427f97b3e49e\"" Jul 7 01:14:37.940553 containerd[1462]: time="2025-07-07T01:14:37.939712786Z" level=info msg="StartContainer for \"022c9d0daaa1efe61f608f610b3ee79528d9e87296b2afcd5c95427f97b3e49e\"" Jul 7 01:14:37.944632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount875402693.mount: Deactivated successfully. Jul 7 01:14:38.001134 systemd[1]: Started cri-containerd-022c9d0daaa1efe61f608f610b3ee79528d9e87296b2afcd5c95427f97b3e49e.scope - libcontainer container 022c9d0daaa1efe61f608f610b3ee79528d9e87296b2afcd5c95427f97b3e49e. 
Jul 7 01:14:38.141028 containerd[1462]: time="2025-07-07T01:14:38.139762130Z" level=info msg="StartContainer for \"022c9d0daaa1efe61f608f610b3ee79528d9e87296b2afcd5c95427f97b3e49e\" returns successfully"
Jul 7 01:14:38.619080 containerd[1462]: time="2025-07-07T01:14:38.617976018Z" level=info msg="StopPodSandbox for \"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\""
Jul 7 01:14:38.768319 systemd-networkd[1370]: calidc97365cd2f: Gained IPv6LL
Jul 7 01:14:38.838364 containerd[1462]: 2025-07-07 01:14:38.773 [INFO][4860] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4"
Jul 7 01:14:38.838364 containerd[1462]: 2025-07-07 01:14:38.773 [INFO][4860] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" iface="eth0" netns="/var/run/netns/cni-875eb2e2-404f-2ebe-bb35-0380088ccc3e"
Jul 7 01:14:38.838364 containerd[1462]: 2025-07-07 01:14:38.773 [INFO][4860] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" iface="eth0" netns="/var/run/netns/cni-875eb2e2-404f-2ebe-bb35-0380088ccc3e"
Jul 7 01:14:38.838364 containerd[1462]: 2025-07-07 01:14:38.774 [INFO][4860] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" iface="eth0" netns="/var/run/netns/cni-875eb2e2-404f-2ebe-bb35-0380088ccc3e"
Jul 7 01:14:38.838364 containerd[1462]: 2025-07-07 01:14:38.774 [INFO][4860] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4"
Jul 7 01:14:38.838364 containerd[1462]: 2025-07-07 01:14:38.774 [INFO][4860] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4"
Jul 7 01:14:38.838364 containerd[1462]: 2025-07-07 01:14:38.818 [INFO][4867] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" HandleID="k8s-pod-network.199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-goldmane--768f4c5c69--fn6sw-eth0"
Jul 7 01:14:38.838364 containerd[1462]: 2025-07-07 01:14:38.818 [INFO][4867] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 01:14:38.838364 containerd[1462]: 2025-07-07 01:14:38.818 [INFO][4867] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 01:14:38.838364 containerd[1462]: 2025-07-07 01:14:38.829 [WARNING][4867] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" HandleID="k8s-pod-network.199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-goldmane--768f4c5c69--fn6sw-eth0"
Jul 7 01:14:38.838364 containerd[1462]: 2025-07-07 01:14:38.830 [INFO][4867] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" HandleID="k8s-pod-network.199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-goldmane--768f4c5c69--fn6sw-eth0"
Jul 7 01:14:38.838364 containerd[1462]: 2025-07-07 01:14:38.833 [INFO][4867] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 01:14:38.838364 containerd[1462]: 2025-07-07 01:14:38.836 [INFO][4860] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4"
Jul 7 01:14:38.838364 containerd[1462]: time="2025-07-07T01:14:38.838673385Z" level=info msg="TearDown network for sandbox \"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\" successfully"
Jul 7 01:14:38.838364 containerd[1462]: time="2025-07-07T01:14:38.838707559Z" level=info msg="StopPodSandbox for \"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\" returns successfully"
Jul 7 01:14:38.845578 containerd[1462]: time="2025-07-07T01:14:38.842180899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-fn6sw,Uid:b4af0965-443f-43ce-a1ac-716ddc78ed1f,Namespace:calico-system,Attempt:1,}"
Jul 7 01:14:38.851738 systemd[1]: run-netns-cni\x2d875eb2e2\x2d404f\x2d2ebe\x2dbb35\x2d0380088ccc3e.mount: Deactivated successfully.
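The WARNING in this teardown is expected noise: CNI DEL has to be idempotent, so when the IPAM lookup finds no allocation under the handle (the address was already gone, e.g. released by an earlier DEL attempt), the plugin logs a warning, falls back to a release by workload ID, and still reports success. A minimal sketch of that release pattern against a toy in-memory allocator, not Calico's real datastore (the handle and IP below are illustrative, and the two lookup paths are collapsed into one map):

```go
package main

import "log"

type ipam struct {
	byHandle map[string]string // handle ID -> allocated IP
}

// release is safe to call more than once for the same handle: a missing
// allocation is logged and ignored, mirroring ipam_plugin.go 429 above.
func (a *ipam) release(handle string) {
	ip, ok := a.byHandle[handle]
	if !ok {
		log.Printf("[WARNING] asked to release %s but it doesn't exist; ignoring", handle)
		return // DEL still succeeds
	}
	delete(a.byHandle, handle)
	log.Printf("released %s (%s)", handle, ip)
}

func main() {
	a := &ipam{byHandle: map[string]string{"k8s-pod-network.example": "192.168.99.131"}}
	a.release("k8s-pod-network.example") // first DEL releases the address
	a.release("k8s-pod-network.example") // a retried DEL is a no-op, not an error
}
```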
Jul 7 01:14:39.067214 systemd-networkd[1370]: caliddaab354699: Link UP
Jul 7 01:14:39.069049 systemd-networkd[1370]: caliddaab354699: Gained carrier
Jul 7 01:14:39.109358 containerd[1462]: 2025-07-07 01:14:38.939 [INFO][4874] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--0--2961e92ed0.novalocal-k8s-goldmane--768f4c5c69--fn6sw-eth0 goldmane-768f4c5c69- calico-system b4af0965-443f-43ce-a1ac-716ddc78ed1f 1016 0 2025-07-07 01:13:54 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-4-0-2961e92ed0.novalocal goldmane-768f4c5c69-fn6sw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliddaab354699 [] [] }} ContainerID="c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af" Namespace="calico-system" Pod="goldmane-768f4c5c69-fn6sw" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-goldmane--768f4c5c69--fn6sw-"
Jul 7 01:14:39.109358 containerd[1462]: 2025-07-07 01:14:38.939 [INFO][4874] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af" Namespace="calico-system" Pod="goldmane-768f4c5c69-fn6sw" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-goldmane--768f4c5c69--fn6sw-eth0"
Jul 7 01:14:39.109358 containerd[1462]: 2025-07-07 01:14:38.978 [INFO][4885] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af" HandleID="k8s-pod-network.c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-goldmane--768f4c5c69--fn6sw-eth0"
Jul 7 01:14:39.109358 containerd[1462]: 2025-07-07 01:14:38.978 [INFO][4885] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af" HandleID="k8s-pod-network.c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-goldmane--768f4c5c69--fn6sw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f690), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-0-2961e92ed0.novalocal", "pod":"goldmane-768f4c5c69-fn6sw", "timestamp":"2025-07-07 01:14:38.978608409 +0000 UTC"}, Hostname:"ci-4081-3-4-0-2961e92ed0.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 01:14:39.109358 containerd[1462]: 2025-07-07 01:14:38.979 [INFO][4885] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 01:14:39.109358 containerd[1462]: 2025-07-07 01:14:38.979 [INFO][4885] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 01:14:39.109358 containerd[1462]: 2025-07-07 01:14:38.979 [INFO][4885] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-0-2961e92ed0.novalocal'
Jul 7 01:14:39.109358 containerd[1462]: 2025-07-07 01:14:38.991 [INFO][4885] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af" host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:39.109358 containerd[1462]: 2025-07-07 01:14:39.004 [INFO][4885] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:39.109358 containerd[1462]: 2025-07-07 01:14:39.014 [INFO][4885] ipam/ipam.go 511: Trying affinity for 192.168.99.128/26 host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:39.109358 containerd[1462]: 2025-07-07 01:14:39.017 [INFO][4885] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.128/26 host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:39.109358 containerd[1462]: 2025-07-07 01:14:39.023 [INFO][4885] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.128/26 host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:39.109358 containerd[1462]: 2025-07-07 01:14:39.023 [INFO][4885] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.128/26 handle="k8s-pod-network.c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af" host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:39.109358 containerd[1462]: 2025-07-07 01:14:39.029 [INFO][4885] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af
Jul 7 01:14:39.109358 containerd[1462]: 2025-07-07 01:14:39.049 [INFO][4885] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.128/26 handle="k8s-pod-network.c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af" host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:39.109358 containerd[1462]: 2025-07-07 01:14:39.058 [INFO][4885] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.133/26] block=192.168.99.128/26 handle="k8s-pod-network.c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af" host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:39.109358 containerd[1462]: 2025-07-07 01:14:39.058 [INFO][4885] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.133/26] handle="k8s-pod-network.c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af" host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:39.109358 containerd[1462]: 2025-07-07 01:14:39.058 [INFO][4885] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
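The sequence above (look up the host's affinity, load block 192.168.99.128/26, assign one address under a handle, write the block back) is Calico's block-based IPAM: each node claims a /26 and hands out addresses from it locally, taking a host-wide lock only around the assignment. A toy version of just the "assign 1 addresses from block" step, assuming the first five addresses of the block were already taken by earlier pods (consistent with .133 being claimed here); real Calico also does a compare-and-swap write of the block, which this sketch omits:

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree scans a block in address order and returns the first IP that is
// not already allocated.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.99.128/26")
	used := map[netip.Addr]bool{}
	for a, i := block.Addr(), 0; i < 5; a, i = a.Next(), i+1 {
		used[a] = true // .128-.132 assumed claimed by earlier workloads
	}
	ip, _ := nextFree(block, used)
	fmt.Println(ip) // 192.168.99.133
}
```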
Jul 7 01:14:39.109358 containerd[1462]: 2025-07-07 01:14:39.058 [INFO][4885] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.133/26] IPv6=[] ContainerID="c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af" HandleID="k8s-pod-network.c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-goldmane--768f4c5c69--fn6sw-eth0"
Jul 7 01:14:39.110899 containerd[1462]: 2025-07-07 01:14:39.060 [INFO][4874] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af" Namespace="calico-system" Pod="goldmane-768f4c5c69-fn6sw" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-goldmane--768f4c5c69--fn6sw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-goldmane--768f4c5c69--fn6sw-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b4af0965-443f-43ce-a1ac-716ddc78ed1f", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"", Pod:"goldmane-768f4c5c69-fn6sw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliddaab354699", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 01:14:39.110899 containerd[1462]: 2025-07-07 01:14:39.061 [INFO][4874] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.133/32] ContainerID="c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af" Namespace="calico-system" Pod="goldmane-768f4c5c69-fn6sw" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-goldmane--768f4c5c69--fn6sw-eth0"
Jul 7 01:14:39.110899 containerd[1462]: 2025-07-07 01:14:39.061 [INFO][4874] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliddaab354699 ContainerID="c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af" Namespace="calico-system" Pod="goldmane-768f4c5c69-fn6sw" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-goldmane--768f4c5c69--fn6sw-eth0"
Jul 7 01:14:39.110899 containerd[1462]: 2025-07-07 01:14:39.069 [INFO][4874] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af" Namespace="calico-system" Pod="goldmane-768f4c5c69-fn6sw" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-goldmane--768f4c5c69--fn6sw-eth0"
Jul 7 01:14:39.110899 containerd[1462]: 2025-07-07 01:14:39.070 [INFO][4874] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af" Namespace="calico-system" Pod="goldmane-768f4c5c69-fn6sw" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-goldmane--768f4c5c69--fn6sw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-goldmane--768f4c5c69--fn6sw-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b4af0965-443f-43ce-a1ac-716ddc78ed1f", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af", Pod:"goldmane-768f4c5c69-fn6sw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliddaab354699", MAC:"ce:20:04:97:65:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 01:14:39.110899 containerd[1462]: 2025-07-07 01:14:39.103 [INFO][4874] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af" Namespace="calico-system" Pod="goldmane-768f4c5c69-fn6sw" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-goldmane--768f4c5c69--fn6sw-eth0"
Jul 7 01:14:39.149322 containerd[1462]: time="2025-07-07T01:14:39.148718484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 01:14:39.149322 containerd[1462]: time="2025-07-07T01:14:39.148883405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 01:14:39.149322 containerd[1462]: time="2025-07-07T01:14:39.148913441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 01:14:39.152516 containerd[1462]: time="2025-07-07T01:14:39.151211109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 01:14:39.196231 systemd[1]: Started cri-containerd-c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af.scope - libcontainer container c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af.
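The endpoint written above carries MAC ce:20:04:97:65:27 for caliddaab354699, and a few entries further down (01:14:40.882248) systemd-networkd reports that interface gaining an IPv6 link-local address ("Gained IPv6LL"). With the classic EUI-64 derivation, that address is a pure function of the MAC; note this is the textbook scheme only, since networkd can also be configured to use stable-privacy addresses instead:

```go
package main

import (
	"fmt"
	"net"
)

// eui64LinkLocal builds fe80::/64 + EUI-64: flip the universal/local bit of
// the first MAC octet and splice 0xfffe into the middle of the MAC.
func eui64LinkLocal(mac net.HardwareAddr) net.IP {
	ip := make(net.IP, 16)
	ip[0], ip[1] = 0xfe, 0x80
	ip[8] = mac[0] ^ 0x02
	ip[9], ip[10], ip[11] = mac[1], mac[2], 0xff
	ip[12], ip[13], ip[14], ip[15] = 0xfe, mac[3], mac[4], mac[5]
	return ip
}

func main() {
	mac, _ := net.ParseMAC("ce:20:04:97:65:27") // MAC from the endpoint dump above
	fmt.Println(eui64LinkLocal(mac))            // fe80::cc20:4ff:fe97:6527
}
```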
Jul 7 01:14:39.257736 containerd[1462]: time="2025-07-07T01:14:39.257654120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-fn6sw,Uid:b4af0965-443f-43ce-a1ac-716ddc78ed1f,Namespace:calico-system,Attempt:1,} returns sandbox id \"c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af\""
Jul 7 01:14:39.622099 containerd[1462]: time="2025-07-07T01:14:39.620534446Z" level=info msg="StopPodSandbox for \"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\""
Jul 7 01:14:39.819901 containerd[1462]: 2025-07-07 01:14:39.741 [INFO][4952] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699"
Jul 7 01:14:39.819901 containerd[1462]: 2025-07-07 01:14:39.741 [INFO][4952] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" iface="eth0" netns="/var/run/netns/cni-9b974350-9b2e-94e5-4839-d08d65a02b3c"
Jul 7 01:14:39.819901 containerd[1462]: 2025-07-07 01:14:39.741 [INFO][4952] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" iface="eth0" netns="/var/run/netns/cni-9b974350-9b2e-94e5-4839-d08d65a02b3c"
Jul 7 01:14:39.819901 containerd[1462]: 2025-07-07 01:14:39.741 [INFO][4952] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" iface="eth0" netns="/var/run/netns/cni-9b974350-9b2e-94e5-4839-d08d65a02b3c"
Jul 7 01:14:39.819901 containerd[1462]: 2025-07-07 01:14:39.741 [INFO][4952] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699"
Jul 7 01:14:39.819901 containerd[1462]: 2025-07-07 01:14:39.741 [INFO][4952] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699"
Jul 7 01:14:39.819901 containerd[1462]: 2025-07-07 01:14:39.792 [INFO][4960] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" HandleID="k8s-pod-network.b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--srqgn-eth0"
Jul 7 01:14:39.819901 containerd[1462]: 2025-07-07 01:14:39.792 [INFO][4960] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 01:14:39.819901 containerd[1462]: 2025-07-07 01:14:39.792 [INFO][4960] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 01:14:39.819901 containerd[1462]: 2025-07-07 01:14:39.809 [WARNING][4960] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" HandleID="k8s-pod-network.b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--srqgn-eth0"
Jul 7 01:14:39.819901 containerd[1462]: 2025-07-07 01:14:39.809 [INFO][4960] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" HandleID="k8s-pod-network.b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--srqgn-eth0"
Jul 7 01:14:39.819901 containerd[1462]: 2025-07-07 01:14:39.813 [INFO][4960] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 01:14:39.819901 containerd[1462]: 2025-07-07 01:14:39.814 [INFO][4952] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699"
Jul 7 01:14:39.819901 containerd[1462]: time="2025-07-07T01:14:39.816293044Z" level=info msg="TearDown network for sandbox \"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\" successfully"
Jul 7 01:14:39.819901 containerd[1462]: time="2025-07-07T01:14:39.816324383Z" level=info msg="StopPodSandbox for \"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\" returns successfully"
Jul 7 01:14:39.819901 containerd[1462]: time="2025-07-07T01:14:39.817186073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797f4f9b9c-srqgn,Uid:2314dc80-e996-40d7-ac0d-8b41b48a019a,Namespace:calico-apiserver,Attempt:1,}"
Jul 7 01:14:39.821751 systemd[1]: run-netns-cni\x2d9b974350\x2d9b2e\x2d94e5\x2d4839\x2dd08d65a02b3c.mount: Deactivated successfully.
Jul 7 01:14:40.602504 systemd-networkd[1370]: cali5a784b29338: Link UP
Jul 7 01:14:40.609364 systemd-networkd[1370]: cali5a784b29338: Gained carrier
Jul 7 01:14:40.648354 containerd[1462]: 2025-07-07 01:14:40.478 [INFO][4966] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--srqgn-eth0 calico-apiserver-797f4f9b9c- calico-apiserver 2314dc80-e996-40d7-ac0d-8b41b48a019a 1025 0 2025-07-07 01:13:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:797f4f9b9c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-4-0-2961e92ed0.novalocal calico-apiserver-797f4f9b9c-srqgn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5a784b29338 [] [] }} ContainerID="2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e" Namespace="calico-apiserver" Pod="calico-apiserver-797f4f9b9c-srqgn" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--srqgn-"
Jul 7 01:14:40.648354 containerd[1462]: 2025-07-07 01:14:40.478 [INFO][4966] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e" Namespace="calico-apiserver" Pod="calico-apiserver-797f4f9b9c-srqgn" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--srqgn-eth0"
Jul 7 01:14:40.648354 containerd[1462]: 2025-07-07 01:14:40.522 [INFO][4978] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e" HandleID="k8s-pod-network.2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--srqgn-eth0"
Jul 7 01:14:40.648354 containerd[1462]: 2025-07-07 01:14:40.522 [INFO][4978] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e" HandleID="k8s-pod-network.2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--srqgn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5120), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-4-0-2961e92ed0.novalocal", "pod":"calico-apiserver-797f4f9b9c-srqgn", "timestamp":"2025-07-07 01:14:40.52259237 +0000 UTC"}, Hostname:"ci-4081-3-4-0-2961e92ed0.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 01:14:40.648354 containerd[1462]: 2025-07-07 01:14:40.523 [INFO][4978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 01:14:40.648354 containerd[1462]: 2025-07-07 01:14:40.523 [INFO][4978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 01:14:40.648354 containerd[1462]: 2025-07-07 01:14:40.523 [INFO][4978] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-0-2961e92ed0.novalocal'
Jul 7 01:14:40.648354 containerd[1462]: 2025-07-07 01:14:40.541 [INFO][4978] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e" host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:40.648354 containerd[1462]: 2025-07-07 01:14:40.551 [INFO][4978] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:40.648354 containerd[1462]: 2025-07-07 01:14:40.560 [INFO][4978] ipam/ipam.go 511: Trying affinity for 192.168.99.128/26 host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:40.648354 containerd[1462]: 2025-07-07 01:14:40.563 [INFO][4978] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.128/26 host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:40.648354 containerd[1462]: 2025-07-07 01:14:40.566 [INFO][4978] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.128/26 host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:40.648354 containerd[1462]: 2025-07-07 01:14:40.566 [INFO][4978] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.128/26 handle="k8s-pod-network.2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e" host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:40.648354 containerd[1462]: 2025-07-07 01:14:40.569 [INFO][4978] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e
Jul 7 01:14:40.648354 containerd[1462]: 2025-07-07 01:14:40.576 [INFO][4978] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.128/26 handle="k8s-pod-network.2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e" host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:40.648354 containerd[1462]: 2025-07-07 01:14:40.591 [INFO][4978] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.134/26] block=192.168.99.128/26 handle="k8s-pod-network.2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e" host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:40.648354 containerd[1462]: 2025-07-07 01:14:40.591 [INFO][4978] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.134/26] handle="k8s-pod-network.2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e" host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:40.648354 containerd[1462]: 2025-07-07 01:14:40.591 [INFO][4978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
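Every host-side interface in this log is "cali" followed by 11 hex characters (calidc97365cd2f, caliddaab354699, cali5a784b29338, and so on). Calico derives these names deterministically by hashing workload identifiers and truncating to the kernel's 15-byte IFNAMSIZ limit; the exact hash input varies by Calico version, so the sketch below only illustrates the shape of the scheme, not Calico's precise function:

```go
package main

import (
	"crypto/sha1"
	"fmt"
)

// vethName hashes a workload identity string and keeps "cali" plus the
// first 11 hex characters, fitting the 15-byte interface-name limit.
func vethName(workloadID string) string {
	sum := sha1.Sum([]byte(workloadID)) // stable digest of the workload identity
	return ("cali" + fmt.Sprintf("%x", sum[:]))[:15]
}

func main() {
	// Hypothetical input; real Calico's input string may differ.
	fmt.Println(vethName("calico-apiserver/calico-apiserver-797f4f9b9c-srqgn"))
}
```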
Jul 7 01:14:40.648354 containerd[1462]: 2025-07-07 01:14:40.592 [INFO][4978] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.134/26] IPv6=[] ContainerID="2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e" HandleID="k8s-pod-network.2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--srqgn-eth0"
Jul 7 01:14:40.650717 containerd[1462]: 2025-07-07 01:14:40.594 [INFO][4966] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e" Namespace="calico-apiserver" Pod="calico-apiserver-797f4f9b9c-srqgn" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--srqgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--srqgn-eth0", GenerateName:"calico-apiserver-797f4f9b9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"2314dc80-e996-40d7-ac0d-8b41b48a019a", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797f4f9b9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"", Pod:"calico-apiserver-797f4f9b9c-srqgn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a784b29338", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 01:14:40.650717 containerd[1462]: 2025-07-07 01:14:40.594 [INFO][4966] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.134/32] ContainerID="2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e" Namespace="calico-apiserver" Pod="calico-apiserver-797f4f9b9c-srqgn" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--srqgn-eth0"
Jul 7 01:14:40.650717 containerd[1462]: 2025-07-07 01:14:40.595 [INFO][4966] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a784b29338 ContainerID="2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e" Namespace="calico-apiserver" Pod="calico-apiserver-797f4f9b9c-srqgn" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--srqgn-eth0"
Jul 7 01:14:40.650717 containerd[1462]: 2025-07-07 01:14:40.612 [INFO][4966] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e" Namespace="calico-apiserver" Pod="calico-apiserver-797f4f9b9c-srqgn" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--srqgn-eth0"
Jul 7 01:14:40.650717 containerd[1462]: 2025-07-07 01:14:40.617 [INFO][4966] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e" Namespace="calico-apiserver" Pod="calico-apiserver-797f4f9b9c-srqgn" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--srqgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--srqgn-eth0", GenerateName:"calico-apiserver-797f4f9b9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"2314dc80-e996-40d7-ac0d-8b41b48a019a", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797f4f9b9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e", Pod:"calico-apiserver-797f4f9b9c-srqgn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a784b29338", MAC:"7e:56:52:67:f6:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 01:14:40.650717 containerd[1462]: 2025-07-07 01:14:40.644 [INFO][4966] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e" Namespace="calico-apiserver" Pod="calico-apiserver-797f4f9b9c-srqgn" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--srqgn-eth0"
Jul 7 01:14:40.705524 containerd[1462]: time="2025-07-07T01:14:40.704787998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 01:14:40.705524 containerd[1462]: time="2025-07-07T01:14:40.705422641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 01:14:40.705948 containerd[1462]: time="2025-07-07T01:14:40.705782337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 01:14:40.706056 containerd[1462]: time="2025-07-07T01:14:40.705929564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 01:14:40.742085 systemd[1]: Started cri-containerd-2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e.scope - libcontainer container 2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e.
Jul 7 01:14:40.805008 containerd[1462]: time="2025-07-07T01:14:40.804959500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797f4f9b9c-srqgn,Uid:2314dc80-e996-40d7-ac0d-8b41b48a019a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e\""
Jul 7 01:14:40.820153 containerd[1462]: time="2025-07-07T01:14:40.819816121Z" level=info msg="CreateContainer within sandbox \"2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 7 01:14:40.848518 containerd[1462]: time="2025-07-07T01:14:40.848475724Z" level=info msg="CreateContainer within sandbox \"2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b94b8650f1b21fe0f61200f262e4fe4f88600f589f87697bce2a97ca96f4d9a5\""
Jul 7 01:14:40.850806 containerd[1462]: time="2025-07-07T01:14:40.850685578Z" level=info msg="StartContainer for \"b94b8650f1b21fe0f61200f262e4fe4f88600f589f87697bce2a97ca96f4d9a5\""
Jul 7 01:14:40.882248 systemd-networkd[1370]: caliddaab354699: Gained IPv6LL
Jul 7 01:14:40.914492 systemd[1]: Started cri-containerd-b94b8650f1b21fe0f61200f262e4fe4f88600f589f87697bce2a97ca96f4d9a5.scope - libcontainer container b94b8650f1b21fe0f61200f262e4fe4f88600f589f87697bce2a97ca96f4d9a5.
Jul 7 01:14:41.005614 containerd[1462]: time="2025-07-07T01:14:41.005549658Z" level=info msg="StartContainer for \"b94b8650f1b21fe0f61200f262e4fe4f88600f589f87697bce2a97ca96f4d9a5\" returns successfully"
Jul 7 01:14:41.618195 containerd[1462]: time="2025-07-07T01:14:41.617452251Z" level=info msg="StopPodSandbox for \"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\""
Jul 7 01:14:41.915315 containerd[1462]: 2025-07-07 01:14:41.740 [INFO][5082] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318"
Jul 7 01:14:41.915315 containerd[1462]: 2025-07-07 01:14:41.740 [INFO][5082] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" iface="eth0" netns="/var/run/netns/cni-91c6e40b-dd0d-8762-9a4c-9cef76899cfa"
Jul 7 01:14:41.915315 containerd[1462]: 2025-07-07 01:14:41.741 [INFO][5082] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" iface="eth0" netns="/var/run/netns/cni-91c6e40b-dd0d-8762-9a4c-9cef76899cfa"
Jul 7 01:14:41.915315 containerd[1462]: 2025-07-07 01:14:41.741 [INFO][5082] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" iface="eth0" netns="/var/run/netns/cni-91c6e40b-dd0d-8762-9a4c-9cef76899cfa"
Jul 7 01:14:41.915315 containerd[1462]: 2025-07-07 01:14:41.741 [INFO][5082] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318"
Jul 7 01:14:41.915315 containerd[1462]: 2025-07-07 01:14:41.741 [INFO][5082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318"
Jul 7 01:14:41.915315 containerd[1462]: 2025-07-07 01:14:41.889 [INFO][5090] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" HandleID="k8s-pod-network.8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--br5th-eth0"
Jul 7 01:14:41.915315 containerd[1462]: 2025-07-07 01:14:41.890 [INFO][5090] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 01:14:41.915315 containerd[1462]: 2025-07-07 01:14:41.890 [INFO][5090] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 01:14:41.915315 containerd[1462]: 2025-07-07 01:14:41.906 [WARNING][5090] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" HandleID="k8s-pod-network.8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--br5th-eth0"
Jul 7 01:14:41.915315 containerd[1462]: 2025-07-07 01:14:41.906 [INFO][5090] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" HandleID="k8s-pod-network.8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--br5th-eth0"
Jul 7 01:14:41.915315 containerd[1462]: 2025-07-07 01:14:41.908 [INFO][5090] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 01:14:41.915315 containerd[1462]: 2025-07-07 01:14:41.910 [INFO][5082] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318"
Jul 7 01:14:41.920972 containerd[1462]: time="2025-07-07T01:14:41.918049501Z" level=info msg="TearDown network for sandbox \"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\" successfully"
Jul 7 01:14:41.920972 containerd[1462]: time="2025-07-07T01:14:41.918113331Z" level=info msg="StopPodSandbox for \"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\" returns successfully"
Jul 7 01:14:41.920972 containerd[1462]: time="2025-07-07T01:14:41.919314889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-br5th,Uid:444d0803-585d-498b-a49e-969f9bbea4fc,Namespace:kube-system,Attempt:1,}"
Jul 7 01:14:41.920686 systemd[1]: run-netns-cni\x2d91c6e40b\x2ddd0d\x2d8762\x2d9a4c\x2d9cef76899cfa.mount: Deactivated successfully.
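The Calico entries in this log are doubly wrapped: journald's "Jul 7 ..." prefix, then containerd[1462] echoing the CNI plugin's own "date [LEVEL][id] file.go line:" format. A small sketch for pulling the inner fields out of such a line; the regexp and field names are ours, not part of any tool shown here:

```go
package main

import (
	"fmt"
	"regexp"
)

// inner matches the CNI plugin's embedded log format:
// "2025-07-07 01:14:41.740 [INFO][5082] cni-plugin/k8s.go 640: message".
var inner = regexp.MustCompile(
	`(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \[(\w+)\]\[(\d+)\] (\S+) (\d+): (.*)`)

func main() {
	line := `2025-07-07 01:14:41.740 [INFO][5082] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8721b729..."`
	if m := inner.FindStringSubmatch(line); m != nil {
		fmt.Printf("time=%s level=%s id=%s src=%s:%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
```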
Jul 7 01:14:42.288292 systemd-networkd[1370]: cali5a784b29338: Gained IPv6LL
Jul 7 01:14:42.336259 systemd-networkd[1370]: califbb9adba274: Link UP
Jul 7 01:14:42.336765 systemd-networkd[1370]: califbb9adba274: Gained carrier
Jul 7 01:14:42.374029 kubelet[2614]: I0707 01:14:42.373305 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-797f4f9b9c-srqgn" podStartSLOduration=51.373202909 podStartE2EDuration="51.373202909s" podCreationTimestamp="2025-07-07 01:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:14:41.767343319 +0000 UTC m=+70.385083749" watchObservedRunningTime="2025-07-07 01:14:42.373202909 +0000 UTC m=+70.990943358"
Jul 7 01:14:42.387709 containerd[1462]: 2025-07-07 01:14:42.065 [INFO][5107] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--br5th-eth0 coredns-674b8bbfcf- kube-system 444d0803-585d-498b-a49e-969f9bbea4fc 1036 0 2025-07-07 01:13:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-4-0-2961e92ed0.novalocal coredns-674b8bbfcf-br5th eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califbb9adba274 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9" Namespace="kube-system" Pod="coredns-674b8bbfcf-br5th" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--br5th-"
Jul 7 01:14:42.387709 containerd[1462]: 2025-07-07 01:14:42.065 [INFO][5107] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9" Namespace="kube-system" Pod="coredns-674b8bbfcf-br5th" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--br5th-eth0"
Jul 7 01:14:42.387709 containerd[1462]: 2025-07-07 01:14:42.206 [INFO][5121] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9" HandleID="k8s-pod-network.37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--br5th-eth0"
Jul 7 01:14:42.387709 containerd[1462]: 2025-07-07 01:14:42.206 [INFO][5121] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9" HandleID="k8s-pod-network.37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--br5th-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003150e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-4-0-2961e92ed0.novalocal", "pod":"coredns-674b8bbfcf-br5th", "timestamp":"2025-07-07 01:14:42.204140859 +0000 UTC"}, Hostname:"ci-4081-3-4-0-2961e92ed0.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 01:14:42.387709 containerd[1462]: 2025-07-07 01:14:42.206 [INFO][5121] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 01:14:42.387709 containerd[1462]: 2025-07-07 01:14:42.206 [INFO][5121] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 01:14:42.387709 containerd[1462]: 2025-07-07 01:14:42.206 [INFO][5121] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-0-2961e92ed0.novalocal'
Jul 7 01:14:42.387709 containerd[1462]: 2025-07-07 01:14:42.243 [INFO][5121] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9" host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:42.387709 containerd[1462]: 2025-07-07 01:14:42.258 [INFO][5121] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:42.387709 containerd[1462]: 2025-07-07 01:14:42.271 [INFO][5121] ipam/ipam.go 511: Trying affinity for 192.168.99.128/26 host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:42.387709 containerd[1462]: 2025-07-07 01:14:42.277 [INFO][5121] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.128/26 host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:42.387709 containerd[1462]: 2025-07-07 01:14:42.291 [INFO][5121] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.128/26 host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:42.387709 containerd[1462]: 2025-07-07 01:14:42.291 [INFO][5121] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.128/26 handle="k8s-pod-network.37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9" host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:42.387709 containerd[1462]: 2025-07-07 01:14:42.298 [INFO][5121] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9
Jul 7 01:14:42.387709 containerd[1462]: 2025-07-07 01:14:42.310 [INFO][5121] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.128/26 handle="k8s-pod-network.37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9" host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:42.387709 containerd[1462]: 2025-07-07 01:14:42.324 [INFO][5121] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.135/26] block=192.168.99.128/26 handle="k8s-pod-network.37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9" host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:42.387709 containerd[1462]: 2025-07-07 01:14:42.324 [INFO][5121] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.135/26] handle="k8s-pod-network.37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9" host="ci-4081-3-4-0-2961e92ed0.novalocal"
Jul 7 01:14:42.387709 containerd[1462]: 2025-07-07 01:14:42.324 [INFO][5121] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
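The kubelet pod_startup_latency_tracker entry above reports podStartSLOduration=51.373202909 for the apiserver pod. With both pull timestamps zeroed ("0001-01-01 ...", i.e. no image pull contributed), the figure is simply the observed running time minus podCreationTimestamp; re-deriving it from the logged watchObservedRunningTime, which matches the printed value exactly:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Go accepts a fractional seconds field when parsing even though the
	// layout below does not include one.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, _ := time.Parse(layout, "2025-07-07 01:13:51 +0000 UTC")
	running, _ := time.Parse(layout, "2025-07-07 01:14:42.373202909 +0000 UTC")
	fmt.Println(running.Sub(created)) // 51.373202909s
}
```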
Jul 7 01:14:42.387709 containerd[1462]: 2025-07-07 01:14:42.324 [INFO][5121] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.135/26] IPv6=[] ContainerID="37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9" HandleID="k8s-pod-network.37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--br5th-eth0" Jul 7 01:14:42.388477 containerd[1462]: 2025-07-07 01:14:42.328 [INFO][5107] cni-plugin/k8s.go 418: Populated endpoint ContainerID="37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9" Namespace="kube-system" Pod="coredns-674b8bbfcf-br5th" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--br5th-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--br5th-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"444d0803-585d-498b-a49e-969f9bbea4fc", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"", Pod:"coredns-674b8bbfcf-br5th", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califbb9adba274", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:14:42.388477 containerd[1462]: 2025-07-07 01:14:42.329 [INFO][5107] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.135/32] ContainerID="37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9" Namespace="kube-system" Pod="coredns-674b8bbfcf-br5th" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--br5th-eth0" Jul 7 01:14:42.388477 containerd[1462]: 2025-07-07 01:14:42.330 [INFO][5107] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califbb9adba274 ContainerID="37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9" Namespace="kube-system" Pod="coredns-674b8bbfcf-br5th" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--br5th-eth0" Jul 7 01:14:42.388477 containerd[1462]: 2025-07-07 01:14:42.337 [INFO][5107] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9" Namespace="kube-system" Pod="coredns-674b8bbfcf-br5th" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--br5th-eth0" Jul 7 01:14:42.388477 containerd[1462]: 2025-07-07 01:14:42.340 [INFO][5107] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9" Namespace="kube-system" Pod="coredns-674b8bbfcf-br5th" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--br5th-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--br5th-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"444d0803-585d-498b-a49e-969f9bbea4fc", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9", Pod:"coredns-674b8bbfcf-br5th", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califbb9adba274", MAC:"5a:f2:2e:37:f7:0b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:14:42.388477 containerd[1462]: 2025-07-07 01:14:42.379 [INFO][5107] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9" Namespace="kube-system" Pod="coredns-674b8bbfcf-br5th" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--br5th-eth0" Jul 7 01:14:42.450722 containerd[1462]: time="2025-07-07T01:14:42.449325036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:14:42.450722 containerd[1462]: time="2025-07-07T01:14:42.449398153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:14:42.450722 containerd[1462]: time="2025-07-07T01:14:42.449419393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:14:42.450722 containerd[1462]: time="2025-07-07T01:14:42.450526844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:14:42.531532 systemd[1]: Started cri-containerd-37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9.scope - libcontainer container 37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9. Jul 7 01:14:42.617105 containerd[1462]: time="2025-07-07T01:14:42.616185699Z" level=info msg="StopPodSandbox for \"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\"" Jul 7 01:14:42.658496 containerd[1462]: time="2025-07-07T01:14:42.658445042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-br5th,Uid:444d0803-585d-498b-a49e-969f9bbea4fc,Namespace:kube-system,Attempt:1,} returns sandbox id \"37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9\"" Jul 7 01:14:42.678072 containerd[1462]: time="2025-07-07T01:14:42.678029795Z" level=info msg="CreateContainer within sandbox \"37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 01:14:42.724133 containerd[1462]: time="2025-07-07T01:14:42.722992719Z" level=info msg="CreateContainer within sandbox \"37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dbb58819b40b274367e012adbb3a567eb971ff7c9c376aeec79f562cb9f9eb85\"" Jul 7 01:14:42.731615 containerd[1462]: time="2025-07-07T01:14:42.731506769Z" level=info msg="StartContainer for \"dbb58819b40b274367e012adbb3a567eb971ff7c9c376aeec79f562cb9f9eb85\"" Jul 7 01:14:42.756629 kubelet[2614]: I0707 01:14:42.756591 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 01:14:42.905563 systemd[1]: Started cri-containerd-dbb58819b40b274367e012adbb3a567eb971ff7c9c376aeec79f562cb9f9eb85.scope - libcontainer container dbb58819b40b274367e012adbb3a567eb971ff7c9c376aeec79f562cb9f9eb85. Jul 7 01:14:42.974245 containerd[1462]: 2025-07-07 01:14:42.849 [INFO][5190] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" Jul 7 01:14:42.974245 containerd[1462]: 2025-07-07 01:14:42.849 [INFO][5190] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" iface="eth0" netns="/var/run/netns/cni-8c134a25-ce6a-d66f-a646-af93745ec470" Jul 7 01:14:42.974245 containerd[1462]: 2025-07-07 01:14:42.850 [INFO][5190] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" iface="eth0" netns="/var/run/netns/cni-8c134a25-ce6a-d66f-a646-af93745ec470" Jul 7 01:14:42.974245 containerd[1462]: 2025-07-07 01:14:42.851 [INFO][5190] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" iface="eth0" netns="/var/run/netns/cni-8c134a25-ce6a-d66f-a646-af93745ec470" Jul 7 01:14:42.974245 containerd[1462]: 2025-07-07 01:14:42.851 [INFO][5190] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" Jul 7 01:14:42.974245 containerd[1462]: 2025-07-07 01:14:42.851 [INFO][5190] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" Jul 7 01:14:42.974245 containerd[1462]: 2025-07-07 01:14:42.933 [INFO][5215] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" HandleID="k8s-pod-network.60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-csi--node--driver--bz688-eth0" Jul 7 01:14:42.974245 containerd[1462]: 2025-07-07 01:14:42.935 [INFO][5215] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:14:42.974245 containerd[1462]: 2025-07-07 01:14:42.935 [INFO][5215] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:14:42.974245 containerd[1462]: 2025-07-07 01:14:42.961 [WARNING][5215] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" HandleID="k8s-pod-network.60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-csi--node--driver--bz688-eth0" Jul 7 01:14:42.974245 containerd[1462]: 2025-07-07 01:14:42.961 [INFO][5215] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" HandleID="k8s-pod-network.60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-csi--node--driver--bz688-eth0" Jul 7 01:14:42.974245 containerd[1462]: 2025-07-07 01:14:42.966 [INFO][5215] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:14:42.974245 containerd[1462]: 2025-07-07 01:14:42.970 [INFO][5190] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" Jul 7 01:14:42.975243 containerd[1462]: time="2025-07-07T01:14:42.975196178Z" level=info msg="TearDown network for sandbox \"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\" successfully" Jul 7 01:14:42.975319 containerd[1462]: time="2025-07-07T01:14:42.975244198Z" level=info msg="StopPodSandbox for \"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\" returns successfully" Jul 7 01:14:42.976641 containerd[1462]: time="2025-07-07T01:14:42.976609764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bz688,Uid:9b5c20f3-010e-455a-af88-ed3ca60a5bc4,Namespace:calico-system,Attempt:1,}" Jul 7 01:14:43.047413 containerd[1462]: time="2025-07-07T01:14:43.047185033Z" level=info msg="StartContainer for \"dbb58819b40b274367e012adbb3a567eb971ff7c9c376aeec79f562cb9f9eb85\" returns successfully" Jul 7 01:14:43.363503 systemd-networkd[1370]: cali1a6a86b45ab: Link UP Jul 7 01:14:43.367225 systemd-networkd[1370]: cali1a6a86b45ab: Gained carrier Jul 7 01:14:43.411877 containerd[1462]: 2025-07-07 01:14:43.134 [INFO][5245] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--0--2961e92ed0.novalocal-k8s-csi--node--driver--bz688-eth0 csi-node-driver- calico-system 9b5c20f3-010e-455a-af88-ed3ca60a5bc4 1051 0 2025-07-07 01:13:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-4-0-2961e92ed0.novalocal csi-node-driver-bz688 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1a6a86b45ab [] [] }} ContainerID="0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30" Namespace="calico-system" Pod="csi-node-driver-bz688" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-csi--node--driver--bz688-" Jul 7 01:14:43.411877 containerd[1462]: 2025-07-07 01:14:43.135 [INFO][5245] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30" Namespace="calico-system" Pod="csi-node-driver-bz688" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-csi--node--driver--bz688-eth0" Jul 7 01:14:43.411877 containerd[1462]: 2025-07-07 01:14:43.229 [INFO][5258] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30" HandleID="k8s-pod-network.0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-csi--node--driver--bz688-eth0" Jul 7 01:14:43.411877 containerd[1462]: 2025-07-07 01:14:43.230 [INFO][5258] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30" HandleID="k8s-pod-network.0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-csi--node--driver--bz688-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003320b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-0-2961e92ed0.novalocal", "pod":"csi-node-driver-bz688", "timestamp":"2025-07-07 01:14:43.228064653 +0000 UTC"}, 
Hostname:"ci-4081-3-4-0-2961e92ed0.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 01:14:43.411877 containerd[1462]: 2025-07-07 01:14:43.231 [INFO][5258] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:14:43.411877 containerd[1462]: 2025-07-07 01:14:43.231 [INFO][5258] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:14:43.411877 containerd[1462]: 2025-07-07 01:14:43.232 [INFO][5258] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-0-2961e92ed0.novalocal' Jul 7 01:14:43.411877 containerd[1462]: 2025-07-07 01:14:43.272 [INFO][5258] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30" host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:43.411877 containerd[1462]: 2025-07-07 01:14:43.288 [INFO][5258] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:43.411877 containerd[1462]: 2025-07-07 01:14:43.301 [INFO][5258] ipam/ipam.go 511: Trying affinity for 192.168.99.128/26 host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:43.411877 containerd[1462]: 2025-07-07 01:14:43.309 [INFO][5258] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.128/26 host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:43.411877 containerd[1462]: 2025-07-07 01:14:43.319 [INFO][5258] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.128/26 host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:43.411877 containerd[1462]: 2025-07-07 01:14:43.319 [INFO][5258] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.128/26 handle="k8s-pod-network.0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30" host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:43.411877 containerd[1462]: 2025-07-07 01:14:43.322 [INFO][5258] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30 Jul 7 01:14:43.411877 containerd[1462]: 2025-07-07 01:14:43.335 [INFO][5258] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.128/26 handle="k8s-pod-network.0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30" host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:43.411877 containerd[1462]: 2025-07-07 01:14:43.352 [INFO][5258] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.136/26] block=192.168.99.128/26 handle="k8s-pod-network.0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30" host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:43.411877 containerd[1462]: 2025-07-07 01:14:43.353 [INFO][5258] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.136/26] handle="k8s-pod-network.0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30" host="ci-4081-3-4-0-2961e92ed0.novalocal" Jul 7 01:14:43.411877 containerd[1462]: 2025-07-07 01:14:43.353 [INFO][5258] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 01:14:43.411877 containerd[1462]: 2025-07-07 01:14:43.353 [INFO][5258] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.136/26] IPv6=[] ContainerID="0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30" HandleID="k8s-pod-network.0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-csi--node--driver--bz688-eth0" Jul 7 01:14:43.412780 containerd[1462]: 2025-07-07 01:14:43.357 [INFO][5245] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30" Namespace="calico-system" Pod="csi-node-driver-bz688" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-csi--node--driver--bz688-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-csi--node--driver--bz688-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9b5c20f3-010e-455a-af88-ed3ca60a5bc4", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"", Pod:"csi-node-driver-bz688", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a6a86b45ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:14:43.412780 containerd[1462]: 2025-07-07 01:14:43.358 [INFO][5245] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.136/32] ContainerID="0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30" Namespace="calico-system" Pod="csi-node-driver-bz688" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-csi--node--driver--bz688-eth0" Jul 7 01:14:43.412780 containerd[1462]: 2025-07-07 01:14:43.358 [INFO][5245] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a6a86b45ab ContainerID="0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30" Namespace="calico-system" Pod="csi-node-driver-bz688" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-csi--node--driver--bz688-eth0" Jul 7 01:14:43.412780 containerd[1462]: 2025-07-07 01:14:43.368 [INFO][5245] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30" Namespace="calico-system" Pod="csi-node-driver-bz688" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-csi--node--driver--bz688-eth0" Jul 7 01:14:43.412780 containerd[1462]: 2025-07-07 01:14:43.369 [INFO][5245] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30" Namespace="calico-system" Pod="csi-node-driver-bz688" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-csi--node--driver--bz688-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-csi--node--driver--bz688-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9b5c20f3-010e-455a-af88-ed3ca60a5bc4", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30", Pod:"csi-node-driver-bz688", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a6a86b45ab", MAC:"d6:77:e2:80:84:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:14:43.412780 containerd[1462]: 2025-07-07 01:14:43.403 [INFO][5245] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30" Namespace="calico-system" Pod="csi-node-driver-bz688" WorkloadEndpoint="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-csi--node--driver--bz688-eth0" Jul 7 01:14:43.469205 systemd[1]: run-containerd-runc-k8s.io-dbb58819b40b274367e012adbb3a567eb971ff7c9c376aeec79f562cb9f9eb85-runc.nUMjSk.mount: Deactivated successfully. Jul 7 01:14:43.469356 systemd[1]: run-netns-cni\x2d8c134a25\x2dce6a\x2dd66f\x2da646\x2daf93745ec470.mount: Deactivated successfully. Jul 7 01:14:43.522764 containerd[1462]: time="2025-07-07T01:14:43.520199140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:14:43.522764 containerd[1462]: time="2025-07-07T01:14:43.520277116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:14:43.522764 containerd[1462]: time="2025-07-07T01:14:43.520311982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:14:43.522764 containerd[1462]: time="2025-07-07T01:14:43.520419724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:14:43.611420 systemd[1]: Started cri-containerd-0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30.scope - libcontainer container 0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30. Jul 7 01:14:43.749635 containerd[1462]: time="2025-07-07T01:14:43.749516899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bz688,Uid:9b5c20f3-010e-455a-af88-ed3ca60a5bc4,Namespace:calico-system,Attempt:1,} returns sandbox id \"0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30\"" Jul 7 01:14:43.796258 kubelet[2614]: I0707 01:14:43.795997 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-br5th" podStartSLOduration=66.795978685 podStartE2EDuration="1m6.795978685s" podCreationTimestamp="2025-07-07 01:13:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:14:43.794518461 +0000 UTC m=+72.412258920" watchObservedRunningTime="2025-07-07 01:14:43.795978685 +0000 UTC m=+72.413719124" Jul 7 01:14:43.889018 systemd-networkd[1370]: califbb9adba274: Gained IPv6LL Jul 7 01:14:44.430406 containerd[1462]: time="2025-07-07T01:14:44.430328854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:44.432251 containerd[1462]: time="2025-07-07T01:14:44.432135269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 7 01:14:44.433762 containerd[1462]: time="2025-07-07T01:14:44.433717592Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:44.439648 containerd[1462]: time="2025-07-07T01:14:44.439585187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:44.441253 containerd[1462]: time="2025-07-07T01:14:44.440552925Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 6.612040065s" Jul 7 01:14:44.441253 containerd[1462]: time="2025-07-07T01:14:44.440606866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 7 01:14:44.443020 containerd[1462]: time="2025-07-07T01:14:44.442990546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 01:14:44.466841 containerd[1462]: time="2025-07-07T01:14:44.466783502Z" level=info msg="CreateContainer within sandbox \"d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 01:14:44.506359 containerd[1462]: time="2025-07-07T01:14:44.506302939Z" level=info msg="CreateContainer within sandbox 
\"d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c9d80078d83d589f6f4b724d7434916f7f1beeb976d44dede43078dc26b851df\"" Jul 7 01:14:44.507436 containerd[1462]: time="2025-07-07T01:14:44.507373631Z" level=info msg="StartContainer for \"c9d80078d83d589f6f4b724d7434916f7f1beeb976d44dede43078dc26b851df\"" Jul 7 01:14:44.557098 systemd[1]: Started cri-containerd-c9d80078d83d589f6f4b724d7434916f7f1beeb976d44dede43078dc26b851df.scope - libcontainer container c9d80078d83d589f6f4b724d7434916f7f1beeb976d44dede43078dc26b851df. Jul 7 01:14:44.648209 containerd[1462]: time="2025-07-07T01:14:44.648084657Z" level=info msg="StartContainer for \"c9d80078d83d589f6f4b724d7434916f7f1beeb976d44dede43078dc26b851df\" returns successfully" Jul 7 01:14:44.912372 kubelet[2614]: I0707 01:14:44.912299 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-67d8445464-5nr6m" podStartSLOduration=42.796961716 podStartE2EDuration="49.912281862s" podCreationTimestamp="2025-07-07 01:13:55 +0000 UTC" firstStartedPulling="2025-07-07 01:14:37.326437672 +0000 UTC m=+65.944178102" lastFinishedPulling="2025-07-07 01:14:44.441757809 +0000 UTC m=+73.059498248" observedRunningTime="2025-07-07 01:14:44.818003323 +0000 UTC m=+73.435743752" watchObservedRunningTime="2025-07-07 01:14:44.912281862 +0000 UTC m=+73.530022301" Jul 7 01:14:44.976140 systemd-networkd[1370]: cali1a6a86b45ab: Gained IPv6LL Jul 7 01:14:46.645918 kubelet[2614]: I0707 01:14:46.644787 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 01:14:49.067064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount715650372.mount: Deactivated successfully. 
Jul 7 01:14:49.409990 containerd[1462]: time="2025-07-07T01:14:49.408804694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:49.412262 containerd[1462]: time="2025-07-07T01:14:49.412141262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 01:14:49.413083 containerd[1462]: time="2025-07-07T01:14:49.412960061Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:49.420320 containerd[1462]: time="2025-07-07T01:14:49.420202887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:49.423487 containerd[1462]: time="2025-07-07T01:14:49.423131679Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 4.979681619s" Jul 7 01:14:49.423487 containerd[1462]: time="2025-07-07T01:14:49.423240453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 01:14:49.430251 containerd[1462]: time="2025-07-07T01:14:49.430185410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 01:14:49.438648 containerd[1462]: time="2025-07-07T01:14:49.438573867Z" level=info msg="CreateContainer within sandbox \"8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 01:14:49.482175 containerd[1462]: time="2025-07-07T01:14:49.481928051Z" level=info msg="CreateContainer within sandbox \"8d4930ed18b70ffdab8850ac5a5554fd9a4b7e9363aa3def8b34eaabd1794c6a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"a25e72acee5e65d1de64a9ac8a7c25053893474bb496afcb7502b3f2710f0265\"" Jul 7 01:14:49.485218 containerd[1462]: time="2025-07-07T01:14:49.484586675Z" level=info msg="StartContainer for \"a25e72acee5e65d1de64a9ac8a7c25053893474bb496afcb7502b3f2710f0265\"" Jul 7 01:14:49.570047 systemd[1]: Started cri-containerd-a25e72acee5e65d1de64a9ac8a7c25053893474bb496afcb7502b3f2710f0265.scope - libcontainer container a25e72acee5e65d1de64a9ac8a7c25053893474bb496afcb7502b3f2710f0265. 
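Several of the surrounding entries time registry pulls ("Pulled image ... in 6.612040065s", "... in 4.979681619s"). A small parser for eyeballing those durations, assuming containerd's plain "in <seconds>s" suffix as seen in this log; Go durations over a minute print forms like "1m6.79s", which this pattern deliberately does not handle. The sample strings are abbreviated copies of the message bodies above:

```python
import re

pulls = [
    'Pulled image "ghcr.io/flatcar/calico/kube-controllers:v3.30.2" ... in 6.612040065s',
    'Pulled image "ghcr.io/flatcar/calico/whisker-backend:v3.30.2" ... in 4.979681619s',
]

PAT = re.compile(r'Pulled image "([^"]+)".* in ([0-9.]+)s')
for line in pulls:
    ref, secs = PAT.search(line).groups()
    print(f"{float(secs):8.3f}s  {ref}")
```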
Jul 7 01:14:49.640007 containerd[1462]: time="2025-07-07T01:14:49.639840699Z" level=info msg="StartContainer for \"a25e72acee5e65d1de64a9ac8a7c25053893474bb496afcb7502b3f2710f0265\" returns successfully" Jul 7 01:14:49.826362 kubelet[2614]: I0707 01:14:49.826039 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-9589d579b-t8m2k" podStartSLOduration=2.399168204 podStartE2EDuration="20.825929468s" podCreationTimestamp="2025-07-07 01:14:29 +0000 UTC" firstStartedPulling="2025-07-07 01:14:31.000686582 +0000 UTC m=+59.618427022" lastFinishedPulling="2025-07-07 01:14:49.427447797 +0000 UTC m=+78.045188286" observedRunningTime="2025-07-07 01:14:49.822559938 +0000 UTC m=+78.440300377" watchObservedRunningTime="2025-07-07 01:14:49.825929468 +0000 UTC m=+78.443669947" Jul 7 01:14:53.601016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4186132279.mount: Deactivated successfully. Jul 7 01:14:54.489417 containerd[1462]: time="2025-07-07T01:14:54.489352724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:54.491295 containerd[1462]: time="2025-07-07T01:14:54.491048648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 7 01:14:54.492840 containerd[1462]: time="2025-07-07T01:14:54.492510855Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:54.496084 containerd[1462]: time="2025-07-07T01:14:54.496034684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:54.496986 containerd[1462]: time="2025-07-07T01:14:54.496938861Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 5.066686847s" Jul 7 01:14:54.497068 containerd[1462]: time="2025-07-07T01:14:54.496986841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 7 01:14:54.501174 containerd[1462]: time="2025-07-07T01:14:54.500440168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 01:14:54.507737 containerd[1462]: time="2025-07-07T01:14:54.507441016Z" level=info msg="CreateContainer within sandbox \"c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 01:14:54.529722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount744553187.mount: Deactivated successfully. 
Jul 7 01:14:54.539276 containerd[1462]: time="2025-07-07T01:14:54.539230714Z" level=info msg="CreateContainer within sandbox \"c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"e425578ac9cf0084d90b7120b66e51e3517c94f75301d4a4b3369ee4d27fd03d\"" Jul 7 01:14:54.545879 containerd[1462]: time="2025-07-07T01:14:54.545387958Z" level=info msg="StartContainer for \"e425578ac9cf0084d90b7120b66e51e3517c94f75301d4a4b3369ee4d27fd03d\"" Jul 7 01:14:54.613042 systemd[1]: Started cri-containerd-e425578ac9cf0084d90b7120b66e51e3517c94f75301d4a4b3369ee4d27fd03d.scope - libcontainer container e425578ac9cf0084d90b7120b66e51e3517c94f75301d4a4b3369ee4d27fd03d. Jul 7 01:14:54.699262 containerd[1462]: time="2025-07-07T01:14:54.699209835Z" level=info msg="StartContainer for \"e425578ac9cf0084d90b7120b66e51e3517c94f75301d4a4b3369ee4d27fd03d\" returns successfully" Jul 7 01:14:54.952059 kubelet[2614]: I0707 01:14:54.951986 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-fn6sw" podStartSLOduration=45.713893386 podStartE2EDuration="1m0.951965456s" podCreationTimestamp="2025-07-07 01:13:54 +0000 UTC" firstStartedPulling="2025-07-07 01:14:39.260767311 +0000 UTC m=+67.878507750" lastFinishedPulling="2025-07-07 01:14:54.498839391 +0000 UTC m=+83.116579820" observedRunningTime="2025-07-07 01:14:54.951424089 +0000 UTC m=+83.569164548" watchObservedRunningTime="2025-07-07 01:14:54.951965456 +0000 UTC m=+83.569705885" Jul 7 01:14:55.942486 systemd[1]: run-containerd-runc-k8s.io-e425578ac9cf0084d90b7120b66e51e3517c94f75301d4a4b3369ee4d27fd03d-runc.Mif71r.mount: Deactivated successfully. Jul 7 01:14:57.909199 containerd[1462]: time="2025-07-07T01:14:57.908972178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:57.911383 containerd[1462]: time="2025-07-07T01:14:57.911020525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 7 01:14:57.913060 containerd[1462]: time="2025-07-07T01:14:57.913021363Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:57.918685 containerd[1462]: time="2025-07-07T01:14:57.918272274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:14:57.919567 containerd[1462]: time="2025-07-07T01:14:57.919511741Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 3.419030948s" Jul 7 01:14:57.919762 containerd[1462]: time="2025-07-07T01:14:57.919694565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 7 01:14:57.931020 containerd[1462]: time="2025-07-07T01:14:57.930960683Z" level=info msg="CreateContainer within sandbox \"0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 01:14:58.026013 containerd[1462]: time="2025-07-07T01:14:58.025808875Z" level=info msg="CreateContainer within sandbox \"0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a4964952a24dc4be56f5307c765e67df691b8ffbe0994d6fbd9aca67f82ecc3a\"" Jul 7 01:14:58.029691 containerd[1462]: time="2025-07-07T01:14:58.029632896Z" level=info msg="StartContainer for \"a4964952a24dc4be56f5307c765e67df691b8ffbe0994d6fbd9aca67f82ecc3a\"" Jul 7 01:14:58.114149 systemd[1]: Started cri-containerd-a4964952a24dc4be56f5307c765e67df691b8ffbe0994d6fbd9aca67f82ecc3a.scope - libcontainer container a4964952a24dc4be56f5307c765e67df691b8ffbe0994d6fbd9aca67f82ecc3a. Jul 7 01:14:58.181799 containerd[1462]: time="2025-07-07T01:14:58.181576185Z" level=info msg="StartContainer for \"a4964952a24dc4be56f5307c765e67df691b8ffbe0994d6fbd9aca67f82ecc3a\" returns successfully" Jul 7 01:14:58.184762 containerd[1462]: time="2025-07-07T01:14:58.184561561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 01:15:00.715938 containerd[1462]: time="2025-07-07T01:15:00.715420472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:15:00.717235 containerd[1462]: time="2025-07-07T01:15:00.717164697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 7 01:15:00.718055 containerd[1462]: time="2025-07-07T01:15:00.718016727Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:15:00.722263 containerd[1462]: time="2025-07-07T01:15:00.722212296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:15:00.723453 containerd[1462]: time="2025-07-07T01:15:00.723307102Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.538651293s" Jul 7 01:15:00.723453 containerd[1462]: time="2025-07-07T01:15:00.723356214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 7 01:15:00.738201 containerd[1462]: time="2025-07-07T01:15:00.738115910Z" level=info msg="CreateContainer within sandbox \"0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 01:15:00.765781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount112261686.mount: Deactivated successfully. 
Jul 7 01:15:00.769799 containerd[1462]: time="2025-07-07T01:15:00.769640588Z" level=info msg="CreateContainer within sandbox \"0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6ad87101ae843a242e8603d86872ee7a6457ec4bc1e63902d087ef73763641c3\"" Jul 7 01:15:00.771928 containerd[1462]: time="2025-07-07T01:15:00.770906805Z" level=info msg="StartContainer for \"6ad87101ae843a242e8603d86872ee7a6457ec4bc1e63902d087ef73763641c3\"" Jul 7 01:15:00.830060 systemd[1]: Started cri-containerd-6ad87101ae843a242e8603d86872ee7a6457ec4bc1e63902d087ef73763641c3.scope - libcontainer container 6ad87101ae843a242e8603d86872ee7a6457ec4bc1e63902d087ef73763641c3. Jul 7 01:15:00.884169 containerd[1462]: time="2025-07-07T01:15:00.883990946Z" level=info msg="StartContainer for \"6ad87101ae843a242e8603d86872ee7a6457ec4bc1e63902d087ef73763641c3\" returns successfully" Jul 7 01:15:00.932156 kubelet[2614]: I0707 01:15:00.932010 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bz688" podStartSLOduration=48.9597872 podStartE2EDuration="1m5.93192628s" podCreationTimestamp="2025-07-07 01:13:55 +0000 UTC" firstStartedPulling="2025-07-07 01:14:43.752518989 +0000 UTC m=+72.370259418" lastFinishedPulling="2025-07-07 01:15:00.724658069 +0000 UTC m=+89.342398498" observedRunningTime="2025-07-07 01:15:00.9295732 +0000 UTC m=+89.547313650" watchObservedRunningTime="2025-07-07 01:15:00.93192628 +0000 UTC m=+89.549666709" Jul 7 01:15:01.059304 kubelet[2614]: I0707 01:15:01.058374 2614 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 01:15:01.061760 kubelet[2614]: I0707 01:15:01.061389 2614 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 01:15:16.106195 systemd[1]: Started sshd@9-172.24.4.54:22-172.24.4.1:47296.service - OpenSSH per-connection server daemon (172.24.4.1:47296). Jul 7 01:15:17.261689 sshd[5720]: Accepted publickey for core from 172.24.4.1 port 47296 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI Jul 7 01:15:17.265187 sshd[5720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:15:17.274072 systemd-logind[1444]: New session 12 of user core. Jul 7 01:15:17.280000 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 01:15:18.861336 sshd[5720]: pam_unix(sshd:session): session closed for user core Jul 7 01:15:18.880990 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit. Jul 7 01:15:18.882819 systemd[1]: sshd@9-172.24.4.54:22-172.24.4.1:47296.service: Deactivated successfully. Jul 7 01:15:18.892212 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 01:15:18.901107 systemd-logind[1444]: Removed session 12. Jul 7 01:15:23.343252 systemd[1]: Started sshd@10-172.24.4.54:22-172.24.4.1:47308.service - OpenSSH per-connection server daemon (172.24.4.1:47308). Jul 7 01:15:24.667171 sshd[5734]: Accepted publickey for core from 172.24.4.1 port 47308 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI Jul 7 01:15:24.688655 sshd[5734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:15:24.704839 systemd-logind[1444]: New session 13 of user core. 
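The sshd/pam_unix/systemd-logind entries above bracket each SSH session: "Accepted publickey" and "session opened" on the way in, "session closed" and "Removed session" on the way out. A sketch computing a session's lifetime from that pair, using session 12's timestamps from this log (real log tooling would match open/close events on the sshd PID, [5720] here):

```python
from datetime import datetime

FMT = "%b %d %H:%M:%S.%f"
opened = datetime.strptime("Jul 07 01:15:17.265187", FMT)  # pam_unix: session opened
closed = datetime.strptime("Jul 07 01:15:18.861336", FMT)  # pam_unix: session closed
print("session 12 lasted", (closed - opened).total_seconds(), "s")  # ~1.596 s
```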
Jul 7 01:15:24.709080 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 01:15:25.473312 sshd[5734]: pam_unix(sshd:session): session closed for user core Jul 7 01:15:25.484717 systemd[1]: sshd@10-172.24.4.54:22-172.24.4.1:47308.service: Deactivated successfully. Jul 7 01:15:25.491183 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 01:15:25.496488 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit. Jul 7 01:15:25.500989 systemd-logind[1444]: Removed session 13. Jul 7 01:15:25.946037 systemd[1]: run-containerd-runc-k8s.io-e425578ac9cf0084d90b7120b66e51e3517c94f75301d4a4b3369ee4d27fd03d-runc.EpWfdT.mount: Deactivated successfully. Jul 7 01:15:30.936615 systemd[1]: Started sshd@11-172.24.4.54:22-172.24.4.1:48498.service - OpenSSH per-connection server daemon (172.24.4.1:48498). Jul 7 01:15:32.321527 sshd[5794]: Accepted publickey for core from 172.24.4.1 port 48498 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI Jul 7 01:15:32.326801 sshd[5794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:15:32.343192 systemd-logind[1444]: New session 14 of user core. Jul 7 01:15:32.352134 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 01:15:32.419213 containerd[1462]: time="2025-07-07T01:15:32.419139677Z" level=info msg="StopPodSandbox for \"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\"" Jul 7 01:15:32.643840 containerd[1462]: 2025-07-07 01:15:32.551 [WARNING][5808] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-goldmane--768f4c5c69--fn6sw-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b4af0965-443f-43ce-a1ac-716ddc78ed1f", ResourceVersion:"1263", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af", Pod:"goldmane-768f4c5c69-fn6sw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliddaab354699", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:15:32.643840 containerd[1462]: 2025-07-07 01:15:32.553 [INFO][5808] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" Jul 7 01:15:32.643840 containerd[1462]: 2025-07-07 01:15:32.553 [INFO][5808] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" iface="eth0" netns="" Jul 7 01:15:32.643840 containerd[1462]: 2025-07-07 01:15:32.553 [INFO][5808] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" Jul 7 01:15:32.643840 containerd[1462]: 2025-07-07 01:15:32.553 [INFO][5808] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" Jul 7 01:15:32.643840 containerd[1462]: 2025-07-07 01:15:32.623 [INFO][5815] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" HandleID="k8s-pod-network.199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-goldmane--768f4c5c69--fn6sw-eth0" Jul 7 01:15:32.643840 containerd[1462]: 2025-07-07 01:15:32.623 [INFO][5815] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:15:32.643840 containerd[1462]: 2025-07-07 01:15:32.623 [INFO][5815] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:15:32.643840 containerd[1462]: 2025-07-07 01:15:32.635 [WARNING][5815] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" HandleID="k8s-pod-network.199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-goldmane--768f4c5c69--fn6sw-eth0" Jul 7 01:15:32.643840 containerd[1462]: 2025-07-07 01:15:32.635 [INFO][5815] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" HandleID="k8s-pod-network.199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-goldmane--768f4c5c69--fn6sw-eth0" Jul 7 01:15:32.643840 containerd[1462]: 2025-07-07 01:15:32.637 [INFO][5815] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:15:32.643840 containerd[1462]: 2025-07-07 01:15:32.642 [INFO][5808] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" Jul 7 01:15:32.645311 containerd[1462]: time="2025-07-07T01:15:32.644630279Z" level=info msg="TearDown network for sandbox \"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\" successfully" Jul 7 01:15:32.645311 containerd[1462]: time="2025-07-07T01:15:32.644689541Z" level=info msg="StopPodSandbox for \"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\" returns successfully" Jul 7 01:15:32.645920 containerd[1462]: time="2025-07-07T01:15:32.645653219Z" level=info msg="RemovePodSandbox for \"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\"" Jul 7 01:15:32.645920 containerd[1462]: time="2025-07-07T01:15:32.645691380Z" level=info msg="Forcibly stopping sandbox \"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\"" Jul 7 01:15:32.756044 containerd[1462]: 2025-07-07 01:15:32.690 [WARNING][5829] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-goldmane--768f4c5c69--fn6sw-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b4af0965-443f-43ce-a1ac-716ddc78ed1f", ResourceVersion:"1263", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"c4f7dd730b08d1e2c17b2733f7477a538398bd74f6d51b81ddf0293e3cbb38af", Pod:"goldmane-768f4c5c69-fn6sw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliddaab354699", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:15:32.756044 containerd[1462]: 2025-07-07 01:15:32.691 [INFO][5829] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" Jul 7 01:15:32.756044 containerd[1462]: 2025-07-07 01:15:32.691 [INFO][5829] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" iface="eth0" netns="" Jul 7 01:15:32.756044 containerd[1462]: 2025-07-07 01:15:32.691 [INFO][5829] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" Jul 7 01:15:32.756044 containerd[1462]: 2025-07-07 01:15:32.691 [INFO][5829] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" Jul 7 01:15:32.756044 containerd[1462]: 2025-07-07 01:15:32.730 [INFO][5837] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" HandleID="k8s-pod-network.199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-goldmane--768f4c5c69--fn6sw-eth0" Jul 7 01:15:32.756044 containerd[1462]: 2025-07-07 01:15:32.731 [INFO][5837] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:15:32.756044 containerd[1462]: 2025-07-07 01:15:32.731 [INFO][5837] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:15:32.756044 containerd[1462]: 2025-07-07 01:15:32.744 [WARNING][5837] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" HandleID="k8s-pod-network.199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-goldmane--768f4c5c69--fn6sw-eth0" Jul 7 01:15:32.756044 containerd[1462]: 2025-07-07 01:15:32.744 [INFO][5837] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" HandleID="k8s-pod-network.199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-goldmane--768f4c5c69--fn6sw-eth0" Jul 7 01:15:32.756044 containerd[1462]: 2025-07-07 01:15:32.749 [INFO][5837] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:15:32.756044 containerd[1462]: 2025-07-07 01:15:32.753 [INFO][5829] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4" Jul 7 01:15:32.758488 containerd[1462]: time="2025-07-07T01:15:32.755995766Z" level=info msg="TearDown network for sandbox \"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\" successfully" Jul 7 01:15:32.773974 containerd[1462]: time="2025-07-07T01:15:32.773566381Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 01:15:32.773974 containerd[1462]: time="2025-07-07T01:15:32.773911959Z" level=info msg="RemovePodSandbox \"199fd83ded38a08722f7c481b9f6f6c19c08438faa56beca53a57abac96632d4\" returns successfully" Jul 7 01:15:32.776895 containerd[1462]: time="2025-07-07T01:15:32.776550130Z" level=info msg="StopPodSandbox for \"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\"" Jul 7 01:15:32.980000 containerd[1462]: 2025-07-07 01:15:32.866 [WARNING][5858] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--srqgn-eth0", GenerateName:"calico-apiserver-797f4f9b9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"2314dc80-e996-40d7-ac0d-8b41b48a019a", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797f4f9b9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e", Pod:"calico-apiserver-797f4f9b9c-srqgn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a784b29338", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:15:32.980000 containerd[1462]: 2025-07-07 01:15:32.867 [INFO][5858] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" Jul 7 01:15:32.980000 containerd[1462]: 2025-07-07 01:15:32.867 [INFO][5858] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" iface="eth0" netns="" Jul 7 01:15:32.980000 containerd[1462]: 2025-07-07 01:15:32.867 [INFO][5858] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" Jul 7 01:15:32.980000 containerd[1462]: 2025-07-07 01:15:32.867 [INFO][5858] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" Jul 7 01:15:32.980000 containerd[1462]: 2025-07-07 01:15:32.941 [INFO][5866] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" HandleID="k8s-pod-network.b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--srqgn-eth0" Jul 7 01:15:32.980000 containerd[1462]: 2025-07-07 01:15:32.942 [INFO][5866] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:15:32.980000 containerd[1462]: 2025-07-07 01:15:32.942 [INFO][5866] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:15:32.980000 containerd[1462]: 2025-07-07 01:15:32.966 [WARNING][5866] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" HandleID="k8s-pod-network.b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--srqgn-eth0" Jul 7 01:15:32.980000 containerd[1462]: 2025-07-07 01:15:32.966 [INFO][5866] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" HandleID="k8s-pod-network.b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--srqgn-eth0" Jul 7 01:15:32.980000 containerd[1462]: 2025-07-07 01:15:32.973 [INFO][5866] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:15:32.980000 containerd[1462]: 2025-07-07 01:15:32.975 [INFO][5858] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" Jul 7 01:15:32.982506 containerd[1462]: time="2025-07-07T01:15:32.981994151Z" level=info msg="TearDown network for sandbox \"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\" successfully" Jul 7 01:15:32.982506 containerd[1462]: time="2025-07-07T01:15:32.982033335Z" level=info msg="StopPodSandbox for \"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\" returns successfully" Jul 7 01:15:32.982848 containerd[1462]: time="2025-07-07T01:15:32.982612090Z" level=info msg="RemovePodSandbox for \"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\"" Jul 7 01:15:32.982848 containerd[1462]: time="2025-07-07T01:15:32.982685007Z" level=info msg="Forcibly stopping sandbox \"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\"" Jul 7 01:15:33.122526 sshd[5794]: pam_unix(sshd:session): session closed for user core Jul 7 01:15:33.133454 systemd[1]: sshd@11-172.24.4.54:22-172.24.4.1:48498.service: Deactivated successfully. Jul 7 01:15:33.139586 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 01:15:33.141542 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit. Jul 7 01:15:33.145333 systemd-logind[1444]: Removed session 14. Jul 7 01:15:33.148438 containerd[1462]: 2025-07-07 01:15:33.080 [WARNING][5880] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--srqgn-eth0", GenerateName:"calico-apiserver-797f4f9b9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"2314dc80-e996-40d7-ac0d-8b41b48a019a", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797f4f9b9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"2941bd4686cbf55d4e24a37ea8a52b341f9c01daf3b686aa258ed96495d0953e", Pod:"calico-apiserver-797f4f9b9c-srqgn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a784b29338", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:15:33.148438 containerd[1462]: 2025-07-07 01:15:33.080 [INFO][5880] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" Jul 7 01:15:33.148438 containerd[1462]: 2025-07-07 01:15:33.080 [INFO][5880] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" iface="eth0" netns="" Jul 7 01:15:33.148438 containerd[1462]: 2025-07-07 01:15:33.080 [INFO][5880] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" Jul 7 01:15:33.148438 containerd[1462]: 2025-07-07 01:15:33.080 [INFO][5880] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" Jul 7 01:15:33.148438 containerd[1462]: 2025-07-07 01:15:33.118 [INFO][5887] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" HandleID="k8s-pod-network.b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--srqgn-eth0" Jul 7 01:15:33.148438 containerd[1462]: 2025-07-07 01:15:33.119 [INFO][5887] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:15:33.148438 containerd[1462]: 2025-07-07 01:15:33.119 [INFO][5887] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:15:33.148438 containerd[1462]: 2025-07-07 01:15:33.137 [WARNING][5887] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" HandleID="k8s-pod-network.b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--srqgn-eth0" Jul 7 01:15:33.148438 containerd[1462]: 2025-07-07 01:15:33.137 [INFO][5887] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" HandleID="k8s-pod-network.b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--apiserver--797f4f9b9c--srqgn-eth0" Jul 7 01:15:33.148438 containerd[1462]: 2025-07-07 01:15:33.141 [INFO][5887] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:15:33.148438 containerd[1462]: 2025-07-07 01:15:33.146 [INFO][5880] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699" Jul 7 01:15:33.150256 containerd[1462]: time="2025-07-07T01:15:33.148497570Z" level=info msg="TearDown network for sandbox \"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\" successfully" Jul 7 01:15:33.213668 containerd[1462]: time="2025-07-07T01:15:33.213524464Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 01:15:33.214589 containerd[1462]: time="2025-07-07T01:15:33.213690956Z" level=info msg="RemovePodSandbox \"b089b2e446c0e8aecda868994dd8b26ec97ff03f4a0990778fdc09e18643c699\" returns successfully" Jul 7 01:15:33.216922 containerd[1462]: time="2025-07-07T01:15:33.216664206Z" level=info msg="StopPodSandbox for \"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\"" Jul 7 01:15:33.348372 containerd[1462]: 2025-07-07 01:15:33.281 [WARNING][5903] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--kube--controllers--67d8445464--5nr6m-eth0", GenerateName:"calico-kube-controllers-67d8445464-", Namespace:"calico-system", SelfLink:"", UID:"01d5654a-06ca-4bae-ada4-ae75fded948d", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67d8445464", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e", Pod:"calico-kube-controllers-67d8445464-5nr6m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidc97365cd2f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:15:33.348372 containerd[1462]: 2025-07-07 01:15:33.282 [INFO][5903] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" Jul 7 01:15:33.348372 containerd[1462]: 2025-07-07 01:15:33.282 [INFO][5903] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" iface="eth0" netns="" Jul 7 01:15:33.348372 containerd[1462]: 2025-07-07 01:15:33.282 [INFO][5903] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" Jul 7 01:15:33.348372 containerd[1462]: 2025-07-07 01:15:33.282 [INFO][5903] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" Jul 7 01:15:33.348372 containerd[1462]: 2025-07-07 01:15:33.328 [INFO][5910] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" HandleID="k8s-pod-network.1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--kube--controllers--67d8445464--5nr6m-eth0" Jul 7 01:15:33.348372 containerd[1462]: 2025-07-07 01:15:33.328 [INFO][5910] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:15:33.348372 containerd[1462]: 2025-07-07 01:15:33.328 [INFO][5910] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:15:33.348372 containerd[1462]: 2025-07-07 01:15:33.340 [WARNING][5910] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" HandleID="k8s-pod-network.1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--kube--controllers--67d8445464--5nr6m-eth0" Jul 7 01:15:33.348372 containerd[1462]: 2025-07-07 01:15:33.340 [INFO][5910] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" HandleID="k8s-pod-network.1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--kube--controllers--67d8445464--5nr6m-eth0" Jul 7 01:15:33.348372 containerd[1462]: 2025-07-07 01:15:33.342 [INFO][5910] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:15:33.348372 containerd[1462]: 2025-07-07 01:15:33.344 [INFO][5903] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" Jul 7 01:15:33.348372 containerd[1462]: time="2025-07-07T01:15:33.346711098Z" level=info msg="TearDown network for sandbox \"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\" successfully" Jul 7 01:15:33.348372 containerd[1462]: time="2025-07-07T01:15:33.346744351Z" level=info msg="StopPodSandbox for \"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\" returns successfully" Jul 7 01:15:33.350486 containerd[1462]: time="2025-07-07T01:15:33.349625839Z" level=info msg="RemovePodSandbox for \"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\"" Jul 7 01:15:33.350486 containerd[1462]: time="2025-07-07T01:15:33.349669080Z" level=info msg="Forcibly stopping sandbox \"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\"" Jul 7 01:15:33.480063 containerd[1462]: 2025-07-07 01:15:33.408 [WARNING][5924] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--kube--controllers--67d8445464--5nr6m-eth0", GenerateName:"calico-kube-controllers-67d8445464-", Namespace:"calico-system", SelfLink:"", UID:"01d5654a-06ca-4bae-ada4-ae75fded948d", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67d8445464", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"d7f1a371bddf791c2526bf5b4658131eff8028ccda4d3406d8a162f97016150e", Pod:"calico-kube-controllers-67d8445464-5nr6m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidc97365cd2f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:15:33.480063 containerd[1462]: 2025-07-07 01:15:33.409 [INFO][5924] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" Jul 7 01:15:33.480063 containerd[1462]: 2025-07-07 01:15:33.409 [INFO][5924] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" iface="eth0" netns="" Jul 7 01:15:33.480063 containerd[1462]: 2025-07-07 01:15:33.409 [INFO][5924] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" Jul 7 01:15:33.480063 containerd[1462]: 2025-07-07 01:15:33.409 [INFO][5924] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" Jul 7 01:15:33.480063 containerd[1462]: 2025-07-07 01:15:33.448 [INFO][5932] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" HandleID="k8s-pod-network.1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--kube--controllers--67d8445464--5nr6m-eth0" Jul 7 01:15:33.480063 containerd[1462]: 2025-07-07 01:15:33.448 [INFO][5932] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:15:33.480063 containerd[1462]: 2025-07-07 01:15:33.450 [INFO][5932] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:15:33.480063 containerd[1462]: 2025-07-07 01:15:33.464 [WARNING][5932] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" HandleID="k8s-pod-network.1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--kube--controllers--67d8445464--5nr6m-eth0" Jul 7 01:15:33.480063 containerd[1462]: 2025-07-07 01:15:33.467 [INFO][5932] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" HandleID="k8s-pod-network.1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-calico--kube--controllers--67d8445464--5nr6m-eth0" Jul 7 01:15:33.480063 containerd[1462]: 2025-07-07 01:15:33.474 [INFO][5932] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:15:33.480063 containerd[1462]: 2025-07-07 01:15:33.477 [INFO][5924] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209" Jul 7 01:15:33.480063 containerd[1462]: time="2025-07-07T01:15:33.479740138Z" level=info msg="TearDown network for sandbox \"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\" successfully" Jul 7 01:15:33.486602 containerd[1462]: time="2025-07-07T01:15:33.486536092Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 01:15:33.487110 containerd[1462]: time="2025-07-07T01:15:33.486643924Z" level=info msg="RemovePodSandbox \"1e2ab3e360fd56cd9c5cc42ec7670dc5bff1f715ca949591c73e7f817c926209\" returns successfully" Jul 7 01:15:33.487994 containerd[1462]: time="2025-07-07T01:15:33.487919447Z" level=info msg="StopPodSandbox for \"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\"" Jul 7 01:15:33.595346 containerd[1462]: 2025-07-07 01:15:33.545 [WARNING][5947] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--br5th-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"444d0803-585d-498b-a49e-969f9bbea4fc", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9", Pod:"coredns-674b8bbfcf-br5th", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califbb9adba274", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:15:33.595346 containerd[1462]: 2025-07-07 01:15:33.546 [INFO][5947] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" Jul 7 01:15:33.595346 containerd[1462]: 2025-07-07 01:15:33.546 [INFO][5947] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" iface="eth0" netns="" Jul 7 01:15:33.595346 containerd[1462]: 2025-07-07 01:15:33.546 [INFO][5947] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" Jul 7 01:15:33.595346 containerd[1462]: 2025-07-07 01:15:33.546 [INFO][5947] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" Jul 7 01:15:33.595346 containerd[1462]: 2025-07-07 01:15:33.578 [INFO][5954] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" HandleID="k8s-pod-network.8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--br5th-eth0" Jul 7 01:15:33.595346 containerd[1462]: 2025-07-07 01:15:33.578 [INFO][5954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:15:33.595346 containerd[1462]: 2025-07-07 01:15:33.579 [INFO][5954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 01:15:33.595346 containerd[1462]: 2025-07-07 01:15:33.589 [WARNING][5954] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" HandleID="k8s-pod-network.8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--br5th-eth0" Jul 7 01:15:33.595346 containerd[1462]: 2025-07-07 01:15:33.589 [INFO][5954] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" HandleID="k8s-pod-network.8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--br5th-eth0" Jul 7 01:15:33.595346 containerd[1462]: 2025-07-07 01:15:33.591 [INFO][5954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:15:33.595346 containerd[1462]: 2025-07-07 01:15:33.593 [INFO][5947] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" Jul 7 01:15:33.597600 containerd[1462]: time="2025-07-07T01:15:33.595400291Z" level=info msg="TearDown network for sandbox \"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\" successfully" Jul 7 01:15:33.597600 containerd[1462]: time="2025-07-07T01:15:33.595428424Z" level=info msg="StopPodSandbox for \"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\" returns successfully" Jul 7 01:15:33.597600 containerd[1462]: time="2025-07-07T01:15:33.597357495Z" level=info msg="RemovePodSandbox for \"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\"" Jul 7 01:15:33.597600 containerd[1462]: time="2025-07-07T01:15:33.597397851Z" level=info msg="Forcibly stopping sandbox \"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\"" Jul 7 01:15:33.713183 containerd[1462]: 2025-07-07 01:15:33.664 [WARNING][5969] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--br5th-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"444d0803-585d-498b-a49e-969f9bbea4fc", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"37f5d63476a3820c9d51fc80f1791fc4b303dd89bd5895034f40c3f1c2abcfc9", Pod:"coredns-674b8bbfcf-br5th", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califbb9adba274", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:15:33.713183 containerd[1462]: 2025-07-07 01:15:33.664 [INFO][5969] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" Jul 7 01:15:33.713183 containerd[1462]: 2025-07-07 01:15:33.664 [INFO][5969] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" iface="eth0" netns="" Jul 7 01:15:33.713183 containerd[1462]: 2025-07-07 01:15:33.664 [INFO][5969] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" Jul 7 01:15:33.713183 containerd[1462]: 2025-07-07 01:15:33.664 [INFO][5969] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" Jul 7 01:15:33.713183 containerd[1462]: 2025-07-07 01:15:33.698 [INFO][5976] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" HandleID="k8s-pod-network.8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--br5th-eth0" Jul 7 01:15:33.713183 containerd[1462]: 2025-07-07 01:15:33.698 [INFO][5976] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:15:33.713183 containerd[1462]: 2025-07-07 01:15:33.698 [INFO][5976] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 01:15:33.713183 containerd[1462]: 2025-07-07 01:15:33.707 [WARNING][5976] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" HandleID="k8s-pod-network.8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--br5th-eth0" Jul 7 01:15:33.713183 containerd[1462]: 2025-07-07 01:15:33.707 [INFO][5976] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" HandleID="k8s-pod-network.8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-coredns--674b8bbfcf--br5th-eth0" Jul 7 01:15:33.713183 containerd[1462]: 2025-07-07 01:15:33.709 [INFO][5976] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:15:33.713183 containerd[1462]: 2025-07-07 01:15:33.710 [INFO][5969] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318" Jul 7 01:15:33.713183 containerd[1462]: time="2025-07-07T01:15:33.712057236Z" level=info msg="TearDown network for sandbox \"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\" successfully" Jul 7 01:15:33.723459 containerd[1462]: time="2025-07-07T01:15:33.723396646Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 01:15:33.723878 containerd[1462]: time="2025-07-07T01:15:33.723826684Z" level=info msg="RemovePodSandbox \"8721b729fe3137ba4f571b94c335ce5b4b3eb05a3ff40f1f9fcb4b428abf1318\" returns successfully" Jul 7 01:15:33.725161 containerd[1462]: time="2025-07-07T01:15:33.725113278Z" level=info msg="StopPodSandbox for \"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\"" Jul 7 01:15:33.829182 containerd[1462]: 2025-07-07 01:15:33.782 [WARNING][5990] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-csi--node--driver--bz688-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9b5c20f3-010e-455a-af88-ed3ca60a5bc4", ResourceVersion:"1152", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30", Pod:"csi-node-driver-bz688", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a6a86b45ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:15:33.829182 containerd[1462]: 2025-07-07 01:15:33.782 [INFO][5990] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" Jul 7 01:15:33.829182 containerd[1462]: 2025-07-07 01:15:33.782 [INFO][5990] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" iface="eth0" netns="" Jul 7 01:15:33.829182 containerd[1462]: 2025-07-07 01:15:33.782 [INFO][5990] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" Jul 7 01:15:33.829182 containerd[1462]: 2025-07-07 01:15:33.782 [INFO][5990] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" Jul 7 01:15:33.829182 containerd[1462]: 2025-07-07 01:15:33.810 [INFO][5998] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" HandleID="k8s-pod-network.60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-csi--node--driver--bz688-eth0" Jul 7 01:15:33.829182 containerd[1462]: 2025-07-07 01:15:33.811 [INFO][5998] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:15:33.829182 containerd[1462]: 2025-07-07 01:15:33.811 [INFO][5998] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:15:33.829182 containerd[1462]: 2025-07-07 01:15:33.821 [WARNING][5998] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" HandleID="k8s-pod-network.60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-csi--node--driver--bz688-eth0" Jul 7 01:15:33.829182 containerd[1462]: 2025-07-07 01:15:33.821 [INFO][5998] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" HandleID="k8s-pod-network.60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-csi--node--driver--bz688-eth0" Jul 7 01:15:33.829182 containerd[1462]: 2025-07-07 01:15:33.823 [INFO][5998] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:15:33.829182 containerd[1462]: 2025-07-07 01:15:33.826 [INFO][5990] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" Jul 7 01:15:33.829182 containerd[1462]: time="2025-07-07T01:15:33.829097981Z" level=info msg="TearDown network for sandbox \"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\" successfully" Jul 7 01:15:33.830251 containerd[1462]: time="2025-07-07T01:15:33.830046000Z" level=info msg="StopPodSandbox for \"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\" returns successfully" Jul 7 01:15:33.830910 containerd[1462]: time="2025-07-07T01:15:33.830801007Z" level=info msg="RemovePodSandbox for \"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\"" Jul 7 01:15:33.830910 containerd[1462]: time="2025-07-07T01:15:33.830839359Z" level=info msg="Forcibly stopping sandbox \"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\"" Jul 7 01:15:33.998031 containerd[1462]: 2025-07-07 01:15:33.893 [WARNING][6012] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--0--2961e92ed0.novalocal-k8s-csi--node--driver--bz688-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9b5c20f3-010e-455a-af88-ed3ca60a5bc4", ResourceVersion:"1152", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 13, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-0-2961e92ed0.novalocal", ContainerID:"0c0dfb9c149a48a1fc060efcccd8cc3b2c0ebff80a2ab6a1648852d75a34df30", Pod:"csi-node-driver-bz688", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a6a86b45ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:15:33.998031 containerd[1462]: 2025-07-07 01:15:33.899 [INFO][6012] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" Jul 7 01:15:33.998031 containerd[1462]: 2025-07-07 01:15:33.899 [INFO][6012] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" iface="eth0" netns="" Jul 7 01:15:33.998031 containerd[1462]: 2025-07-07 01:15:33.899 [INFO][6012] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" Jul 7 01:15:33.998031 containerd[1462]: 2025-07-07 01:15:33.899 [INFO][6012] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" Jul 7 01:15:33.998031 containerd[1462]: 2025-07-07 01:15:33.947 [INFO][6019] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" HandleID="k8s-pod-network.60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-csi--node--driver--bz688-eth0" Jul 7 01:15:33.998031 containerd[1462]: 2025-07-07 01:15:33.950 [INFO][6019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:15:33.998031 containerd[1462]: 2025-07-07 01:15:33.950 [INFO][6019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:15:33.998031 containerd[1462]: 2025-07-07 01:15:33.988 [WARNING][6019] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" HandleID="k8s-pod-network.60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-csi--node--driver--bz688-eth0" Jul 7 01:15:33.998031 containerd[1462]: 2025-07-07 01:15:33.988 [INFO][6019] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" HandleID="k8s-pod-network.60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" Workload="ci--4081--3--4--0--2961e92ed0.novalocal-k8s-csi--node--driver--bz688-eth0" Jul 7 01:15:33.998031 containerd[1462]: 2025-07-07 01:15:33.991 [INFO][6019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:15:33.998031 containerd[1462]: 2025-07-07 01:15:33.993 [INFO][6012] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2" Jul 7 01:15:33.998031 containerd[1462]: time="2025-07-07T01:15:33.997325125Z" level=info msg="TearDown network for sandbox \"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\" successfully" Jul 7 01:15:34.006911 containerd[1462]: time="2025-07-07T01:15:34.006829561Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 01:15:34.007085 containerd[1462]: time="2025-07-07T01:15:34.007045086Z" level=info msg="RemovePodSandbox \"60fc867e735eb855cdb0c7344f58ca9eb462cb80bc8f194653b6274069715aa2\" returns successfully" Jul 7 01:15:38.149284 systemd[1]: Started sshd@12-172.24.4.54:22-172.24.4.1:34530.service - OpenSSH per-connection server daemon (172.24.4.1:34530). Jul 7 01:15:39.749904 sshd[6029]: Accepted publickey for core from 172.24.4.1 port 34530 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI Jul 7 01:15:39.751713 sshd[6029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:15:39.761948 systemd-logind[1444]: New session 15 of user core. Jul 7 01:15:39.765045 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 01:15:54.355735 sshd[6029]: pam_unix(sshd:session): session closed for user core Jul 7 01:15:55.139817 systemd[1]: Started sshd@13-172.24.4.54:22-172.24.4.1:42648.service - OpenSSH per-connection server daemon (172.24.4.1:42648). Jul 7 01:15:55.156290 systemd[1]: sshd@12-172.24.4.54:22-172.24.4.1:34530.service: Deactivated successfully. Jul 7 01:15:55.158789 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 01:15:55.176227 systemd[1]: cri-containerd-4716a5a8f00dbfd805396ce72004933f6bdaf01e5f0a705acc7e6b1bf1613d66.scope: Deactivated successfully. Jul 7 01:15:55.176626 systemd[1]: cri-containerd-4716a5a8f00dbfd805396ce72004933f6bdaf01e5f0a705acc7e6b1bf1613d66.scope: Consumed 5.027s CPU time, 17.6M memory peak, 0B memory swap peak. Jul 7 01:15:55.188057 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit. Jul 7 01:15:55.201140 systemd-logind[1444]: Removed session 15. Jul 7 01:15:55.255594 systemd[1]: cri-containerd-64d31f80eee991e48fdc3eecacd6f63cf24321d7d89b9bddc653497cb45e3b11.scope: Deactivated successfully. Jul 7 01:15:55.255933 systemd[1]: cri-containerd-64d31f80eee991e48fdc3eecacd6f63cf24321d7d89b9bddc653497cb45e3b11.scope: Consumed 14.812s CPU time. 
Jul 7 01:15:56.505387 kubelet[2614]: E0707 01:15:56.505281 2614 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="14.89s" Jul 7 01:15:56.538784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4716a5a8f00dbfd805396ce72004933f6bdaf01e5f0a705acc7e6b1bf1613d66-rootfs.mount: Deactivated successfully. Jul 7 01:15:56.542457 containerd[1462]: time="2025-07-07T01:15:56.541994247Z" level=info msg="shim disconnected" id=4716a5a8f00dbfd805396ce72004933f6bdaf01e5f0a705acc7e6b1bf1613d66 namespace=k8s.io Jul 7 01:15:56.542457 containerd[1462]: time="2025-07-07T01:15:56.542295261Z" level=warning msg="cleaning up after shim disconnected" id=4716a5a8f00dbfd805396ce72004933f6bdaf01e5f0a705acc7e6b1bf1613d66 namespace=k8s.io Jul 7 01:15:56.542457 containerd[1462]: time="2025-07-07T01:15:56.542351386Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 01:15:56.563122 containerd[1462]: time="2025-07-07T01:15:56.563013665Z" level=info msg="shim disconnected" id=64d31f80eee991e48fdc3eecacd6f63cf24321d7d89b9bddc653497cb45e3b11 namespace=k8s.io Jul 7 01:15:56.563122 containerd[1462]: time="2025-07-07T01:15:56.563108424Z" level=warning msg="cleaning up after shim disconnected" id=64d31f80eee991e48fdc3eecacd6f63cf24321d7d89b9bddc653497cb45e3b11 namespace=k8s.io Jul 7 01:15:56.563122 containerd[1462]: time="2025-07-07T01:15:56.563122731Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 01:15:56.567703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64d31f80eee991e48fdc3eecacd6f63cf24321d7d89b9bddc653497cb45e3b11-rootfs.mount: Deactivated successfully. Jul 7 01:15:56.636557 systemd[1]: run-containerd-runc-k8s.io-e425578ac9cf0084d90b7120b66e51e3517c94f75301d4a4b3369ee4d27fd03d-runc.aHL7d6.mount: Deactivated successfully. Jul 7 01:15:57.157850 systemd[1]: cri-containerd-9e1fe554713a66fa64eab223babd63af6934ee994d90d3a4bb155c9c83eb59a7.scope: Deactivated successfully. Jul 7 01:15:57.159215 systemd[1]: cri-containerd-9e1fe554713a66fa64eab223babd63af6934ee994d90d3a4bb155c9c83eb59a7.scope: Consumed 3.931s CPU time, 15.6M memory peak, 0B memory swap peak. Jul 7 01:15:57.226102 containerd[1462]: time="2025-07-07T01:15:57.225979895Z" level=info msg="shim disconnected" id=9e1fe554713a66fa64eab223babd63af6934ee994d90d3a4bb155c9c83eb59a7 namespace=k8s.io Jul 7 01:15:57.226102 containerd[1462]: time="2025-07-07T01:15:57.226056329Z" level=warning msg="cleaning up after shim disconnected" id=9e1fe554713a66fa64eab223babd63af6934ee994d90d3a4bb155c9c83eb59a7 namespace=k8s.io Jul 7 01:15:57.226102 containerd[1462]: time="2025-07-07T01:15:57.226076166Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 01:15:57.381784 containerd[1462]: time="2025-07-07T01:15:57.248880154Z" level=warning msg="cleanup warnings time=\"2025-07-07T01:15:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 7 01:15:57.528118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e1fe554713a66fa64eab223babd63af6934ee994d90d3a4bb155c9c83eb59a7-rootfs.mount: Deactivated successfully. 
Jul 7 01:15:58.517680 kubelet[2614]: I0707 01:15:58.517620 2614 scope.go:117] "RemoveContainer" containerID="4716a5a8f00dbfd805396ce72004933f6bdaf01e5f0a705acc7e6b1bf1613d66" Jul 7 01:15:58.528935 kubelet[2614]: I0707 01:15:58.525185 2614 scope.go:117] "RemoveContainer" containerID="9e1fe554713a66fa64eab223babd63af6934ee994d90d3a4bb155c9c83eb59a7" Jul 7 01:15:58.534753 containerd[1462]: time="2025-07-07T01:15:58.534493879Z" level=info msg="CreateContainer within sandbox \"ff6b6ca612ce33f2a08e7b5068edaac240189dad7e2a45409c986d9375567d84\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 7 01:15:58.548476 containerd[1462]: time="2025-07-07T01:15:58.543368862Z" level=info msg="CreateContainer within sandbox \"cee74cdb31d23b8aef020cd66531d5fe943a2ed547a78bf119250ecbd5cab3fb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 7 01:15:58.548631 kubelet[2614]: I0707 01:15:58.537261 2614 scope.go:117] "RemoveContainer" containerID="64d31f80eee991e48fdc3eecacd6f63cf24321d7d89b9bddc653497cb45e3b11" Jul 7 01:15:58.563495 containerd[1462]: time="2025-07-07T01:15:58.563394025Z" level=info msg="CreateContainer within sandbox \"0e854a31e5a1d7ff428680197323e8a22812efa4ad22f9760f2d0a68f599989b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jul 7 01:15:58.639292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1378889799.mount: Deactivated successfully. Jul 7 01:15:58.679221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount617262191.mount: Deactivated successfully. Jul 7 01:15:58.721891 containerd[1462]: time="2025-07-07T01:15:58.721797469Z" level=info msg="CreateContainer within sandbox \"cee74cdb31d23b8aef020cd66531d5fe943a2ed547a78bf119250ecbd5cab3fb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9e1368b6b06e0949072d5bcb8d5a6c8c4fd79396623db0e14b4b5e6277f87b13\"" Jul 7 01:15:58.723692 containerd[1462]: time="2025-07-07T01:15:58.723270292Z" level=info msg="StartContainer for \"9e1368b6b06e0949072d5bcb8d5a6c8c4fd79396623db0e14b4b5e6277f87b13\"" Jul 7 01:15:58.761116 systemd[1]: Started cri-containerd-9e1368b6b06e0949072d5bcb8d5a6c8c4fd79396623db0e14b4b5e6277f87b13.scope - libcontainer container 9e1368b6b06e0949072d5bcb8d5a6c8c4fd79396623db0e14b4b5e6277f87b13. 
Jul 7 01:15:58.774236 containerd[1462]: time="2025-07-07T01:15:58.772693716Z" level=info msg="CreateContainer within sandbox \"0e854a31e5a1d7ff428680197323e8a22812efa4ad22f9760f2d0a68f599989b\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"0c1a29241b8c5481d4948696070708a0922bafe326ead3c6babfbe7ba64d7e1e\"" Jul 7 01:15:58.776271 containerd[1462]: time="2025-07-07T01:15:58.775481647Z" level=info msg="StartContainer for \"0c1a29241b8c5481d4948696070708a0922bafe326ead3c6babfbe7ba64d7e1e\"" Jul 7 01:15:58.815745 containerd[1462]: time="2025-07-07T01:15:58.815698345Z" level=info msg="CreateContainer within sandbox \"ff6b6ca612ce33f2a08e7b5068edaac240189dad7e2a45409c986d9375567d84\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"45dd9d38202d6cf8a9dce26615c50bb6036988bad0e05ee28636e3d739f8d834\"" Jul 7 01:15:58.818446 containerd[1462]: time="2025-07-07T01:15:58.818372664Z" level=info msg="StartContainer for \"45dd9d38202d6cf8a9dce26615c50bb6036988bad0e05ee28636e3d739f8d834\"" Jul 7 01:15:58.827115 systemd[1]: Started cri-containerd-0c1a29241b8c5481d4948696070708a0922bafe326ead3c6babfbe7ba64d7e1e.scope - libcontainer container 0c1a29241b8c5481d4948696070708a0922bafe326ead3c6babfbe7ba64d7e1e. Jul 7 01:15:58.876674 containerd[1462]: time="2025-07-07T01:15:58.874617616Z" level=info msg="StartContainer for \"9e1368b6b06e0949072d5bcb8d5a6c8c4fd79396623db0e14b4b5e6277f87b13\" returns successfully" Jul 7 01:15:58.884121 systemd[1]: Started cri-containerd-45dd9d38202d6cf8a9dce26615c50bb6036988bad0e05ee28636e3d739f8d834.scope - libcontainer container 45dd9d38202d6cf8a9dce26615c50bb6036988bad0e05ee28636e3d739f8d834. Jul 7 01:15:58.920652 containerd[1462]: time="2025-07-07T01:15:58.920598240Z" level=info msg="StartContainer for \"0c1a29241b8c5481d4948696070708a0922bafe326ead3c6babfbe7ba64d7e1e\" returns successfully" Jul 7 01:15:58.926953 sshd[6041]: Accepted publickey for core from 172.24.4.1 port 42648 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI Jul 7 01:15:58.927298 sshd[6041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:15:58.937254 systemd-logind[1444]: New session 16 of user core. Jul 7 01:15:58.942406 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 01:15:58.994548 containerd[1462]: time="2025-07-07T01:15:58.994484774Z" level=info msg="StartContainer for \"45dd9d38202d6cf8a9dce26615c50bb6036988bad0e05ee28636e3d739f8d834\" returns successfully" Jul 7 01:15:59.922807 sshd[6041]: pam_unix(sshd:session): session closed for user core Jul 7 01:15:59.931507 systemd[1]: sshd@13-172.24.4.54:22-172.24.4.1:42648.service: Deactivated successfully. Jul 7 01:15:59.934942 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 01:15:59.938277 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit. Jul 7 01:15:59.944962 systemd[1]: Started sshd@14-172.24.4.54:22-172.24.4.1:42660.service - OpenSSH per-connection server daemon (172.24.4.1:42660). Jul 7 01:15:59.947029 systemd-logind[1444]: Removed session 16. Jul 7 01:16:01.559643 sshd[6321]: Accepted publickey for core from 172.24.4.1 port 42660 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI Jul 7 01:16:01.567827 sshd[6321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:16:01.584039 systemd-logind[1444]: New session 17 of user core. Jul 7 01:16:01.594045 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jul 7 01:16:02.488847 sshd[6321]: pam_unix(sshd:session): session closed for user core Jul 7 01:16:02.502968 systemd[1]: sshd@14-172.24.4.54:22-172.24.4.1:42660.service: Deactivated successfully. Jul 7 01:16:02.507717 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 01:16:02.517716 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit. Jul 7 01:16:02.531191 systemd[1]: Started sshd@15-172.24.4.54:22-172.24.4.1:42672.service - OpenSSH per-connection server daemon (172.24.4.1:42672). Jul 7 01:16:02.538610 systemd-logind[1444]: Removed session 17. Jul 7 01:16:09.652397 systemd[1]: cri-containerd-0c1a29241b8c5481d4948696070708a0922bafe326ead3c6babfbe7ba64d7e1e.scope: Deactivated successfully. Jul 7 01:16:21.586196 kubelet[2614]: E0707 01:16:15.898172 2614 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal.184fd32c2f299e37 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal,UID:f1b00cfc6c85dfe639649a5e83ae72a3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-0-2961e92ed0.novalocal,},FirstTimestamp:2025-07-07 01:16:06.793690679 +0000 UTC m=+155.411431119,LastTimestamp:2025-07-07 01:16:06.793690679 +0000 UTC m=+155.411431119,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-0-2961e92ed0.novalocal,}" Jul 7 01:16:21.586196 kubelet[2614]: E0707 01:16:21.560024 2614 controller.go:195] "Failed to update lease" err="etcdserver: request timed out" Jul 7 01:16:21.586196 kubelet[2614]: E0707 01:16:21.571333 2614 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.953s" Jul 7 01:16:21.589549 containerd[1462]: time="2025-07-07T01:16:21.538237100Z" level=error msg="failed to handle container TaskExit event container_id:\"0c1a29241b8c5481d4948696070708a0922bafe326ead3c6babfbe7ba64d7e1e\" id:\"0c1a29241b8c5481d4948696070708a0922bafe326ead3c6babfbe7ba64d7e1e\" pid:6229 exit_status:1 exited_at:{seconds:1751850969 nanos:657887125}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown" Jul 7 01:16:10.726265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c1a29241b8c5481d4948696070708a0922bafe326ead3c6babfbe7ba64d7e1e-rootfs.mount: Deactivated successfully. Jul 7 01:16:10.796688 systemd[1]: run-containerd-runc-k8s.io-e425578ac9cf0084d90b7120b66e51e3517c94f75301d4a4b3369ee4d27fd03d-runc.12Qhe7.mount: Deactivated successfully. Jul 7 01:16:18.932225 systemd[1]: cri-containerd-9e1368b6b06e0949072d5bcb8d5a6c8c4fd79396623db0e14b4b5e6277f87b13.scope: Deactivated successfully. Jul 7 01:16:18.934552 systemd[1]: cri-containerd-9e1368b6b06e0949072d5bcb8d5a6c8c4fd79396623db0e14b4b5e6277f87b13.scope: Consumed 2.490s CPU time. 
Jul 7 01:16:21.611368 kubelet[2614]: E0707 01:16:21.610170 2614 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-4081-3-4-0-2961e92ed0.novalocal\": the object has been modified; please apply your changes to the latest version and try again" Jul 7 01:16:21.655606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e1368b6b06e0949072d5bcb8d5a6c8c4fd79396623db0e14b4b5e6277f87b13-rootfs.mount: Deactivated successfully. Jul 7 01:16:22.411200 containerd[1462]: time="2025-07-07T01:16:22.410813133Z" level=error msg="ttrpc: received message on inactive stream" stream=27 Jul 7 01:16:22.413420 containerd[1462]: time="2025-07-07T01:16:22.412820669Z" level=info msg="shim disconnected" id=9e1368b6b06e0949072d5bcb8d5a6c8c4fd79396623db0e14b4b5e6277f87b13 namespace=k8s.io Jul 7 01:16:22.413420 containerd[1462]: time="2025-07-07T01:16:22.413165297Z" level=warning msg="cleaning up after shim disconnected" id=9e1368b6b06e0949072d5bcb8d5a6c8c4fd79396623db0e14b4b5e6277f87b13 namespace=k8s.io Jul 7 01:16:22.413420 containerd[1462]: time="2025-07-07T01:16:22.413352347Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 01:16:22.853584 kubelet[2614]: I0707 01:16:22.852831 2614 scope.go:117] "RemoveContainer" containerID="9e1fe554713a66fa64eab223babd63af6934ee994d90d3a4bb155c9c83eb59a7" Jul 7 01:16:22.854609 kubelet[2614]: I0707 01:16:22.854365 2614 scope.go:117] "RemoveContainer" containerID="9e1368b6b06e0949072d5bcb8d5a6c8c4fd79396623db0e14b4b5e6277f87b13" Jul 7 01:16:22.856991 kubelet[2614]: E0707 01:16:22.854814 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4081-3-4-0-2961e92ed0.novalocal_kube-system(502e76a70bd5eb6e3bcf0fcb81811131)\"" pod="kube-system/kube-scheduler-ci-4081-3-4-0-2961e92ed0.novalocal" podUID="502e76a70bd5eb6e3bcf0fcb81811131" Jul 7 01:16:22.869346 containerd[1462]: time="2025-07-07T01:16:22.867846840Z" level=info msg="RemoveContainer for \"9e1fe554713a66fa64eab223babd63af6934ee994d90d3a4bb155c9c83eb59a7\"" Jul 7 01:16:22.918056 containerd[1462]: time="2025-07-07T01:16:22.917920677Z" level=info msg="RemoveContainer for \"9e1fe554713a66fa64eab223babd63af6934ee994d90d3a4bb155c9c83eb59a7\" returns successfully" Jul 7 01:16:23.210403 sshd[6341]: Accepted publickey for core from 172.24.4.1 port 42672 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI Jul 7 01:16:23.215552 sshd[6341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:16:23.236307 systemd-logind[1444]: New session 18 of user core. Jul 7 01:16:23.248948 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 7 01:16:23.488564 containerd[1462]: time="2025-07-07T01:16:23.488340543Z" level=info msg="TaskExit event container_id:\"0c1a29241b8c5481d4948696070708a0922bafe326ead3c6babfbe7ba64d7e1e\" id:\"0c1a29241b8c5481d4948696070708a0922bafe326ead3c6babfbe7ba64d7e1e\" pid:6229 exit_status:1 exited_at:{seconds:1751850969 nanos:657887125}" Jul 7 01:16:23.490740 containerd[1462]: time="2025-07-07T01:16:23.490668151Z" level=info msg="shim disconnected" id=0c1a29241b8c5481d4948696070708a0922bafe326ead3c6babfbe7ba64d7e1e namespace=k8s.io Jul 7 01:16:23.490740 containerd[1462]: time="2025-07-07T01:16:23.490721481Z" level=warning msg="cleaning up after shim disconnected" id=0c1a29241b8c5481d4948696070708a0922bafe326ead3c6babfbe7ba64d7e1e namespace=k8s.io Jul 7 01:16:23.490740 containerd[1462]: time="2025-07-07T01:16:23.490736419Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 01:16:23.530677 containerd[1462]: time="2025-07-07T01:16:23.530587646Z" level=info msg="Ensure that container 0c1a29241b8c5481d4948696070708a0922bafe326ead3c6babfbe7ba64d7e1e in task-service has been cleanup successfully" Jul 7 01:16:23.963181 kubelet[2614]: I0707 01:16:23.962228 2614 scope.go:117] "RemoveContainer" containerID="64d31f80eee991e48fdc3eecacd6f63cf24321d7d89b9bddc653497cb45e3b11" Jul 7 01:16:23.965411 kubelet[2614]: I0707 01:16:23.963696 2614 scope.go:117] "RemoveContainer" containerID="0c1a29241b8c5481d4948696070708a0922bafe326ead3c6babfbe7ba64d7e1e" Jul 7 01:16:23.970005 containerd[1462]: time="2025-07-07T01:16:23.969411457Z" level=info msg="RemoveContainer for \"64d31f80eee991e48fdc3eecacd6f63cf24321d7d89b9bddc653497cb45e3b11\"" Jul 7 01:16:23.974562 containerd[1462]: time="2025-07-07T01:16:23.974162280Z" level=info msg="CreateContainer within sandbox \"0e854a31e5a1d7ff428680197323e8a22812efa4ad22f9760f2d0a68f599989b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:2,}" Jul 7 01:16:25.655496 kubelet[2614]: I0707 01:16:25.655391 2614 scope.go:117] "RemoveContainer" containerID="9e1368b6b06e0949072d5bcb8d5a6c8c4fd79396623db0e14b4b5e6277f87b13" Jul 7 01:16:25.656677 kubelet[2614]: E0707 01:16:25.656043 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4081-3-4-0-2961e92ed0.novalocal_kube-system(502e76a70bd5eb6e3bcf0fcb81811131)\"" pod="kube-system/kube-scheduler-ci-4081-3-4-0-2961e92ed0.novalocal" podUID="502e76a70bd5eb6e3bcf0fcb81811131" Jul 7 01:16:34.011809 kubelet[2614]: I0707 01:16:34.011569 2614 scope.go:117] "RemoveContainer" containerID="64d31f80eee991e48fdc3eecacd6f63cf24321d7d89b9bddc653497cb45e3b11" Jul 7 01:16:59.464524 kubelet[2614]: I0707 01:16:36.615468 2614 scope.go:117] "RemoveContainer" containerID="9e1368b6b06e0949072d5bcb8d5a6c8c4fd79396623db0e14b4b5e6277f87b13" Jul 7 01:16:59.464524 kubelet[2614]: E0707 01:16:59.336730 2614 controller.go:195] "Failed to update lease" err="Put \"https://172.24.4.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-0-2961e92ed0.novalocal?timeout=10s\": context deadline exceeded" Jul 7 01:16:59.472779 containerd[1462]: time="2025-07-07T01:16:59.457822542Z" level=error msg="failed to handle container TaskExit event container_id:\"45dd9d38202d6cf8a9dce26615c50bb6036988bad0e05ee28636e3d739f8d834\" id:\"45dd9d38202d6cf8a9dce26615c50bb6036988bad0e05ee28636e3d739f8d834\" pid:6265 exit_status:1 exited_at:{seconds:1751850995 nanos:48069405}" 
error="failed to stop container: failed to delete task: context deadline exceeded: unknown" Jul 7 01:16:59.473498 update_engine[1445]: I20250707 01:16:54.871115 1445 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 7 01:16:59.473498 update_engine[1445]: I20250707 01:16:54.871781 1445 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 7 01:16:59.473498 update_engine[1445]: I20250707 01:16:54.875768 1445 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 7 01:16:59.473498 update_engine[1445]: I20250707 01:16:59.461671 1445 omaha_request_params.cc:62] Current group set to lts Jul 7 01:16:35.040456 systemd[1]: cri-containerd-45dd9d38202d6cf8a9dce26615c50bb6036988bad0e05ee28636e3d739f8d834.scope: Deactivated successfully. Jul 7 01:16:35.041068 systemd[1]: cri-containerd-45dd9d38202d6cf8a9dce26615c50bb6036988bad0e05ee28636e3d739f8d834.scope: Consumed 3.366s CPU time. Jul 7 01:16:59.532279 kubelet[2614]: I0707 01:16:59.529473 2614 status_manager.go:895] "Failed to get status for pod" podUID="2980795e-09f7-4095-957d-e01c74f573a0" pod="tigera-operator/tigera-operator-747864d56d-ksmf8" err="etcdserver: request timed out" Jul 7 01:16:59.570777 update_engine[1445]: I20250707 01:16:59.531961 1445 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 7 01:16:59.570777 update_engine[1445]: I20250707 01:16:59.531997 1445 update_attempter.cc:643] Scheduling an action processor start. Jul 7 01:16:59.570777 update_engine[1445]: I20250707 01:16:59.532079 1445 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 7 01:16:59.570777 update_engine[1445]: I20250707 01:16:59.532215 1445 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 7 01:16:59.571017 containerd[1462]: time="2025-07-07T01:16:59.569683630Z" level=error msg="ExecSync for \"91b64dbd46bf334023bef43663a42c2c7fd7cd16b42fe9071e0b57d90f24475c\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" Jul 7 01:16:59.571017 containerd[1462]: time="2025-07-07T01:16:59.570082279Z" level=info msg="RemoveContainer for \"64d31f80eee991e48fdc3eecacd6f63cf24321d7d89b9bddc653497cb45e3b11\"" Jul 7 01:16:59.559423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45dd9d38202d6cf8a9dce26615c50bb6036988bad0e05ee28636e3d739f8d834-rootfs.mount: Deactivated successfully. 
Jul 7 01:16:59.571292 kubelet[2614]: E0707 01:16:59.533951 2614 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{tigera-operator-747864d56d-ksmf8.184fd32a43c7f040 tigera-operator 1336 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:tigera-operator,Name:tigera-operator-747864d56d-ksmf8,UID:2980795e-09f7-4095-957d-e01c74f573a0,APIVersion:v1,ResourceVersion:369,FieldPath:spec.containers{tigera-operator},},Reason:Pulled,Message:Container image \"quay.io/tigera/operator:v1.38.3\" already present on machine,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-0-2961e92ed0.novalocal,},FirstTimestamp:2025-07-07 01:15:58 +0000 UTC,LastTimestamp:2025-07-07 01:16:23.967954123 +0000 UTC m=+172.585694552,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-0-2961e92ed0.novalocal,}" Jul 7 01:16:59.573899 update_engine[1445]: I20250707 01:16:59.572780 1445 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 7 01:16:59.573899 update_engine[1445]: I20250707 01:16:59.572818 1445 omaha_request_action.cc:272] Request: Jul 7 01:16:59.573899 update_engine[1445]: Jul 7 01:16:59.573899 update_engine[1445]: Jul 7 01:16:59.573899 update_engine[1445]: Jul 7 01:16:59.573899 update_engine[1445]: Jul 7 01:16:59.573899 update_engine[1445]: Jul 7 01:16:59.573899 update_engine[1445]: Jul 7 01:16:59.573899 update_engine[1445]: Jul 7 01:16:59.573899 update_engine[1445]: Jul 7 01:16:59.573899 update_engine[1445]: I20250707 01:16:59.572839 1445 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 01:16:59.575165 containerd[1462]: time="2025-07-07T01:16:59.575040390Z" level=error msg="RemoveContainer for \"64d31f80eee991e48fdc3eecacd6f63cf24321d7d89b9bddc653497cb45e3b11\" failed" error="failed to set removing state for container \"64d31f80eee991e48fdc3eecacd6f63cf24321d7d89b9bddc653497cb45e3b11\": container is already in removing state" Jul 7 01:16:59.576578 kubelet[2614]: E0707 01:16:59.575869 2614 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"64d31f80eee991e48fdc3eecacd6f63cf24321d7d89b9bddc653497cb45e3b11\": container is already in removing state" containerID="64d31f80eee991e48fdc3eecacd6f63cf24321d7d89b9bddc653497cb45e3b11" Jul 7 01:16:59.576578 kubelet[2614]: E0707 01:16:59.575975 2614 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to set removing state for container \"64d31f80eee991e48fdc3eecacd6f63cf24321d7d89b9bddc653497cb45e3b11\": container is already in removing state" containerID="64d31f80eee991e48fdc3eecacd6f63cf24321d7d89b9bddc653497cb45e3b11" Jul 7 01:16:59.576578 kubelet[2614]: E0707 01:16:59.576155 2614 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="91b64dbd46bf334023bef43663a42c2c7fd7cd16b42fe9071e0b57d90f24475c" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Jul 7 01:16:59.583339 locksmithd[1470]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 7 01:16:59.614408 sshd[6341]: pam_unix(sshd:session): session closed for user core Jul 7 01:16:59.629438 
systemd[1]: sshd@15-172.24.4.54:22-172.24.4.1:42672.service: Deactivated successfully. Jul 7 01:16:59.640999 containerd[1462]: time="2025-07-07T01:16:59.637520327Z" level=info msg="CreateContainer within sandbox \"cee74cdb31d23b8aef020cd66531d5fe943a2ed547a78bf119250ecbd5cab3fb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}" Jul 7 01:16:59.640440 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 01:16:59.651111 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit. Jul 7 01:16:59.656900 update_engine[1445]: I20250707 01:16:59.654233 1445 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 01:16:59.656900 update_engine[1445]: I20250707 01:16:59.654670 1445 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 01:16:59.660436 systemd-logind[1444]: Removed session 18. Jul 7 01:16:59.668922 update_engine[1445]: E20250707 01:16:59.668833 1445 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 01:16:59.669081 update_engine[1445]: I20250707 01:16:59.668993 1445 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 7 01:16:59.893944 kubelet[2614]: E0707 01:16:59.893855 2614 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-4081-3-4-0-2961e92ed0.novalocal\": the object has been modified; please apply your changes to the latest version and try again" Jul 7 01:16:59.927775 containerd[1462]: time="2025-07-07T01:16:59.927719530Z" level=info msg="RemoveContainer for \"64d31f80eee991e48fdc3eecacd6f63cf24321d7d89b9bddc653497cb45e3b11\" returns successfully" Jul 7 01:17:00.557265 systemd[1]: run-containerd-runc-k8s.io-c9d80078d83d589f6f4b724d7434916f7f1beeb976d44dede43078dc26b851df-runc.r3bDO8.mount: Deactivated successfully. 
Jul 7 01:17:00.726711 containerd[1462]: time="2025-07-07T01:17:00.726595393Z" level=info msg="TaskExit event container_id:\"45dd9d38202d6cf8a9dce26615c50bb6036988bad0e05ee28636e3d739f8d834\" id:\"45dd9d38202d6cf8a9dce26615c50bb6036988bad0e05ee28636e3d739f8d834\" pid:6265 exit_status:1 exited_at:{seconds:1751850995 nanos:48069405}"
Jul 7 01:17:01.785549 containerd[1462]: time="2025-07-07T01:17:01.784784504Z" level=error msg="ttrpc: received message on inactive stream" stream=31
Jul 7 01:17:01.860321 containerd[1462]: time="2025-07-07T01:17:01.860272184Z" level=info msg="shim disconnected" id=45dd9d38202d6cf8a9dce26615c50bb6036988bad0e05ee28636e3d739f8d834 namespace=k8s.io
Jul 7 01:17:01.860321 containerd[1462]: time="2025-07-07T01:17:01.860315836Z" level=warning msg="cleaning up after shim disconnected" id=45dd9d38202d6cf8a9dce26615c50bb6036988bad0e05ee28636e3d739f8d834 namespace=k8s.io
Jul 7 01:17:01.860585 containerd[1462]: time="2025-07-07T01:17:01.860336705Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 01:17:05.199433 kubelet[2614]: I0707 01:17:05.198830 2614 scope.go:117] "RemoveContainer" containerID="4716a5a8f00dbfd805396ce72004933f6bdaf01e5f0a705acc7e6b1bf1613d66"
Jul 7 01:17:05.201595 kubelet[2614]: I0707 01:17:05.201511 2614 scope.go:117] "RemoveContainer" containerID="45dd9d38202d6cf8a9dce26615c50bb6036988bad0e05ee28636e3d739f8d834"
Jul 7 01:17:05.210980 containerd[1462]: time="2025-07-07T01:17:05.210091391Z" level=info msg="RemoveContainer for \"4716a5a8f00dbfd805396ce72004933f6bdaf01e5f0a705acc7e6b1bf1613d66\""
Jul 7 01:17:05.211596 containerd[1462]: time="2025-07-07T01:17:05.211462674Z" level=info msg="CreateContainer within sandbox \"ff6b6ca612ce33f2a08e7b5068edaac240189dad7e2a45409c986d9375567d84\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}"
Jul 7 01:17:05.594336 systemd[1]: Started sshd@16-172.24.4.54:22-172.24.4.1:46678.service - OpenSSH per-connection server daemon (172.24.4.1:46678).
Jul 7 01:17:10.069460 kubelet[2614]: E0707 01:17:10.065993 2614 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal.184fd32c2f299e37 kube-system 1403 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal,UID:f1b00cfc6c85dfe639649a5e83ae72a3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-0-2961e92ed0.novalocal,},FirstTimestamp:2025-07-07 01:16:06 +0000 UTC,LastTimestamp:2025-07-07 01:16:36.121516604 +0000 UTC m=+184.739257043,Count:8,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-0-2961e92ed0.novalocal,}"
Jul 7 01:17:10.119402 containerd[1462]: time="2025-07-07T01:17:10.118018965Z" level=info msg="CreateContainer within sandbox \"cee74cdb31d23b8aef020cd66531d5fe943a2ed547a78bf119250ecbd5cab3fb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"6f5a67b16173efda31a21f2bb6f19b8cb30ad8e26a54875b3bef258b110ad809\""
Jul 7 01:17:10.128823 containerd[1462]: time="2025-07-07T01:17:10.126639388Z" level=info msg="StartContainer for \"6f5a67b16173efda31a21f2bb6f19b8cb30ad8e26a54875b3bef258b110ad809\""
Jul 7 01:17:10.154023 update_engine[1445]: I20250707 01:17:10.152667 1445 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 01:17:10.156136 update_engine[1445]: I20250707 01:17:10.154803 1445 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 01:17:10.159187 update_engine[1445]: I20250707 01:17:10.158622 1445 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 01:17:10.169148 update_engine[1445]: E20250707 01:17:10.169004 1445 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 01:17:10.169370 update_engine[1445]: I20250707 01:17:10.169188 1445 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jul 7 01:17:10.282972 containerd[1462]: time="2025-07-07T01:17:10.281658372Z" level=info msg="CreateContainer within sandbox \"0e854a31e5a1d7ff428680197323e8a22812efa4ad22f9760f2d0a68f599989b\" for &ContainerMetadata{Name:tigera-operator,Attempt:2,} returns container id \"3cf928e4da501b96d917bffb0bc7782e39d3cb994cb5391ee45d78692f42b7f6\""
Jul 7 01:17:10.283648 containerd[1462]: time="2025-07-07T01:17:10.283616064Z" level=info msg="StartContainer for \"3cf928e4da501b96d917bffb0bc7782e39d3cb994cb5391ee45d78692f42b7f6\""
Jul 7 01:17:10.349802 containerd[1462]: time="2025-07-07T01:17:10.349585437Z" level=info msg="RemoveContainer for \"4716a5a8f00dbfd805396ce72004933f6bdaf01e5f0a705acc7e6b1bf1613d66\" returns successfully"
Jul 7 01:17:10.366321 systemd[1]: Started cri-containerd-6f5a67b16173efda31a21f2bb6f19b8cb30ad8e26a54875b3bef258b110ad809.scope - libcontainer container 6f5a67b16173efda31a21f2bb6f19b8cb30ad8e26a54875b3bef258b110ad809.
Jul 7 01:17:10.492138 systemd[1]: Started cri-containerd-3cf928e4da501b96d917bffb0bc7782e39d3cb994cb5391ee45d78692f42b7f6.scope - libcontainer container 3cf928e4da501b96d917bffb0bc7782e39d3cb994cb5391ee45d78692f42b7f6.
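The Unhealthy events above record kube-apiserver's readiness endpoint answering HTTP 500 while etcd requests time out. An HTTP probe reduces to a GET with a timeout where a 2xx/3xx status counts as ready; a small illustrative Go sketch of that check (the /readyz path and the success range reflect common Kubernetes behaviour and are assumptions here, not read from this log):

```go
// Minimal sketch of an HTTP readiness check, the shape behind the log's
// "Readiness probe failed: HTTP probe failed with statuscode: 500".
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeHTTP returns nil when the endpoint answers with a 2xx/3xx status.
func probeHTTP(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return nil
	}
	return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
}

func main() {
	// Hypothetical endpoint for illustration; the failing probe above targets
	// the kube-apiserver on this node.
	fmt.Println(probeHTTP("https://127.0.0.1:6443/readyz", 10*time.Second))
}
```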
Jul 7 01:17:10.516940 containerd[1462]: time="2025-07-07T01:17:10.516694382Z" level=info msg="CreateContainer within sandbox \"ff6b6ca612ce33f2a08e7b5068edaac240189dad7e2a45409c986d9375567d84\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"d88bc343d667ba1c4a915e58413529e95bfea8f6bfc7738ac753763a398023be\""
Jul 7 01:17:10.517697 containerd[1462]: time="2025-07-07T01:17:10.517653201Z" level=info msg="StartContainer for \"d88bc343d667ba1c4a915e58413529e95bfea8f6bfc7738ac753763a398023be\""
Jul 7 01:17:10.613204 systemd[1]: Started cri-containerd-d88bc343d667ba1c4a915e58413529e95bfea8f6bfc7738ac753763a398023be.scope - libcontainer container d88bc343d667ba1c4a915e58413529e95bfea8f6bfc7738ac753763a398023be.
Jul 7 01:17:10.663276 containerd[1462]: time="2025-07-07T01:17:10.663203030Z" level=info msg="StartContainer for \"6f5a67b16173efda31a21f2bb6f19b8cb30ad8e26a54875b3bef258b110ad809\" returns successfully"
Jul 7 01:17:10.756491 containerd[1462]: time="2025-07-07T01:17:10.756418107Z" level=info msg="StartContainer for \"d88bc343d667ba1c4a915e58413529e95bfea8f6bfc7738ac753763a398023be\" returns successfully"
Jul 7 01:17:11.012487 containerd[1462]: time="2025-07-07T01:17:11.012426552Z" level=info msg="StartContainer for \"3cf928e4da501b96d917bffb0bc7782e39d3cb994cb5391ee45d78692f42b7f6\" returns successfully"
Jul 7 01:17:11.296716 sshd[6639]: Accepted publickey for core from 172.24.4.1 port 46678 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:17:11.298796 sshd[6639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:17:11.308922 systemd-logind[1444]: New session 19 of user core.
Jul 7 01:17:11.314480 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 01:17:12.040239 sshd[6639]: pam_unix(sshd:session): session closed for user core
Jul 7 01:17:12.046498 systemd[1]: sshd@16-172.24.4.54:22-172.24.4.1:46678.service: Deactivated successfully.
Jul 7 01:17:12.053045 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 01:17:12.056162 systemd-logind[1444]: Session 19 logged out. Waiting for processes to exit.
Jul 7 01:17:12.057270 systemd-logind[1444]: Removed session 19.
Jul 7 01:17:17.078493 systemd[1]: Started sshd@17-172.24.4.54:22-172.24.4.1:58240.service - OpenSSH per-connection server daemon (172.24.4.1:58240).
Jul 7 01:17:18.549764 sshd[6815]: Accepted publickey for core from 172.24.4.1 port 58240 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:17:18.553244 sshd[6815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:17:18.568032 systemd-logind[1444]: New session 20 of user core.
Jul 7 01:17:18.576227 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 7 01:17:19.480826 sshd[6815]: pam_unix(sshd:session): session closed for user core
Jul 7 01:17:19.486796 systemd[1]: sshd@17-172.24.4.54:22-172.24.4.1:58240.service: Deactivated successfully.
Jul 7 01:17:19.491525 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 01:17:19.496984 systemd-logind[1444]: Session 20 logged out. Waiting for processes to exit.
Jul 7 01:17:19.500254 systemd-logind[1444]: Removed session 20.
Jul 7 01:17:20.145485 update_engine[1445]: I20250707 01:17:20.144579 1445 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 01:17:20.148409 update_engine[1445]: I20250707 01:17:20.147505 1445 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 01:17:20.148409 update_engine[1445]: I20250707 01:17:20.148244 1445 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 01:17:20.158991 update_engine[1445]: E20250707 01:17:20.158777 1445 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 01:17:20.159237 update_engine[1445]: I20250707 01:17:20.158990 1445 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jul 7 01:17:24.514266 systemd[1]: Started sshd@18-172.24.4.54:22-172.24.4.1:40558.service - OpenSSH per-connection server daemon (172.24.4.1:40558).
Jul 7 01:17:25.697737 sshd[6831]: Accepted publickey for core from 172.24.4.1 port 40558 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:17:25.703435 sshd[6831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:17:25.732791 systemd-logind[1444]: New session 21 of user core.
Jul 7 01:17:25.743397 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 7 01:17:26.450949 sshd[6831]: pam_unix(sshd:session): session closed for user core
Jul 7 01:17:26.463895 systemd[1]: sshd@18-172.24.4.54:22-172.24.4.1:40558.service: Deactivated successfully.
Jul 7 01:17:26.473321 systemd[1]: session-21.scope: Deactivated successfully.
Jul 7 01:17:26.475711 systemd-logind[1444]: Session 21 logged out. Waiting for processes to exit.
Jul 7 01:17:26.478998 systemd-logind[1444]: Removed session 21.
Jul 7 01:17:30.147780 update_engine[1445]: I20250707 01:17:30.147639 1445 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 01:17:30.149210 update_engine[1445]: I20250707 01:17:30.149178 1445 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 01:17:30.149815 update_engine[1445]: I20250707 01:17:30.149706 1445 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 01:17:30.159767 update_engine[1445]: E20250707 01:17:30.159693 1445 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 01:17:30.159911 update_engine[1445]: I20250707 01:17:30.159772 1445 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 7 01:17:30.159911 update_engine[1445]: I20250707 01:17:30.159795 1445 omaha_request_action.cc:617] Omaha request response:
Jul 7 01:17:30.160023 update_engine[1445]: E20250707 01:17:30.159998 1445 omaha_request_action.cc:636] Omaha request network transfer failed.
Jul 7 01:17:30.160267 update_engine[1445]: I20250707 01:17:30.160234 1445 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jul 7 01:17:30.160267 update_engine[1445]: I20250707 01:17:30.160250 1445 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 01:17:30.160267 update_engine[1445]: I20250707 01:17:30.160256 1445 update_attempter.cc:306] Processing Done.
Jul 7 01:17:30.160371 update_engine[1445]: E20250707 01:17:30.160302 1445 update_attempter.cc:619] Update failed.
Jul 7 01:17:30.160371 update_engine[1445]: I20250707 01:17:30.160317 1445 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jul 7 01:17:30.160371 update_engine[1445]: I20250707 01:17:30.160324 1445 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jul 7 01:17:30.160371 update_engine[1445]: I20250707 01:17:30.160336 1445 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jul 7 01:17:30.160716 update_engine[1445]: I20250707 01:17:30.160485 1445 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 7 01:17:30.160716 update_engine[1445]: I20250707 01:17:30.160539 1445 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 7 01:17:30.160716 update_engine[1445]: I20250707 01:17:30.160547 1445 omaha_request_action.cc:272] Request:
Jul 7 01:17:30.160716 update_engine[1445]: [Omaha request XML omitted: the angle-bracketed body was stripped in this capture]
Jul 7 01:17:30.160716 update_engine[1445]: I20250707 01:17:30.160554 1445 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 01:17:30.161810 locksmithd[1470]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jul 7 01:17:30.162220 update_engine[1445]: I20250707 01:17:30.161929 1445 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 01:17:30.162220 update_engine[1445]: I20250707 01:17:30.162119 1445 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 01:17:30.172149 update_engine[1445]: E20250707 01:17:30.172102 1445 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 01:17:30.172219 update_engine[1445]: I20250707 01:17:30.172153 1445 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 7 01:17:30.172219 update_engine[1445]: I20250707 01:17:30.172163 1445 omaha_request_action.cc:617] Omaha request response:
Jul 7 01:17:30.172219 update_engine[1445]: I20250707 01:17:30.172170 1445 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 01:17:30.172219 update_engine[1445]: I20250707 01:17:30.172176 1445 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 01:17:30.172219 update_engine[1445]: I20250707 01:17:30.172181 1445 update_attempter.cc:306] Processing Done.
Jul 7 01:17:30.172219 update_engine[1445]: I20250707 01:17:30.172187 1445 update_attempter.cc:310] Error event sent.
Jul 7 01:17:30.172419 update_engine[1445]: I20250707 01:17:30.172214 1445 update_check_scheduler.cc:74] Next update check in 43m58s
Jul 7 01:17:30.173662 locksmithd[1470]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jul 7 01:17:31.472610 systemd[1]: Started sshd@19-172.24.4.54:22-172.24.4.1:40562.service - OpenSSH per-connection server daemon (172.24.4.1:40562).
Jul 7 01:17:32.775907 sshd[6896]: Accepted publickey for core from 172.24.4.1 port 40562 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:17:32.785949 sshd[6896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:17:32.805595 systemd-logind[1444]: New session 22 of user core.
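The update_engine trace above shows why this update check can never succeed: the Omaha server is configured as the literal string "disabled" (Flatcar's convention for turning automatic updates off), so DNS resolution fails on every attempt, and after three quick retries the attempter records a failed update and schedules the next check 43m58s out. A rough Go sketch of that bounded retry-then-reschedule shape (the retry cap, spacing, and URL path are assumptions for illustration, not update_engine internals):

```go
// Sketch of a bounded-retry update check: each attempt fails fast because the
// host "disabled" can never resolve, and after the cap the next periodic
// check is scheduled. Illustrative only.
package main

import (
	"fmt"
	"net/http"
	"time"
)

const maxRetries = 3 // assumed cap; the log shows retries 1..3 before giving up

func fetchOmaha(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err // e.g. "no such host" for the placeholder host "disabled"
	}
	defer resp.Body.Close()
	return nil
}

func main() {
	url := "http://disabled/v1/update/" // hypothetical path; the host comes from the log
	for attempt := 1; attempt <= maxRetries; attempt++ {
		err := fetchOmaha(url)
		if err == nil {
			fmt.Println("update check succeeded")
			return
		}
		fmt.Printf("No HTTP response, retry %d: %v\n", attempt, err)
		time.Sleep(10 * time.Second) // retries in the log are spaced ~10s apart
	}
	fmt.Println("Update failed; next update check in", 43*time.Minute+58*time.Second)
}
```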
Jul 7 01:17:32.817410 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 7 01:17:33.662762 sshd[6896]: pam_unix(sshd:session): session closed for user core
Jul 7 01:17:33.668641 systemd-logind[1444]: Session 22 logged out. Waiting for processes to exit.
Jul 7 01:17:33.670547 systemd[1]: sshd@19-172.24.4.54:22-172.24.4.1:40562.service: Deactivated successfully.
Jul 7 01:17:33.681526 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 01:17:33.685468 systemd-logind[1444]: Removed session 22.
Jul 7 01:17:44.407306 systemd[1]: Started sshd@20-172.24.4.54:22-172.24.4.1:35646.service - OpenSSH per-connection server daemon (172.24.4.1:35646).
Jul 7 01:17:46.888483 sshd[6934]: Accepted publickey for core from 172.24.4.1 port 35646 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:17:46.893447 sshd[6934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:17:46.910984 systemd-logind[1444]: New session 23 of user core.
Jul 7 01:17:47.374528 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 7 01:17:48.435356 sshd[6934]: pam_unix(sshd:session): session closed for user core
Jul 7 01:17:48.446492 systemd[1]: sshd@20-172.24.4.54:22-172.24.4.1:35646.service: Deactivated successfully.
Jul 7 01:17:48.456357 systemd[1]: session-23.scope: Deactivated successfully.
Jul 7 01:17:48.459067 systemd-logind[1444]: Session 23 logged out. Waiting for processes to exit.
Jul 7 01:17:48.463552 systemd-logind[1444]: Removed session 23.
Jul 7 01:17:55.213584 systemd[1]: Started sshd@21-172.24.4.54:22-172.24.4.1:38066.service - OpenSSH per-connection server daemon (172.24.4.1:38066).
Jul 7 01:17:56.383247 sshd[6967]: Accepted publickey for core from 172.24.4.1 port 38066 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:17:56.392207 sshd[6967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:17:56.416263 systemd-logind[1444]: New session 24 of user core.
Jul 7 01:17:56.429940 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 7 01:17:57.283770 sshd[6967]: pam_unix(sshd:session): session closed for user core
Jul 7 01:17:57.306852 systemd[1]: sshd@21-172.24.4.54:22-172.24.4.1:38066.service: Deactivated successfully.
Jul 7 01:17:57.319219 systemd[1]: session-24.scope: Deactivated successfully.
Jul 7 01:17:57.322056 systemd-logind[1444]: Session 24 logged out. Waiting for processes to exit.
Jul 7 01:17:57.340319 systemd[1]: Started sshd@22-172.24.4.54:22-172.24.4.1:38082.service - OpenSSH per-connection server daemon (172.24.4.1:38082).
Jul 7 01:17:57.343169 systemd-logind[1444]: Removed session 24.
Jul 7 01:17:58.779447 sshd[7002]: Accepted publickey for core from 172.24.4.1 port 38082 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:17:58.807088 sshd[7002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:17:58.821918 systemd-logind[1444]: New session 25 of user core.
Jul 7 01:17:58.825346 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 7 01:17:59.650828 systemd[1]: run-containerd-runc-k8s.io-91b64dbd46bf334023bef43663a42c2c7fd7cd16b42fe9071e0b57d90f24475c-runc.WdbtF5.mount: Deactivated successfully.
Jul 7 01:18:00.851369 sshd[7002]: pam_unix(sshd:session): session closed for user core
Jul 7 01:18:00.873412 systemd[1]: sshd@22-172.24.4.54:22-172.24.4.1:38082.service: Deactivated successfully.
Jul 7 01:18:00.881840 systemd[1]: session-25.scope: Deactivated successfully.
Jul 7 01:18:00.888599 systemd-logind[1444]: Session 25 logged out. Waiting for processes to exit.
Jul 7 01:18:00.903289 systemd[1]: Started sshd@23-172.24.4.54:22-172.24.4.1:38096.service - OpenSSH per-connection server daemon (172.24.4.1:38096).
Jul 7 01:18:00.909934 systemd-logind[1444]: Removed session 25.
Jul 7 01:18:02.409193 sshd[7054]: Accepted publickey for core from 172.24.4.1 port 38096 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:18:02.415260 sshd[7054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:18:02.434770 systemd-logind[1444]: New session 26 of user core.
Jul 7 01:18:02.442183 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 7 01:18:09.057137 kubelet[2614]: E0707 01:18:09.056329 2614 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.441s"
Jul 7 01:18:14.856635 systemd[1]: run-containerd-runc-k8s.io-c9d80078d83d589f6f4b724d7434916f7f1beeb976d44dede43078dc26b851df-runc.2hkn1J.mount: Deactivated successfully.
Jul 7 01:18:15.877328 systemd[1]: Started sshd@24-172.24.4.54:22-172.24.4.1:54986.service - OpenSSH per-connection server daemon (172.24.4.1:54986).
Jul 7 01:18:15.881483 sshd[7054]: pam_unix(sshd:session): session closed for user core
Jul 7 01:18:15.899807 systemd[1]: sshd@23-172.24.4.54:22-172.24.4.1:38096.service: Deactivated successfully.
Jul 7 01:18:15.909056 systemd[1]: session-26.scope: Deactivated successfully.
Jul 7 01:18:15.914634 systemd-logind[1444]: Session 26 logged out. Waiting for processes to exit.
Jul 7 01:18:15.916641 systemd-logind[1444]: Removed session 26.
Jul 7 01:18:41.884163 systemd[1]: cri-containerd-d88bc343d667ba1c4a915e58413529e95bfea8f6bfc7738ac753763a398023be.scope: Deactivated successfully.
Jul 7 01:18:41.885918 systemd[1]: cri-containerd-d88bc343d667ba1c4a915e58413529e95bfea8f6bfc7738ac753763a398023be.scope: Consumed 2.768s CPU time.
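The "Housekeeping took longer than expected" warning above is the kubelet noticing that one pass of its periodic pod-housekeeping loop overran its 1s interval, a typical symptom of the I/O and etcd stalls visible elsewhere in this log. The pattern that produces such a message is easy to sketch generically in Go (this is the pattern only, not kubelet source):

```go
// Generic periodic loop that warns when a single iteration exceeds its
// period, mirroring the shape of the kubelet's housekeeping warning.
package main

import (
	"fmt"
	"time"
)

func housekeepingLoop(period time.Duration, iterations int, work func()) {
	ticker := time.NewTicker(period)
	defer ticker.Stop()
	for i := 0; i < iterations; i++ {
		<-ticker.C
		start := time.Now()
		work()
		if actual := time.Since(start); actual > period {
			fmt.Printf("Housekeeping took longer than expected: expected=%s actual=%s\n",
				period, actual.Round(10*time.Millisecond))
		}
	}
}

func main() {
	// Simulate an overloaded node: each pass takes 1.5s against a 1s budget.
	housekeepingLoop(time.Second, 2, func() { time.Sleep(1500 * time.Millisecond) })
}
```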
Jul 7 01:18:41.960585 kubelet[2614]: E0707 01:18:41.958560 2614 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal.184fd32c2f299e37 kube-system 1586 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-4-0-2961e92ed0.novalocal,UID:f1b00cfc6c85dfe639649a5e83ae72a3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-0-2961e92ed0.novalocal,},FirstTimestamp:2025-07-07 01:16:06 +0000 UTC,LastTimestamp:2025-07-07 01:18:24.086050948 +0000 UTC m=+292.703791438,Count:15,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-0-2961e92ed0.novalocal,}"
Jul 7 01:18:43.400948 kubelet[2614]: E0707 01:18:41.967244 2614 controller.go:195] "Failed to update lease" err="Put \"https://172.24.4.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-0-2961e92ed0.novalocal?timeout=10s\": stream error: stream ID 869; INTERNAL_ERROR; received from peer"
Jul 7 01:18:43.400948 kubelet[2614]: E0707 01:18:43.225825 2614 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.61s"
Jul 7 01:18:42.126058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d88bc343d667ba1c4a915e58413529e95bfea8f6bfc7738ac753763a398023be-rootfs.mount: Deactivated successfully.
Jul 7 01:18:43.282553 systemd[1]: run-containerd-runc-k8s.io-91b64dbd46bf334023bef43663a42c2c7fd7cd16b42fe9071e0b57d90f24475c-runc.0rmzP9.mount: Deactivated successfully.
Jul 7 01:18:43.353591 systemd[1]: cri-containerd-3cf928e4da501b96d917bffb0bc7782e39d3cb994cb5391ee45d78692f42b7f6.scope: Deactivated successfully.
Jul 7 01:18:43.353990 systemd[1]: cri-containerd-3cf928e4da501b96d917bffb0bc7782e39d3cb994cb5391ee45d78692f42b7f6.scope: Consumed 4.823s CPU time.
Jul 7 01:18:43.437928 kubelet[2614]: E0707 01:18:43.437691 2614 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-4081-3-4-0-2961e92ed0.novalocal\": the object has been modified; please apply your changes to the latest version and try again"
Jul 7 01:18:43.467557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cf928e4da501b96d917bffb0bc7782e39d3cb994cb5391ee45d78692f42b7f6-rootfs.mount: Deactivated successfully.
Jul 7 01:18:43.471750 systemd[1]: cri-containerd-6f5a67b16173efda31a21f2bb6f19b8cb30ad8e26a54875b3bef258b110ad809.scope: Deactivated successfully.
Jul 7 01:18:43.472122 systemd[1]: cri-containerd-6f5a67b16173efda31a21f2bb6f19b8cb30ad8e26a54875b3bef258b110ad809.scope: Consumed 2.635s CPU time.
Jul 7 01:18:43.597459 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f5a67b16173efda31a21f2bb6f19b8cb30ad8e26a54875b3bef258b110ad809-rootfs.mount: Deactivated successfully.
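Two distinct lease failures appear above: a transport-level HTTP/2 stream error, and an optimistic-concurrency conflict ("the object has been modified"), which means the kubelet's update carried a stale resourceVersion and must re-read the object before retrying. A toy Go illustration of that compare-and-swap contract (the store type and names are invented for the sketch; the real client is client-go against the apiserver):

```go
// Toy model of apiserver optimistic concurrency: an update only succeeds when
// the caller's resourceVersion matches the stored one; on mismatch the caller
// re-reads and retries. Illustrative only.
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errConflict = errors.New("the object has been modified; please apply your changes to the latest version and try again")

// store mimics an apiserver object guarded by a resourceVersion.
type store struct {
	mu      sync.Mutex
	version int
	holder  string
}

// update succeeds only when the caller's version matches the stored one;
// on conflict it returns the current version so the caller can retry.
func (s *store) update(version int, holder string) (int, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if version != s.version {
		return s.version, errConflict
	}
	s.version++
	s.holder = holder
	return s.version, nil
}

func main() {
	s := &store{version: 369}
	v := 368 // stale version: the first attempt conflicts, as in the log
	for {
		newV, err := s.update(v, "ci-4081-3-4-0-2961e92ed0.novalocal")
		if err != nil {
			fmt.Println("conflict, refetching latest version:", err)
			v = newV // re-read the current version, then retry
			continue
		}
		fmt.Println("lease renewed at resourceVersion", newV)
		return
	}
}
```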
Jul 7 01:18:46.564142 containerd[1462]: time="2025-07-07T01:18:46.563331866Z" level=info msg="shim disconnected" id=6f5a67b16173efda31a21f2bb6f19b8cb30ad8e26a54875b3bef258b110ad809 namespace=k8s.io
Jul 7 01:18:46.564142 containerd[1462]: time="2025-07-07T01:18:46.563776491Z" level=warning msg="cleaning up after shim disconnected" id=6f5a67b16173efda31a21f2bb6f19b8cb30ad8e26a54875b3bef258b110ad809 namespace=k8s.io
Jul 7 01:18:46.564142 containerd[1462]: time="2025-07-07T01:18:46.563824801Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 01:18:46.582937 containerd[1462]: time="2025-07-07T01:18:46.581233058Z" level=info msg="shim disconnected" id=3cf928e4da501b96d917bffb0bc7782e39d3cb994cb5391ee45d78692f42b7f6 namespace=k8s.io
Jul 7 01:18:46.582937 containerd[1462]: time="2025-07-07T01:18:46.581342605Z" level=warning msg="cleaning up after shim disconnected" id=3cf928e4da501b96d917bffb0bc7782e39d3cb994cb5391ee45d78692f42b7f6 namespace=k8s.io
Jul 7 01:18:46.582937 containerd[1462]: time="2025-07-07T01:18:46.581354938Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 01:18:46.583512 containerd[1462]: time="2025-07-07T01:18:46.583028388Z" level=info msg="shim disconnected" id=d88bc343d667ba1c4a915e58413529e95bfea8f6bfc7738ac753763a398023be namespace=k8s.io
Jul 7 01:18:46.583512 containerd[1462]: time="2025-07-07T01:18:46.583327870Z" level=warning msg="cleaning up after shim disconnected" id=d88bc343d667ba1c4a915e58413529e95bfea8f6bfc7738ac753763a398023be namespace=k8s.io
Jul 7 01:18:46.583512 containerd[1462]: time="2025-07-07T01:18:46.583343029Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 01:18:48.192742 sshd[7121]: Accepted publickey for core from 172.24.4.1 port 54986 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:18:47.652930 sshd[7121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:18:48.227088 systemd-logind[1444]: New session 27 of user core.
Jul 7 01:18:48.240450 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 7 01:18:52.391614 kubelet[2614]: E0707 01:18:52.391510 2614 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.197s"
Jul 7 01:18:52.396103 kubelet[2614]: I0707 01:18:52.395806 2614 scope.go:117] "RemoveContainer" containerID="d88bc343d667ba1c4a915e58413529e95bfea8f6bfc7738ac753763a398023be"
Jul 7 01:18:52.400445 kubelet[2614]: E0707 01:18:52.399057 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal_kube-system(0b2e41f2a0183ea4ce5d03caf464b0f9)\"" pod="kube-system/kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal" podUID="0b2e41f2a0183ea4ce5d03caf464b0f9"
Jul 7 01:18:52.406923 kubelet[2614]: I0707 01:18:52.399339 2614 scope.go:117] "RemoveContainer" containerID="45dd9d38202d6cf8a9dce26615c50bb6036988bad0e05ee28636e3d739f8d834"
Jul 7 01:18:52.426654 containerd[1462]: time="2025-07-07T01:18:52.423808305Z" level=info msg="RemoveContainer for \"45dd9d38202d6cf8a9dce26615c50bb6036988bad0e05ee28636e3d739f8d834\""
Jul 7 01:18:52.430415 kubelet[2614]: I0707 01:18:52.428679 2614 scope.go:117] "RemoveContainer" containerID="6f5a67b16173efda31a21f2bb6f19b8cb30ad8e26a54875b3bef258b110ad809"
Jul 7 01:18:52.430415 kubelet[2614]: E0707 01:18:52.428991 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4081-3-4-0-2961e92ed0.novalocal_kube-system(502e76a70bd5eb6e3bcf0fcb81811131)\"" pod="kube-system/kube-scheduler-ci-4081-3-4-0-2961e92ed0.novalocal" podUID="502e76a70bd5eb6e3bcf0fcb81811131"
Jul 7 01:18:52.431398 kubelet[2614]: I0707 01:18:52.431053 2614 scope.go:117] "RemoveContainer" containerID="3cf928e4da501b96d917bffb0bc7782e39d3cb994cb5391ee45d78692f42b7f6"
Jul 7 01:18:52.431398 kubelet[2614]: E0707 01:18:52.431187 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=tigera-operator pod=tigera-operator-747864d56d-ksmf8_tigera-operator(2980795e-09f7-4095-957d-e01c74f573a0)\"" pod="tigera-operator/tigera-operator-747864d56d-ksmf8" podUID="2980795e-09f7-4095-957d-e01c74f573a0"
Jul 7 01:18:52.601833 containerd[1462]: time="2025-07-07T01:18:52.601488045Z" level=info msg="RemoveContainer for \"45dd9d38202d6cf8a9dce26615c50bb6036988bad0e05ee28636e3d739f8d834\" returns successfully"
Jul 7 01:18:52.603832 kubelet[2614]: I0707 01:18:52.603116 2614 scope.go:117] "RemoveContainer" containerID="9e1368b6b06e0949072d5bcb8d5a6c8c4fd79396623db0e14b4b5e6277f87b13"
Jul 7 01:18:52.610915 containerd[1462]: time="2025-07-07T01:18:52.610506568Z" level=info msg="RemoveContainer for \"9e1368b6b06e0949072d5bcb8d5a6c8c4fd79396623db0e14b4b5e6277f87b13\""
Jul 7 01:18:53.265524 containerd[1462]: time="2025-07-07T01:18:53.265421802Z" level=info msg="RemoveContainer for \"9e1368b6b06e0949072d5bcb8d5a6c8c4fd79396623db0e14b4b5e6277f87b13\" returns successfully"
Jul 7 01:18:53.266597 kubelet[2614]: I0707 01:18:53.266521 2614 scope.go:117] "RemoveContainer" containerID="0c1a29241b8c5481d4948696070708a0922bafe326ead3c6babfbe7ba64d7e1e"
Jul 7 01:18:53.269712 containerd[1462]: time="2025-07-07T01:18:53.269275553Z" level=info msg="RemoveContainer for \"0c1a29241b8c5481d4948696070708a0922bafe326ead3c6babfbe7ba64d7e1e\""
Jul 7 01:18:53.443707 containerd[1462]: time="2025-07-07T01:18:53.443580068Z" level=info msg="RemoveContainer for \"0c1a29241b8c5481d4948696070708a0922bafe326ead3c6babfbe7ba64d7e1e\" returns successfully"
Jul 7 01:18:53.464983 kubelet[2614]: I0707 01:18:53.464927 2614 scope.go:117] "RemoveContainer" containerID="d88bc343d667ba1c4a915e58413529e95bfea8f6bfc7738ac753763a398023be"
Jul 7 01:18:53.467600 kubelet[2614]: E0707 01:18:53.467483 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal_kube-system(0b2e41f2a0183ea4ce5d03caf464b0f9)\"" pod="kube-system/kube-controller-manager-ci-4081-3-4-0-2961e92ed0.novalocal" podUID="0b2e41f2a0183ea4ce5d03caf464b0f9"
Jul 7 01:18:54.433210 sshd[7121]: pam_unix(sshd:session): session closed for user core
Jul 7 01:18:54.463801 systemd[1]: sshd@24-172.24.4.54:22-172.24.4.1:54986.service: Deactivated successfully.
Jul 7 01:18:54.473010 systemd[1]: session-27.scope: Deactivated successfully.
Jul 7 01:18:54.475585 systemd-logind[1444]: Session 27 logged out. Waiting for processes to exit.
Jul 7 01:18:54.486195 systemd[1]: Started sshd@25-172.24.4.54:22-172.24.4.1:54536.service - OpenSSH per-connection server daemon (172.24.4.1:54536).
Jul 7 01:18:54.490224 systemd-logind[1444]: Removed session 27.
Jul 7 01:18:55.656297 kubelet[2614]: I0707 01:18:55.654230 2614 scope.go:117] "RemoveContainer" containerID="6f5a67b16173efda31a21f2bb6f19b8cb30ad8e26a54875b3bef258b110ad809"
Jul 7 01:18:55.656297 kubelet[2614]: E0707 01:18:55.654619 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4081-3-4-0-2961e92ed0.novalocal_kube-system(502e76a70bd5eb6e3bcf0fcb81811131)\"" pod="kube-system/kube-scheduler-ci-4081-3-4-0-2961e92ed0.novalocal" podUID="502e76a70bd5eb6e3bcf0fcb81811131"
Jul 7 01:18:55.690065 sshd[7292]: Accepted publickey for core from 172.24.4.1 port 54536 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:18:55.695207 sshd[7292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:18:55.707461 systemd-logind[1444]: New session 28 of user core.
Jul 7 01:18:55.717199 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 7 01:18:56.519240 sshd[7292]: pam_unix(sshd:session): session closed for user core
Jul 7 01:18:56.528956 systemd[1]: sshd@25-172.24.4.54:22-172.24.4.1:54536.service: Deactivated successfully.
Jul 7 01:18:56.534321 systemd[1]: session-28.scope: Deactivated successfully.
Jul 7 01:18:56.542412 systemd-logind[1444]: Session 28 logged out. Waiting for processes to exit.
Jul 7 01:18:56.546209 systemd-logind[1444]: Removed session 28.
Jul 7 01:19:04.877658 systemd[1]: Started sshd@26-172.24.4.54:22-172.24.4.1:34642.service - OpenSSH per-connection server daemon (172.24.4.1:34642).
Jul 7 01:19:15.689224 kubelet[2614]: I0707 01:19:06.617001 2614 scope.go:117] "RemoveContainer" containerID="d88bc343d667ba1c4a915e58413529e95bfea8f6bfc7738ac753763a398023be"
Jul 7 01:19:15.689224 kubelet[2614]: I0707 01:19:06.617364 2614 scope.go:117] "RemoveContainer" containerID="3cf928e4da501b96d917bffb0bc7782e39d3cb994cb5391ee45d78692f42b7f6"
Jul 7 01:19:15.689224 kubelet[2614]: I0707 01:19:06.618692 2614 scope.go:117] "RemoveContainer" containerID="6f5a67b16173efda31a21f2bb6f19b8cb30ad8e26a54875b3bef258b110ad809"
Jul 7 01:19:15.689224 kubelet[2614]: E0707 01:19:13.811617 2614 controller.go:195] "Failed to update lease" err="etcdserver: request timed out"
Jul 7 01:19:15.845010 containerd[1462]: time="2025-07-07T01:19:15.841351919Z" level=info msg="CreateContainer within sandbox \"0e854a31e5a1d7ff428680197323e8a22812efa4ad22f9760f2d0a68f599989b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:3,}"
Jul 7 01:19:15.848044 kubelet[2614]: E0707 01:19:15.844429 2614 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-4081-3-4-0-2961e92ed0.novalocal\": the object has been modified; please apply your changes to the latest version and try again"
Jul 7 01:19:15.925638 containerd[1462]: time="2025-07-07T01:19:15.849480732Z" level=info msg="CreateContainer within sandbox \"ff6b6ca612ce33f2a08e7b5068edaac240189dad7e2a45409c986d9375567d84\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:3,}"
Jul 7 01:19:15.925638 containerd[1462]: time="2025-07-07T01:19:15.859925239Z" level=info msg="CreateContainer within sandbox \"cee74cdb31d23b8aef020cd66531d5fe943a2ed547a78bf119250ecbd5cab3fb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:3,}"
Jul 7 01:19:16.169447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1242938869.mount: Deactivated successfully.
Jul 7 01:19:16.186432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1827560327.mount: Deactivated successfully.
Jul 7 01:19:16.303909 containerd[1462]: time="2025-07-07T01:19:16.303674777Z" level=info msg="CreateContainer within sandbox \"cee74cdb31d23b8aef020cd66531d5fe943a2ed547a78bf119250ecbd5cab3fb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:3,} returns container id \"4574cfd269bf67bf6a15cdca05c7374d8383eaefb346fbdf4ec1487569df9bf7\""
Jul 7 01:19:16.304901 containerd[1462]: time="2025-07-07T01:19:16.304687748Z" level=info msg="StartContainer for \"4574cfd269bf67bf6a15cdca05c7374d8383eaefb346fbdf4ec1487569df9bf7\""
Jul 7 01:19:16.367023 containerd[1462]: time="2025-07-07T01:19:16.365705519Z" level=info msg="CreateContainer within sandbox \"0e854a31e5a1d7ff428680197323e8a22812efa4ad22f9760f2d0a68f599989b\" for &ContainerMetadata{Name:tigera-operator,Attempt:3,} returns container id \"a73533a78b3b07e3c88541d3c0b373a78337f0ef187a99c97a3da54370267850\""
Jul 7 01:19:16.368435 containerd[1462]: time="2025-07-07T01:19:16.368359649Z" level=info msg="StartContainer for \"a73533a78b3b07e3c88541d3c0b373a78337f0ef187a99c97a3da54370267850\""
Jul 7 01:19:16.381412 systemd[1]: Started cri-containerd-4574cfd269bf67bf6a15cdca05c7374d8383eaefb346fbdf4ec1487569df9bf7.scope - libcontainer container 4574cfd269bf67bf6a15cdca05c7374d8383eaefb346fbdf4ec1487569df9bf7.
Jul 7 01:19:16.440537 containerd[1462]: time="2025-07-07T01:19:16.440386558Z" level=info msg="CreateContainer within sandbox \"ff6b6ca612ce33f2a08e7b5068edaac240189dad7e2a45409c986d9375567d84\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:3,} returns container id \"754d147ac8fc0e750c1550453b244bc3ee4c07dd569530cfc4901aaa96d30628\""
Jul 7 01:19:16.453479 containerd[1462]: time="2025-07-07T01:19:16.453100945Z" level=info msg="StartContainer for \"754d147ac8fc0e750c1550453b244bc3ee4c07dd569530cfc4901aaa96d30628\""
Jul 7 01:19:16.454086 systemd[1]: Started cri-containerd-a73533a78b3b07e3c88541d3c0b373a78337f0ef187a99c97a3da54370267850.scope - libcontainer container a73533a78b3b07e3c88541d3c0b373a78337f0ef187a99c97a3da54370267850.
Jul 7 01:19:16.557192 systemd[1]: Started cri-containerd-754d147ac8fc0e750c1550453b244bc3ee4c07dd569530cfc4901aaa96d30628.scope - libcontainer container 754d147ac8fc0e750c1550453b244bc3ee4c07dd569530cfc4901aaa96d30628.
Jul 7 01:19:16.602886 containerd[1462]: time="2025-07-07T01:19:16.601517679Z" level=info msg="StartContainer for \"4574cfd269bf67bf6a15cdca05c7374d8383eaefb346fbdf4ec1487569df9bf7\" returns successfully"
Jul 7 01:19:16.606027 containerd[1462]: time="2025-07-07T01:19:16.603629151Z" level=info msg="StartContainer for \"a73533a78b3b07e3c88541d3c0b373a78337f0ef187a99c97a3da54370267850\" returns successfully"
Jul 7 01:19:16.618571 sshd[7370]: Accepted publickey for core from 172.24.4.1 port 34642 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:19:16.621599 sshd[7370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:19:16.640437 systemd-logind[1444]: New session 29 of user core.
Jul 7 01:19:16.648219 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 7 01:19:16.699572 containerd[1462]: time="2025-07-07T01:19:16.699451008Z" level=info msg="StartContainer for \"754d147ac8fc0e750c1550453b244bc3ee4c07dd569530cfc4901aaa96d30628\" returns successfully"
Jul 7 01:19:19.093793 sshd[7370]: pam_unix(sshd:session): session closed for user core
Jul 7 01:19:23.337173 systemd[1]: sshd@26-172.24.4.54:22-172.24.4.1:34642.service: Deactivated successfully.
Jul 7 01:19:23.378392 systemd[1]: session-29.scope: Deactivated successfully.
Jul 7 01:19:23.406313 systemd-logind[1444]: Session 29 logged out. Waiting for processes to exit.
Jul 7 01:19:23.423678 systemd[1]: Started sshd@27-172.24.4.54:22-172.24.4.1:48756.service - OpenSSH per-connection server daemon (172.24.4.1:48756).
Jul 7 01:19:23.427164 systemd-logind[1444]: Removed session 29.
Jul 7 01:19:24.544087 sshd[7554]: Accepted publickey for core from 172.24.4.1 port 48756 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:19:24.553030 sshd[7554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:19:24.578930 systemd-logind[1444]: New session 30 of user core.
Jul 7 01:19:24.586358 systemd[1]: Started session-30.scope - Session 30 of User core.
Jul 7 01:19:25.534624 sshd[7554]: pam_unix(sshd:session): session closed for user core
Jul 7 01:19:25.539817 systemd[1]: sshd@27-172.24.4.54:22-172.24.4.1:48756.service: Deactivated successfully.
Jul 7 01:19:25.543436 systemd[1]: session-30.scope: Deactivated successfully.
Jul 7 01:19:25.545412 systemd-logind[1444]: Session 30 logged out. Waiting for processes to exit.
Jul 7 01:19:25.547394 systemd-logind[1444]: Removed session 30.
Jul 7 01:19:30.559218 systemd[1]: Started sshd@28-172.24.4.54:22-172.24.4.1:41160.service - OpenSSH per-connection server daemon (172.24.4.1:41160).
Jul 7 01:19:31.732973 sshd[7610]: Accepted publickey for core from 172.24.4.1 port 41160 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:19:31.736759 sshd[7610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:19:31.757759 systemd-logind[1444]: New session 31 of user core.
Jul 7 01:19:31.765272 systemd[1]: Started session-31.scope - Session 31 of User core.
Jul 7 01:19:32.605496 sshd[7610]: pam_unix(sshd:session): session closed for user core
Jul 7 01:19:32.615195 systemd[1]: sshd@28-172.24.4.54:22-172.24.4.1:41160.service: Deactivated successfully.
Jul 7 01:19:32.623148 systemd[1]: session-31.scope: Deactivated successfully.
Jul 7 01:19:32.625409 systemd-logind[1444]: Session 31 logged out. Waiting for processes to exit.
Jul 7 01:19:32.628795 systemd-logind[1444]: Removed session 31.
Jul 7 01:19:37.651279 systemd[1]: Started sshd@29-172.24.4.54:22-172.24.4.1:44460.service - OpenSSH per-connection server daemon (172.24.4.1:44460).
Jul 7 01:19:38.940960 sshd[7625]: Accepted publickey for core from 172.24.4.1 port 44460 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:19:38.944182 sshd[7625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:19:38.957977 systemd-logind[1444]: New session 32 of user core.
Jul 7 01:19:38.969362 systemd[1]: Started session-32.scope - Session 32 of User core.
Jul 7 01:19:39.769542 sshd[7625]: pam_unix(sshd:session): session closed for user core
Jul 7 01:19:39.781187 systemd[1]: sshd@29-172.24.4.54:22-172.24.4.1:44460.service: Deactivated successfully.
Jul 7 01:19:39.790657 systemd[1]: session-32.scope: Deactivated successfully.
Jul 7 01:19:39.794046 systemd-logind[1444]: Session 32 logged out. Waiting for processes to exit.
Jul 7 01:19:39.797645 systemd-logind[1444]: Removed session 32.
Jul 7 01:19:44.788245 systemd[1]: Started sshd@30-172.24.4.54:22-172.24.4.1:60630.service - OpenSSH per-connection server daemon (172.24.4.1:60630).
Jul 7 01:19:46.088925 sshd[7648]: Accepted publickey for core from 172.24.4.1 port 60630 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:19:46.095097 sshd[7648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:19:46.114158 systemd-logind[1444]: New session 33 of user core.
Jul 7 01:19:46.123113 systemd[1]: Started session-33.scope - Session 33 of User core.
Jul 7 01:19:47.029877 sshd[7648]: pam_unix(sshd:session): session closed for user core
Jul 7 01:19:47.040978 systemd[1]: sshd@30-172.24.4.54:22-172.24.4.1:60630.service: Deactivated successfully.
Jul 7 01:19:47.044727 systemd[1]: session-33.scope: Deactivated successfully.
Jul 7 01:19:47.048884 systemd-logind[1444]: Session 33 logged out. Waiting for processes to exit.
Jul 7 01:19:47.051985 systemd-logind[1444]: Removed session 33.
Jul 7 01:19:52.045424 systemd[1]: Started sshd@31-172.24.4.54:22-172.24.4.1:60640.service - OpenSSH per-connection server daemon (172.24.4.1:60640).
Jul 7 01:19:53.342123 sshd[7684]: Accepted publickey for core from 172.24.4.1 port 60640 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:19:53.346578 sshd[7684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:19:53.360189 systemd-logind[1444]: New session 34 of user core.
Jul 7 01:19:53.366101 systemd[1]: Started session-34.scope - Session 34 of User core.
Jul 7 01:19:54.083816 sshd[7684]: pam_unix(sshd:session): session closed for user core
Jul 7 01:19:54.088340 systemd-logind[1444]: Session 34 logged out. Waiting for processes to exit.
Jul 7 01:19:54.091454 systemd[1]: sshd@31-172.24.4.54:22-172.24.4.1:60640.service: Deactivated successfully.
Jul 7 01:19:54.114570 systemd[1]: session-34.scope: Deactivated successfully.
Jul 7 01:19:54.119513 systemd-logind[1444]: Removed session 34.
Jul 7 01:19:59.113985 systemd[1]: Started sshd@32-172.24.4.54:22-172.24.4.1:57298.service - OpenSSH per-connection server daemon (172.24.4.1:57298).
Jul 7 01:20:00.310062 sshd[7737]: Accepted publickey for core from 172.24.4.1 port 57298 ssh2: RSA SHA256:T4edg9GiQHUVOPN1QnLSslP9ogy2CHxBIWT18axxOTI
Jul 7 01:20:00.313122 sshd[7737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:20:00.321091 systemd-logind[1444]: New session 35 of user core.
Jul 7 01:20:00.330116 systemd[1]: Started session-35.scope - Session 35 of User core.
Jul 7 01:20:01.188262 sshd[7737]: pam_unix(sshd:session): session closed for user core
Jul 7 01:20:01.193399 systemd-logind[1444]: Session 35 logged out. Waiting for processes to exit.
Jul 7 01:20:01.195152 systemd[1]: sshd@32-172.24.4.54:22-172.24.4.1:57298.service: Deactivated successfully.
Jul 7 01:20:01.201542 systemd[1]: session-35.scope: Deactivated successfully.
Jul 7 01:20:01.205462 systemd-logind[1444]: Removed session 35.