Jul 11 07:40:06.917513 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jul 11 03:36:05 -00 2025
Jul 11 07:40:06.917541 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=dfe1af008de84ad21c9c6e2b52b45ca0aecff9e5872ea6ea8c4ddf6ebe77d5c1
Jul 11 07:40:06.917552 kernel: BIOS-provided physical RAM map:
Jul 11 07:40:06.917563 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 11 07:40:06.917572 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 11 07:40:06.917580 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 11 07:40:06.917609 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jul 11 07:40:06.917618 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jul 11 07:40:06.917627 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 11 07:40:06.917635 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 11 07:40:06.917644 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jul 11 07:40:06.917652 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 11 07:40:06.917665 kernel: NX (Execute Disable) protection: active
Jul 11 07:40:06.917673 kernel: APIC: Static calls initialized
Jul 11 07:40:06.917683 kernel: SMBIOS 3.0.0 present.
Jul 11 07:40:06.917693 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jul 11 07:40:06.917701 kernel: DMI: Memory slots populated: 1/1
Jul 11 07:40:06.917712 kernel: Hypervisor detected: KVM
Jul 11 07:40:06.917721 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 11 07:40:06.917732 kernel: kvm-clock: using sched offset of 4862229134 cycles
Jul 11 07:40:06.917743 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 11 07:40:06.917753 kernel: tsc: Detected 1996.249 MHz processor
Jul 11 07:40:06.917763 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 11 07:40:06.917773 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 11 07:40:06.917783 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jul 11 07:40:06.917793 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 11 07:40:06.917805 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 11 07:40:06.917815 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jul 11 07:40:06.917825 kernel: ACPI: Early table checksum verification disabled
Jul 11 07:40:06.917835 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jul 11 07:40:06.917845 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 07:40:06.917855 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 07:40:06.917865 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 07:40:06.917874 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jul 11 07:40:06.917884 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 07:40:06.917896 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 07:40:06.917906 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jul 11 07:40:06.917916 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jul 11 07:40:06.917926 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jul 11 07:40:06.917936 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jul 11 07:40:06.917950 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jul 11 07:40:06.917960 kernel: No NUMA configuration found
Jul 11 07:40:06.917973 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jul 11 07:40:06.917984 kernel: NODE_DATA(0) allocated [mem 0x13fff5dc0-0x13fffcfff]
Jul 11 07:40:06.917994 kernel: Zone ranges:
Jul 11 07:40:06.918004 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 11 07:40:06.918014 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 11 07:40:06.918025 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jul 11 07:40:06.918035 kernel: Device empty
Jul 11 07:40:06.918045 kernel: Movable zone start for each node
Jul 11 07:40:06.918080 kernel: Early memory node ranges
Jul 11 07:40:06.918091 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 11 07:40:06.918101 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jul 11 07:40:06.918111 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jul 11 07:40:06.918122 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jul 11 07:40:06.918132 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 11 07:40:06.918142 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 11 07:40:06.918153 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jul 11 07:40:06.918163 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 11 07:40:06.918177 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 11 07:40:06.918187 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 11 07:40:06.918198 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 11 07:40:06.918208 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 11 07:40:06.918219 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 11 07:40:06.918229 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 11 07:40:06.918239 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 11 07:40:06.918251 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 11 07:40:06.918265 kernel: CPU topo: Max. logical packages: 2
Jul 11 07:40:06.918279 kernel: CPU topo: Max. logical dies: 2
Jul 11 07:40:06.918290 kernel: CPU topo: Max. dies per package: 1
Jul 11 07:40:06.918300 kernel: CPU topo: Max. threads per core: 1
Jul 11 07:40:06.918309 kernel: CPU topo: Num. cores per package: 1
Jul 11 07:40:06.918319 kernel: CPU topo: Num. threads per package: 1
Jul 11 07:40:06.918328 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 11 07:40:06.918338 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 11 07:40:06.918347 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jul 11 07:40:06.918357 kernel: Booting paravirtualized kernel on KVM
Jul 11 07:40:06.918369 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 11 07:40:06.918379 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 11 07:40:06.918388 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 11 07:40:06.918398 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 11 07:40:06.918408 kernel: pcpu-alloc: [0] 0 1
Jul 11 07:40:06.918417 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 11 07:40:06.918428 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=dfe1af008de84ad21c9c6e2b52b45ca0aecff9e5872ea6ea8c4ddf6ebe77d5c1
Jul 11 07:40:06.918438 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 11 07:40:06.918450 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 11 07:40:06.918460 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 11 07:40:06.918469 kernel: Fallback order for Node 0: 0
Jul 11 07:40:06.918479 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Jul 11 07:40:06.918488 kernel: Policy zone: Normal
Jul 11 07:40:06.918498 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 11 07:40:06.918507 kernel: software IO TLB: area num 2.
Jul 11 07:40:06.918517 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 11 07:40:06.918527 kernel: ftrace: allocating 40097 entries in 157 pages
Jul 11 07:40:06.918538 kernel: ftrace: allocated 157 pages with 5 groups
Jul 11 07:40:06.918547 kernel: Dynamic Preempt: voluntary
Jul 11 07:40:06.918557 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 11 07:40:06.918568 kernel: rcu: RCU event tracing is enabled.
Jul 11 07:40:06.918578 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 11 07:40:06.918588 kernel: Trampoline variant of Tasks RCU enabled.
Jul 11 07:40:06.918598 kernel: Rude variant of Tasks RCU enabled.
Jul 11 07:40:06.918607 kernel: Tracing variant of Tasks RCU enabled.
Jul 11 07:40:06.918617 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 11 07:40:06.918627 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 11 07:40:06.918638 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 11 07:40:06.918648 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 11 07:40:06.918658 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 11 07:40:06.918668 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 11 07:40:06.918677 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 11 07:40:06.918687 kernel: Console: colour VGA+ 80x25
Jul 11 07:40:06.918696 kernel: printk: legacy console [tty0] enabled
Jul 11 07:40:06.918706 kernel: printk: legacy console [ttyS0] enabled
Jul 11 07:40:06.918716 kernel: ACPI: Core revision 20240827
Jul 11 07:40:06.918727 kernel: APIC: Switch to symmetric I/O mode setup
Jul 11 07:40:06.918737 kernel: x2apic enabled
Jul 11 07:40:06.918746 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 11 07:40:06.918756 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 11 07:40:06.918766 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 11 07:40:06.918782 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jul 11 07:40:06.918794 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 11 07:40:06.918805 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 11 07:40:06.918815 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 11 07:40:06.918825 kernel: Spectre V2 : Mitigation: Retpolines
Jul 11 07:40:06.918835 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 11 07:40:06.918847 kernel: Speculative Store Bypass: Vulnerable
Jul 11 07:40:06.918858 kernel: x86/fpu: x87 FPU will use FXSAVE
Jul 11 07:40:06.918868 kernel: Freeing SMP alternatives memory: 32K
Jul 11 07:40:06.918878 kernel: pid_max: default: 32768 minimum: 301
Jul 11 07:40:06.918888 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 11 07:40:06.918900 kernel: landlock: Up and running.
Jul 11 07:40:06.918910 kernel: SELinux: Initializing.
Jul 11 07:40:06.918920 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 07:40:06.918931 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 07:40:06.918941 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jul 11 07:40:06.918951 kernel: Performance Events: AMD PMU driver.
Jul 11 07:40:06.918961 kernel: ... version: 0
Jul 11 07:40:06.918971 kernel: ... bit width: 48
Jul 11 07:40:06.918981 kernel: ... generic registers: 4
Jul 11 07:40:06.918993 kernel: ... value mask: 0000ffffffffffff
Jul 11 07:40:06.919003 kernel: ... max period: 00007fffffffffff
Jul 11 07:40:06.919013 kernel: ... fixed-purpose events: 0
Jul 11 07:40:06.919023 kernel: ... event mask: 000000000000000f
Jul 11 07:40:06.919033 kernel: signal: max sigframe size: 1440
Jul 11 07:40:06.919043 kernel: rcu: Hierarchical SRCU implementation.
Jul 11 07:40:06.919066 kernel: rcu: Max phase no-delay instances is 400.
Jul 11 07:40:06.919077 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 11 07:40:06.919088 kernel: smp: Bringing up secondary CPUs ...
Jul 11 07:40:06.919098 kernel: smpboot: x86: Booting SMP configuration:
Jul 11 07:40:06.919110 kernel: .... node #0, CPUs: #1
Jul 11 07:40:06.919120 kernel: smp: Brought up 1 node, 2 CPUs
Jul 11 07:40:06.919130 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jul 11 07:40:06.919141 kernel: Memory: 3961272K/4193772K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54620K init, 2348K bss, 227296K reserved, 0K cma-reserved)
Jul 11 07:40:06.919151 kernel: devtmpfs: initialized
Jul 11 07:40:06.919161 kernel: x86/mm: Memory block size: 128MB
Jul 11 07:40:06.919171 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 11 07:40:06.919181 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 11 07:40:06.919193 kernel: pinctrl core: initialized pinctrl subsystem
Jul 11 07:40:06.919203 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 11 07:40:06.919214 kernel: audit: initializing netlink subsys (disabled)
Jul 11 07:40:06.919224 kernel: audit: type=2000 audit(1752219603.463:1): state=initialized audit_enabled=0 res=1
Jul 11 07:40:06.919234 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 11 07:40:06.919244 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 11 07:40:06.919254 kernel: cpuidle: using governor menu
Jul 11 07:40:06.919264 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 11 07:40:06.919274 kernel: dca service started, version 1.12.1
Jul 11 07:40:06.919286 kernel: PCI: Using configuration type 1 for base access
Jul 11 07:40:06.919297 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 11 07:40:06.919307 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 11 07:40:06.919317 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 11 07:40:06.919327 kernel: ACPI: Added _OSI(Module Device)
Jul 11 07:40:06.919337 kernel: ACPI: Added _OSI(Processor Device)
Jul 11 07:40:06.919347 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 11 07:40:06.919357 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 11 07:40:06.919367 kernel: ACPI: Interpreter enabled
Jul 11 07:40:06.919377 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 11 07:40:06.919389 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 11 07:40:06.919399 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 11 07:40:06.919410 kernel: PCI: Using E820 reservations for host bridge windows
Jul 11 07:40:06.919420 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 11 07:40:06.919430 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 11 07:40:06.919582 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 11 07:40:06.919680 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 11 07:40:06.919776 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 11 07:40:06.919791 kernel: acpiphp: Slot [3] registered
Jul 11 07:40:06.919802 kernel: acpiphp: Slot [4] registered
Jul 11 07:40:06.919812 kernel: acpiphp: Slot [5] registered
Jul 11 07:40:06.919822 kernel: acpiphp: Slot [6] registered
Jul 11 07:40:06.919832 kernel: acpiphp: Slot [7] registered
Jul 11 07:40:06.919842 kernel: acpiphp: Slot [8] registered
Jul 11 07:40:06.919852 kernel: acpiphp: Slot [9] registered
Jul 11 07:40:06.919862 kernel: acpiphp: Slot [10] registered
Jul 11 07:40:06.919875 kernel: acpiphp: Slot [11] registered
Jul 11 07:40:06.919885 kernel: acpiphp: Slot [12] registered
Jul 11 07:40:06.919895 kernel: acpiphp: Slot [13] registered
Jul 11 07:40:06.919905 kernel: acpiphp: Slot [14] registered
Jul 11 07:40:06.919915 kernel: acpiphp: Slot [15] registered
Jul 11 07:40:06.919925 kernel: acpiphp: Slot [16] registered
Jul 11 07:40:06.919935 kernel: acpiphp: Slot [17] registered
Jul 11 07:40:06.919945 kernel: acpiphp: Slot [18] registered
Jul 11 07:40:06.919954 kernel: acpiphp: Slot [19] registered
Jul 11 07:40:06.919966 kernel: acpiphp: Slot [20] registered
Jul 11 07:40:06.919976 kernel: acpiphp: Slot [21] registered
Jul 11 07:40:06.919986 kernel: acpiphp: Slot [22] registered
Jul 11 07:40:06.919996 kernel: acpiphp: Slot [23] registered
Jul 11 07:40:06.920006 kernel: acpiphp: Slot [24] registered
Jul 11 07:40:06.920016 kernel: acpiphp: Slot [25] registered
Jul 11 07:40:06.920025 kernel: acpiphp: Slot [26] registered
Jul 11 07:40:06.920035 kernel: acpiphp: Slot [27] registered
Jul 11 07:40:06.920045 kernel: acpiphp: Slot [28] registered
Jul 11 07:40:06.920071 kernel: acpiphp: Slot [29] registered
Jul 11 07:40:06.920084 kernel: acpiphp: Slot [30] registered
Jul 11 07:40:06.920094 kernel: acpiphp: Slot [31] registered
Jul 11 07:40:06.920104 kernel: PCI host bridge to bus 0000:00
Jul 11 07:40:06.920203 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 11 07:40:06.920289 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 11 07:40:06.920379 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 11 07:40:06.920462 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 11 07:40:06.920549 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jul 11 07:40:06.920630 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 11 07:40:06.920744 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jul 11 07:40:06.920852 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jul 11 07:40:06.920954 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jul 11 07:40:06.921091 kernel: pci 0000:00:01.1: BAR 4 [io 0xc120-0xc12f]
Jul 11 07:40:06.921194 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Jul 11 07:40:06.921286 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Jul 11 07:40:06.921378 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Jul 11 07:40:06.921469 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Jul 11 07:40:06.923195 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jul 11 07:40:06.923301 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 11 07:40:06.923397 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 11 07:40:06.923508 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jul 11 07:40:06.923605 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jul 11 07:40:06.923702 kernel: pci 0000:00:02.0: BAR 2 [mem 0xc000000000-0xc000003fff 64bit pref]
Jul 11 07:40:06.923797 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jul 11 07:40:06.923890 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jul 11 07:40:06.923985 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 11 07:40:06.925161 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 11 07:40:06.925271 kernel: pci 0000:00:03.0: BAR 0 [io 0xc080-0xc0bf]
Jul 11 07:40:06.925368 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jul 11 07:40:06.925464 kernel: pci 0000:00:03.0: BAR 4 [mem 0xc000004000-0xc000007fff 64bit pref]
Jul 11 07:40:06.925558 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jul 11 07:40:06.925661 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 11 07:40:06.925759 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
Jul 11 07:40:06.925855 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jul 11 07:40:06.925954 kernel: pci 0000:00:04.0: BAR 4 [mem 0xc000008000-0xc00000bfff 64bit pref]
Jul 11 07:40:06.926076 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jul 11 07:40:06.927130 kernel: pci 0000:00:05.0: BAR 0 [io 0xc0c0-0xc0ff]
Jul 11 07:40:06.927236 kernel: pci 0000:00:05.0: BAR 4 [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jul 11 07:40:06.927346 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 11 07:40:06.927449 kernel: pci 0000:00:06.0: BAR 0 [io 0xc100-0xc11f]
Jul 11 07:40:06.927555 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfeb93000-0xfeb93fff]
Jul 11 07:40:06.927655 kernel: pci 0000:00:06.0: BAR 4 [mem 0xc000010000-0xc000013fff 64bit pref]
Jul 11 07:40:06.927671 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 11 07:40:06.927683 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 11 07:40:06.927694 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 11 07:40:06.927705 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 11 07:40:06.927716 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 11 07:40:06.927728 kernel: iommu: Default domain type: Translated
Jul 11 07:40:06.927739 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 11 07:40:06.927755 kernel: PCI: Using ACPI for IRQ routing
Jul 11 07:40:06.927766 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 11 07:40:06.927777 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 11 07:40:06.927788 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jul 11 07:40:06.927888 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 11 07:40:06.927994 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 11 07:40:06.929184 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 11 07:40:06.929203 kernel: vgaarb: loaded
Jul 11 07:40:06.929214 kernel: clocksource: Switched to clocksource kvm-clock
Jul 11 07:40:06.929237 kernel: VFS: Disk quotas dquot_6.6.0
Jul 11 07:40:06.929247 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 11 07:40:06.929258 kernel: pnp: PnP ACPI init
Jul 11 07:40:06.929364 kernel: pnp 00:03: [dma 2]
Jul 11 07:40:06.929382 kernel: pnp: PnP ACPI: found 5 devices
Jul 11 07:40:06.929393 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 11 07:40:06.929404 kernel: NET: Registered PF_INET protocol family
Jul 11 07:40:06.929414 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 11 07:40:06.929429 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 11 07:40:06.929439 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 11 07:40:06.929450 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 11 07:40:06.929460 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 11 07:40:06.929470 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 11 07:40:06.929481 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 07:40:06.929491 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 07:40:06.929502 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 11 07:40:06.929512 kernel: NET: Registered PF_XDP protocol family
Jul 11 07:40:06.929604 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 11 07:40:06.929689 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 11 07:40:06.929771 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 11 07:40:06.929852 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jul 11 07:40:06.929933 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jul 11 07:40:06.930030 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 11 07:40:06.930170 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 11 07:40:06.930187 kernel: PCI: CLS 0 bytes, default 64
Jul 11 07:40:06.930203 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 11 07:40:06.930214 kernel: software IO TLB: mapped [mem 0x00000000b6000000-0x00000000ba000000] (64MB)
Jul 11 07:40:06.930225 kernel: Initialise system trusted keyrings
Jul 11 07:40:06.930235 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 11 07:40:06.930245 kernel: Key type asymmetric registered
Jul 11 07:40:06.930255 kernel: Asymmetric key parser 'x509' registered
Jul 11 07:40:06.930266 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 11 07:40:06.930276 kernel: io scheduler mq-deadline registered
Jul 11 07:40:06.930287 kernel: io scheduler kyber registered
Jul 11 07:40:06.930299 kernel: io scheduler bfq registered
Jul 11 07:40:06.930309 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 11 07:40:06.930320 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jul 11 07:40:06.930331 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 11 07:40:06.930342 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 11 07:40:06.930352 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 11 07:40:06.930362 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 11 07:40:06.930373 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 11 07:40:06.930383 kernel: random: crng init done
Jul 11 07:40:06.930395 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 11 07:40:06.930405 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 11 07:40:06.930416 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 11 07:40:06.930510 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 11 07:40:06.930530 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 11 07:40:06.930617 kernel: rtc_cmos 00:04: registered as rtc0
Jul 11 07:40:06.930704 kernel: rtc_cmos 00:04: setting system clock to 2025-07-11T07:40:06 UTC (1752219606)
Jul 11 07:40:06.930794 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 11 07:40:06.930813 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 11 07:40:06.930824 kernel: NET: Registered PF_INET6 protocol family
Jul 11 07:40:06.930834 kernel: Segment Routing with IPv6
Jul 11 07:40:06.930844 kernel: In-situ OAM (IOAM) with IPv6
Jul 11 07:40:06.930855 kernel: NET: Registered PF_PACKET protocol family
Jul 11 07:40:06.930865 kernel: Key type dns_resolver registered
Jul 11 07:40:06.930875 kernel: IPI shorthand broadcast: enabled
Jul 11 07:40:06.930886 kernel: sched_clock: Marking stable (3661019837, 182423122)->(3878026388, -34583429)
Jul 11 07:40:06.930896 kernel: registered taskstats version 1
Jul 11 07:40:06.930909 kernel: Loading compiled-in X.509 certificates
Jul 11 07:40:06.930920 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 9703a4b3d6547675037b9597aa24472a5380cc2e'
Jul 11 07:40:06.930930 kernel: Demotion targets for Node 0: null
Jul 11 07:40:06.930940 kernel: Key type .fscrypt registered
Jul 11 07:40:06.930950 kernel: Key type fscrypt-provisioning registered
Jul 11 07:40:06.930960 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 11 07:40:06.930970 kernel: ima: Allocated hash algorithm: sha1
Jul 11 07:40:06.930981 kernel: ima: No architecture policies found
Jul 11 07:40:06.930992 kernel: clk: Disabling unused clocks
Jul 11 07:40:06.931003 kernel: Warning: unable to open an initial console.
Jul 11 07:40:06.931013 kernel: Freeing unused kernel image (initmem) memory: 54620K
Jul 11 07:40:06.931024 kernel: Write protecting the kernel read-only data: 24576k
Jul 11 07:40:06.931034 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 11 07:40:06.931044 kernel: Run /init as init process
Jul 11 07:40:06.934074 kernel: with arguments:
Jul 11 07:40:06.934093 kernel: /init
Jul 11 07:40:06.934103 kernel: with environment:
Jul 11 07:40:06.934117 kernel: HOME=/
Jul 11 07:40:06.934127 kernel: TERM=linux
Jul 11 07:40:06.934138 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 11 07:40:06.934150 systemd[1]: Successfully made /usr/ read-only.
Jul 11 07:40:06.934165 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 11 07:40:06.934177 systemd[1]: Detected virtualization kvm.
Jul 11 07:40:06.934188 systemd[1]: Detected architecture x86-64.
Jul 11 07:40:06.934207 systemd[1]: Running in initrd.
Jul 11 07:40:06.934220 systemd[1]: No hostname configured, using default hostname.
Jul 11 07:40:06.934232 systemd[1]: Hostname set to .
Jul 11 07:40:06.934243 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 07:40:06.934254 systemd[1]: Queued start job for default target initrd.target.
Jul 11 07:40:06.934266 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 07:40:06.934279 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 07:40:06.934291 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 11 07:40:06.934302 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 07:40:06.934314 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 11 07:40:06.934326 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 11 07:40:06.934339 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 11 07:40:06.934350 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 11 07:40:06.934363 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 07:40:06.934374 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 07:40:06.934386 systemd[1]: Reached target paths.target - Path Units.
Jul 11 07:40:06.934397 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 07:40:06.934408 systemd[1]: Reached target swap.target - Swaps.
Jul 11 07:40:06.934419 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 07:40:06.934430 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 07:40:06.934442 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 07:40:06.934453 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 11 07:40:06.934466 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 11 07:40:06.934477 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 07:40:06.934489 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 07:40:06.934500 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 07:40:06.934511 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 07:40:06.934523 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 11 07:40:06.934534 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 07:40:06.934546 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 11 07:40:06.934559 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 11 07:40:06.934571 systemd[1]: Starting systemd-fsck-usr.service...
Jul 11 07:40:06.934588 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 07:40:06.934600 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 07:40:06.934612 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 07:40:06.934625 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 11 07:40:06.934661 systemd-journald[213]: Collecting audit messages is disabled.
Jul 11 07:40:06.934708 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 07:40:06.934720 systemd[1]: Finished systemd-fsck-usr.service.
Jul 11 07:40:06.934753 systemd-journald[213]: Journal started
Jul 11 07:40:06.934783 systemd-journald[213]: Runtime Journal (/run/log/journal/8f6c2e65d78b45ef953438b1e0d3fa6e) is 8M, max 78.5M, 70.5M free.
Jul 11 07:40:06.952174 systemd-modules-load[215]: Inserted module 'overlay'
Jul 11 07:40:06.986194 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 07:40:06.986217 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 11 07:40:06.986908 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 07:40:06.991356 kernel: Bridge firewalling registered
Jul 11 07:40:06.990159 systemd-modules-load[215]: Inserted module 'br_netfilter'
Jul 11 07:40:06.990811 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 07:40:06.994224 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 07:40:06.996153 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 07:40:06.998153 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 11 07:40:07.006688 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 07:40:07.022213 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 07:40:07.024355 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 07:40:07.025759 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 07:40:07.027236 systemd-tmpfiles[232]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 11 07:40:07.030117 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 11 07:40:07.032163 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 07:40:07.040386 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 07:40:07.044185 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 07:40:07.047257 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 07:40:07.058528 dracut-cmdline[248]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=dfe1af008de84ad21c9c6e2b52b45ca0aecff9e5872ea6ea8c4ddf6ebe77d5c1
Jul 11 07:40:07.090239 systemd-resolved[252]: Positive Trust Anchors:
Jul 11 07:40:07.091159 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 07:40:07.092085 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 07:40:07.098736 systemd-resolved[252]: Defaulting to hostname 'linux'.
Jul 11 07:40:07.100392 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 07:40:07.101888 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 07:40:07.133098 kernel: SCSI subsystem initialized
Jul 11 07:40:07.146088 kernel: Loading iSCSI transport class v2.0-870.
Jul 11 07:40:07.159084 kernel: iscsi: registered transport (tcp)
Jul 11 07:40:07.185750 kernel: iscsi: registered transport (qla4xxx)
Jul 11 07:40:07.185805 kernel: QLogic iSCSI HBA Driver
Jul 11 07:40:07.211726 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 11 07:40:07.253296 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 07:40:07.261869 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 11 07:40:07.398827 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 11 07:40:07.403197 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 11 07:40:07.526228 kernel: raid6: sse2x4 gen() 5699 MB/s
Jul 11 07:40:07.544166 kernel: raid6: sse2x2 gen() 12747 MB/s
Jul 11 07:40:07.563477 kernel: raid6: sse2x1 gen() 6783 MB/s
Jul 11 07:40:07.563607 kernel: raid6: using algorithm sse2x2 gen() 12747 MB/s
Jul 11 07:40:07.582328 kernel: raid6: .... xor() 8739 MB/s, rmw enabled
Jul 11 07:40:07.582444 kernel: raid6: using ssse3x2 recovery algorithm
Jul 11 07:40:07.611165 kernel: xor: measuring software checksum speed
Jul 11 07:40:07.611287 kernel: prefetch64-sse : 12177 MB/sec
Jul 11 07:40:07.613319 kernel: generic_sse : 11806 MB/sec
Jul 11 07:40:07.616147 kernel: xor: using function: prefetch64-sse (12177 MB/sec)
Jul 11 07:40:07.859355 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 11 07:40:07.867904 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 07:40:07.873692 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 07:40:07.899192 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Jul 11 07:40:07.905272 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 07:40:07.911276 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 11 07:40:07.934111 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation
Jul 11 07:40:07.977495 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 07:40:07.981938 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 07:40:08.087613 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 07:40:08.097398 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 11 07:40:08.195083 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jul 11 07:40:08.195145 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jul 11 07:40:08.199452 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 07:40:08.200288 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 07:40:08.202721 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 07:40:08.214106 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jul 11 07:40:08.211163 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 07:40:08.212025 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 11 07:40:08.229273 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 11 07:40:08.229330 kernel: GPT:17805311 != 20971519
Jul 11 07:40:08.229344 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 11 07:40:08.229357 kernel: GPT:17805311 != 20971519
Jul 11 07:40:08.229369 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 11 07:40:08.229383 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 07:40:08.235097 kernel: libata version 3.00 loaded.
Jul 11 07:40:08.238231 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 11 07:40:08.244893 kernel: scsi host0: ata_piix
Jul 11 07:40:08.245080 kernel: scsi host1: ata_piix
Jul 11 07:40:08.245190 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 lpm-pol 0
Jul 11 07:40:08.245204 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 lpm-pol 0
Jul 11 07:40:08.314851 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 11 07:40:08.320155 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 07:40:08.340417 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 11 07:40:08.351947 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 07:40:08.361213 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 11 07:40:08.361958 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 11 07:40:08.367211 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 11 07:40:08.412253 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 11 07:40:08.419417 disk-uuid[559]: Primary Header is updated.
Jul 11 07:40:08.419417 disk-uuid[559]: Secondary Entries is updated.
Jul 11 07:40:08.419417 disk-uuid[559]: Secondary Header is updated.
Jul 11 07:40:08.422960 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 07:40:08.426011 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 07:40:08.429550 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 07:40:08.438254 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 07:40:08.446471 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 11 07:40:08.465027 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 07:40:08.499874 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 07:40:09.463150 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 07:40:09.468791 disk-uuid[563]: The operation has completed successfully.
Jul 11 07:40:09.583213 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 11 07:40:09.583383 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 11 07:40:09.639285 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 11 07:40:09.664626 sh[585]: Success
Jul 11 07:40:09.707382 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 11 07:40:09.707661 kernel: device-mapper: uevent: version 1.0.3
Jul 11 07:40:09.709356 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 11 07:40:09.739145 kernel: device-mapper: verity: sha256 using shash "sha256-ssse3"
Jul 11 07:40:09.831927 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 11 07:40:09.841283 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 11 07:40:09.853387 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 11 07:40:09.880135 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 11 07:40:09.888724 kernel: BTRFS: device fsid 5947ac9d-360e-47c3-9a17-c6b228910c06 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (598)
Jul 11 07:40:09.904405 kernel: BTRFS info (device dm-0): first mount of filesystem 5947ac9d-360e-47c3-9a17-c6b228910c06
Jul 11 07:40:09.904549 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 11 07:40:09.904582 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 11 07:40:09.927536 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 11 07:40:09.930323 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 11 07:40:09.931833 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 11 07:40:09.935322 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 11 07:40:09.941223 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 11 07:40:10.008180 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (633)
Jul 11 07:40:10.017559 kernel: BTRFS info (device vda6): first mount of filesystem da2de3c6-95dc-4a43-9a95-74c8b7ce9719
Jul 11 07:40:10.017689 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 07:40:10.019977 kernel: BTRFS info (device vda6): using free-space-tree
Jul 11 07:40:10.034199 kernel: BTRFS info (device vda6): last unmount of filesystem da2de3c6-95dc-4a43-9a95-74c8b7ce9719
Jul 11 07:40:10.035506 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 11 07:40:10.044342 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 11 07:40:10.111364 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 07:40:10.116787 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 07:40:10.166527 systemd-networkd[767]: lo: Link UP
Jul 11 07:40:10.166538 systemd-networkd[767]: lo: Gained carrier
Jul 11 07:40:10.167933 systemd-networkd[767]: Enumeration completed
Jul 11 07:40:10.168514 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 07:40:10.168519 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 07:40:10.169641 systemd-networkd[767]: eth0: Link UP
Jul 11 07:40:10.169645 systemd-networkd[767]: eth0: Gained carrier
Jul 11 07:40:10.169655 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 07:40:10.170280 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 07:40:10.170898 systemd[1]: Reached target network.target - Network.
Jul 11 07:40:10.201429 systemd-networkd[767]: eth0: DHCPv4 address 172.24.4.223/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jul 11 07:40:10.289266 ignition[692]: Ignition 2.21.0
Jul 11 07:40:10.289286 ignition[692]: Stage: fetch-offline
Jul 11 07:40:10.289361 ignition[692]: no configs at "/usr/lib/ignition/base.d"
Jul 11 07:40:10.289377 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 11 07:40:10.292001 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 07:40:10.289564 ignition[692]: parsed url from cmdline: ""
Jul 11 07:40:10.289570 ignition[692]: no config URL provided
Jul 11 07:40:10.289578 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Jul 11 07:40:10.289594 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Jul 11 07:40:10.289599 ignition[692]: failed to fetch config: resource requires networking
Jul 11 07:40:10.289855 ignition[692]: Ignition finished successfully
Jul 11 07:40:10.296258 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 11 07:40:10.357549 ignition[778]: Ignition 2.21.0
Jul 11 07:40:10.357567 ignition[778]: Stage: fetch
Jul 11 07:40:10.357822 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Jul 11 07:40:10.357836 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 11 07:40:10.357965 ignition[778]: parsed url from cmdline: ""
Jul 11 07:40:10.357970 ignition[778]: no config URL provided
Jul 11 07:40:10.357975 ignition[778]: reading system config file "/usr/lib/ignition/user.ign"
Jul 11 07:40:10.357985 ignition[778]: no config at "/usr/lib/ignition/user.ign"
Jul 11 07:40:10.358184 ignition[778]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jul 11 07:40:10.358562 ignition[778]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jul 11 07:40:10.358698 ignition[778]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jul 11 07:40:10.626149 ignition[778]: GET result: OK
Jul 11 07:40:10.627570 ignition[778]: parsing config with SHA512: 7b61e9b4c2454f2b5145a9a1a357c16c6a1f27a34e9b9853e3e616577f97a192555b8cd9c50c3f789432f304912ace068ca71e6794b3b4cca259b57e18affa04
Jul 11 07:40:10.651307 unknown[778]: fetched base config from "system"
Jul 11 07:40:10.651346 unknown[778]: fetched base config from "system"
Jul 11 07:40:10.652257 ignition[778]: fetch: fetch complete
Jul 11 07:40:10.651374 unknown[778]: fetched user config from "openstack"
Jul 11 07:40:10.652270 ignition[778]: fetch: fetch passed
Jul 11 07:40:10.660006 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 11 07:40:10.652394 ignition[778]: Ignition finished successfully
Jul 11 07:40:10.665378 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 11 07:40:10.780290 ignition[784]: Ignition 2.21.0
Jul 11 07:40:10.780429 ignition[784]: Stage: kargs
Jul 11 07:40:10.787311 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 11 07:40:10.780737 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Jul 11 07:40:10.780755 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 11 07:40:10.782280 ignition[784]: kargs: kargs passed
Jul 11 07:40:10.793559 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 11 07:40:10.782363 ignition[784]: Ignition finished successfully
Jul 11 07:40:10.830428 ignition[791]: Ignition 2.21.0
Jul 11 07:40:10.830445 ignition[791]: Stage: disks
Jul 11 07:40:10.830856 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Jul 11 07:40:10.830890 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 11 07:40:10.836251 ignition[791]: disks: disks passed
Jul 11 07:40:10.837911 ignition[791]: Ignition finished successfully
Jul 11 07:40:10.840648 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 11 07:40:10.841918 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 11 07:40:10.843342 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 11 07:40:10.845609 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 07:40:10.847775 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 07:40:10.849567 systemd[1]: Reached target basic.target - Basic System.
Jul 11 07:40:10.852847 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 11 07:40:10.891550 systemd-fsck[799]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Jul 11 07:40:10.903293 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 11 07:40:10.908557 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 11 07:40:11.146635 kernel: EXT4-fs (vda9): mounted filesystem 68e263c6-913a-4fa8-894f-6e89b186e148 r/w with ordered data mode. Quota mode: none.
Jul 11 07:40:11.149048 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 11 07:40:11.152131 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 11 07:40:11.156857 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 07:40:11.161231 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 11 07:40:11.174908 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 11 07:40:11.181620 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jul 11 07:40:11.186891 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 11 07:40:11.189563 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 07:40:11.198329 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 11 07:40:11.206646 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 11 07:40:11.224537 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (807)
Jul 11 07:40:11.224564 kernel: BTRFS info (device vda6): first mount of filesystem da2de3c6-95dc-4a43-9a95-74c8b7ce9719
Jul 11 07:40:11.224588 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 07:40:11.224615 kernel: BTRFS info (device vda6): using free-space-tree
Jul 11 07:40:11.234986 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 07:40:11.333200 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 11 07:40:11.356569 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Jul 11 07:40:11.364623 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Jul 11 07:40:11.371809 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Jul 11 07:40:11.380178 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 11 07:40:11.480476 systemd-networkd[767]: eth0: Gained IPv6LL
Jul 11 07:40:11.581643 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 11 07:40:11.586796 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 11 07:40:11.597664 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 11 07:40:11.635499 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 11 07:40:11.642577 kernel: BTRFS info (device vda6): last unmount of filesystem da2de3c6-95dc-4a43-9a95-74c8b7ce9719
Jul 11 07:40:11.660788 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 11 07:40:11.707173 ignition[926]: INFO : Ignition 2.21.0
Jul 11 07:40:11.709369 ignition[926]: INFO : Stage: mount
Jul 11 07:40:11.709369 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 07:40:11.709369 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 11 07:40:11.712596 ignition[926]: INFO : mount: mount passed
Jul 11 07:40:11.712596 ignition[926]: INFO : Ignition finished successfully
Jul 11 07:40:11.713312 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 11 07:40:12.387150 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 11 07:40:14.407164 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 11 07:40:18.426156 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 11 07:40:18.446789 coreos-metadata[809]: Jul 11 07:40:18.446 WARN failed to locate config-drive, using the metadata service API instead
Jul 11 07:40:18.500155 coreos-metadata[809]: Jul 11 07:40:18.499 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jul 11 07:40:18.517519 coreos-metadata[809]: Jul 11 07:40:18.517 INFO Fetch successful
Jul 11 07:40:18.519771 coreos-metadata[809]: Jul 11 07:40:18.519 INFO wrote hostname ci-4392-0-0-n-cdb6f4f5a9.novalocal to /sysroot/etc/hostname
Jul 11 07:40:18.531043 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jul 11 07:40:18.531511 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jul 11 07:40:18.543043 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 11 07:40:18.596006 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 07:40:18.638187 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (941)
Jul 11 07:40:18.645777 kernel: BTRFS info (device vda6): first mount of filesystem da2de3c6-95dc-4a43-9a95-74c8b7ce9719
Jul 11 07:40:18.645870 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 07:40:18.649658 kernel: BTRFS info (device vda6): using free-space-tree
Jul 11 07:40:18.663993 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 07:40:18.724391 ignition[959]: INFO : Ignition 2.21.0
Jul 11 07:40:18.724391 ignition[959]: INFO : Stage: files
Jul 11 07:40:18.727371 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 07:40:18.727371 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 11 07:40:18.732341 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Jul 11 07:40:18.734719 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 11 07:40:18.734719 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 11 07:40:18.739566 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 11 07:40:18.739566 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 11 07:40:18.743728 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 11 07:40:18.740989 unknown[959]: wrote ssh authorized keys file for user: core
Jul 11 07:40:18.747381 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 11 07:40:18.747381 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 11 07:40:18.838936 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 11 07:40:19.629910 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 11 07:40:19.629910 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 11 07:40:19.635049 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 11 07:40:19.635049 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 07:40:19.635049 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 07:40:19.635049 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 07:40:19.635049 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 07:40:19.635049 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 07:40:19.635049 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 07:40:19.650312 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 07:40:19.650312 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 07:40:19.650312 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 11 07:40:19.650312 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 11 07:40:19.650312 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 11 07:40:19.650312 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 11 07:40:20.426928 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 11 07:40:22.114807 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 11 07:40:22.114807 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 11 07:40:22.120097 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 07:40:22.124989 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 07:40:22.124989 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 11 07:40:22.124989 ignition[959]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 11 07:40:22.132987 ignition[959]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 11 07:40:22.132987 ignition[959]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 07:40:22.132987 ignition[959]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 07:40:22.132987 ignition[959]: INFO : files: files passed
Jul 11 07:40:22.132987 ignition[959]: INFO : Ignition finished successfully
Jul 11 07:40:22.127782 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 11 07:40:22.133196 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 11 07:40:22.138176 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 11 07:40:22.151836 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 11 07:40:22.151942 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 11 07:40:22.166847 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 07:40:22.166847 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 07:40:22.168486 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 07:40:22.169357 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 07:40:22.171797 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 11 07:40:22.175840 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 11 07:40:22.232203 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 11 07:40:22.233320 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 11 07:40:22.234810 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 11 07:40:22.236559 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 11 07:40:22.238881 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 11 07:40:22.239817 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 11 07:40:22.269501 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 07:40:22.273195 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 11 07:40:22.305017 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 11 07:40:22.308812 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 07:40:22.310708 systemd[1]: Stopped target timers.target - Timer Units.
Jul 11 07:40:22.313790 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 11 07:40:22.314297 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 07:40:22.317296 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 11 07:40:22.319222 systemd[1]: Stopped target basic.target - Basic System.
Jul 11 07:40:22.322204 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 11 07:40:22.324871 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 07:40:22.327532 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 11 07:40:22.330432 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 11 07:40:22.333763 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 11 07:40:22.336693 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 07:40:22.339934 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 11 07:40:22.342761 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 11 07:40:22.346183 systemd[1]: Stopped target swap.target - Swaps.
Jul 11 07:40:22.348879 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 11 07:40:22.349356 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 07:40:22.352366 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 11 07:40:22.354418 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 07:40:22.356881 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 11 07:40:22.357658 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 07:40:22.360012 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 11 07:40:22.360599 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 11 07:40:22.364326 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 11 07:40:22.364666 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 07:40:22.367669 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 11 07:40:22.367951 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 11 07:40:22.373546 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 11 07:40:22.377417 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 11 07:40:22.379367 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 07:40:22.389692 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 11 07:40:22.394651 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 11 07:40:22.395006 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 07:40:22.401395 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 11 07:40:22.401530 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 07:40:22.408185 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 11 07:40:22.408832 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 11 07:40:22.426508 ignition[1013]: INFO : Ignition 2.21.0
Jul 11 07:40:22.428153 ignition[1013]: INFO : Stage: umount
Jul 11 07:40:22.428153 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 07:40:22.428153 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 11 07:40:22.431012 ignition[1013]: INFO : umount: umount passed
Jul 11 07:40:22.431689 ignition[1013]: INFO : Ignition finished successfully
Jul 11 07:40:22.434861 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 11 07:40:22.435944 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 11 07:40:22.438210 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 11 07:40:22.439680 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 11 07:40:22.440416 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 11 07:40:22.441627 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 11 07:40:22.441680 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 11 07:40:22.442758 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 11 07:40:22.442804 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 11 07:40:22.443336 systemd[1]: Stopped target network.target - Network.
Jul 11 07:40:22.444443 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 11 07:40:22.444495 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 07:40:22.445576 systemd[1]: Stopped target paths.target - Path Units.
Jul 11 07:40:22.446574 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 11 07:40:22.446808 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 07:40:22.447646 systemd[1]: Stopped target slices.target - Slice Units.
Jul 11 07:40:22.448781 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 11 07:40:22.449967 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 11 07:40:22.450082 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 07:40:22.450972 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 11 07:40:22.451008 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 07:40:22.452201 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 11 07:40:22.452264 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 11 07:40:22.453422 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 11 07:40:22.453466 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 11 07:40:22.454581 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 11 07:40:22.455883 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 11 07:40:22.458482 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 11 07:40:22.458577 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 11 07:40:22.460040 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 11 07:40:22.460162 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 11 07:40:22.471994 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 11 07:40:22.472179 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 11 07:40:22.476357 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 11 07:40:22.476607 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 11 07:40:22.478082 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 11 07:40:22.480386 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 11 07:40:22.481864 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 11 07:40:22.482524 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 11 07:40:22.482590 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 07:40:22.484558 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 11 07:40:22.486375 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 11 07:40:22.486430 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 07:40:22.487568 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 11 07:40:22.487619 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 11 07:40:22.490095 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 11 07:40:22.490146 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 11 07:40:22.492095 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 11 07:40:22.492156 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 07:40:22.493820 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 07:40:22.495920 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 11 07:40:22.495990 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 11 07:40:22.506792 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 11 07:40:22.512222 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 07:40:22.513728 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 11 07:40:22.513778 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 11 07:40:22.515040 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 11 07:40:22.515116 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 07:40:22.516286 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 11 07:40:22.516349 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 07:40:22.517978 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 11 07:40:22.518037 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 11 07:40:22.519212 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 11 07:40:22.519262 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 07:40:22.521044 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 11 07:40:22.522414 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 11 07:40:22.522495 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 07:40:22.524774 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 11 07:40:22.524826 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 07:40:22.527003 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 07:40:22.527111 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 07:40:22.531786 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 11 07:40:22.531858 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 11 07:40:22.531908 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 11 07:40:22.532349 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 11 07:40:22.532452 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 11 07:40:22.537878 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 11 07:40:22.538014 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 11 07:40:22.539879 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 11 07:40:22.541476 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 11 07:40:22.561229 systemd[1]: Switching root.
Jul 11 07:40:22.601848 systemd-journald[213]: Journal stopped
Jul 11 07:40:24.554864 systemd-journald[213]: Received SIGTERM from PID 1 (systemd).
Jul 11 07:40:24.554959 kernel: SELinux: policy capability network_peer_controls=1
Jul 11 07:40:24.554978 kernel: SELinux: policy capability open_perms=1
Jul 11 07:40:24.554990 kernel: SELinux: policy capability extended_socket_class=1
Jul 11 07:40:24.555004 kernel: SELinux: policy capability always_check_network=0
Jul 11 07:40:24.555015 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 11 07:40:24.555027 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 11 07:40:24.555039 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 11 07:40:24.555051 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 11 07:40:24.558102 kernel: SELinux: policy capability userspace_initial_context=0
Jul 11 07:40:24.558124 kernel: audit: type=1403 audit(1752219623.285:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 11 07:40:24.558177 systemd[1]: Successfully loaded SELinux policy in 86.477ms.
Jul 11 07:40:24.558199 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.559ms.
Jul 11 07:40:24.558214 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 11 07:40:24.558232 systemd[1]: Detected virtualization kvm.
Jul 11 07:40:24.558244 systemd[1]: Detected architecture x86-64.
Jul 11 07:40:24.558256 systemd[1]: Detected first boot.
Jul 11 07:40:24.558269 systemd[1]: Hostname set to <ci-4392-0-0-n-cdb6f4f5a9.novalocal>.
Jul 11 07:40:24.558285 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 07:40:24.558297 zram_generator::config[1057]: No configuration found.
Jul 11 07:40:24.558311 kernel: Guest personality initialized and is inactive
Jul 11 07:40:24.558323 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 11 07:40:24.558335 kernel: Initialized host personality
Jul 11 07:40:24.558347 kernel: NET: Registered PF_VSOCK protocol family
Jul 11 07:40:24.558359 systemd[1]: Populated /etc with preset unit settings.
Jul 11 07:40:24.558373 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 11 07:40:24.558388 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 11 07:40:24.558400 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 11 07:40:24.558412 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 11 07:40:24.558425 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 11 07:40:24.558441 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 11 07:40:24.558454 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 11 07:40:24.558469 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 11 07:40:24.558482 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 11 07:40:24.558495 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 11 07:40:24.558509 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 11 07:40:24.558522 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 11 07:40:24.558534 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 07:40:24.558547 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 07:40:24.558560 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 11 07:40:24.558597 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 11 07:40:24.558615 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 11 07:40:24.558628 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 07:40:24.558640 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 11 07:40:24.558653 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 07:40:24.558688 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 07:40:24.558702 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 11 07:40:24.558715 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 11 07:40:24.560465 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 11 07:40:24.560486 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 11 07:40:24.560504 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 07:40:24.560517 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 07:40:24.560529 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 07:40:24.560542 systemd[1]: Reached target swap.target - Swaps.
Jul 11 07:40:24.560554 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 11 07:40:24.560566 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 11 07:40:24.560579 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 11 07:40:24.560592 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 07:40:24.560631 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 07:40:24.560648 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 07:40:24.560678 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 11 07:40:24.561194 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 11 07:40:24.561216 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 11 07:40:24.561229 systemd[1]: Mounting media.mount - External Media Directory...
Jul 11 07:40:24.561241 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 07:40:24.561253 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 11 07:40:24.561266 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 11 07:40:24.561278 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 11 07:40:24.561295 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 11 07:40:24.561308 systemd[1]: Reached target machines.target - Containers.
Jul 11 07:40:24.561320 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 11 07:40:24.561333 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 07:40:24.561345 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 07:40:24.561357 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 11 07:40:24.561370 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 07:40:24.561382 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 07:40:24.561396 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 07:40:24.561409 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 11 07:40:24.561422 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 07:40:24.561435 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 11 07:40:24.561447 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 11 07:40:24.561460 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 11 07:40:24.561472 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 11 07:40:24.561485 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 11 07:40:24.561499 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 07:40:24.561513 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 07:40:24.561527 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 07:40:24.561539 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 11 07:40:24.561552 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 11 07:40:24.561565 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 11 07:40:24.561581 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 07:40:24.563118 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 11 07:40:24.563133 systemd[1]: Stopped verity-setup.service.
Jul 11 07:40:24.563147 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 07:40:24.563165 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 11 07:40:24.563180 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 11 07:40:24.563193 systemd[1]: Mounted media.mount - External Media Directory.
Jul 11 07:40:24.563205 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 11 07:40:24.563218 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 11 07:40:24.563230 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 11 07:40:24.563244 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 07:40:24.563256 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 11 07:40:24.563269 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 11 07:40:24.563281 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 07:40:24.563296 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 07:40:24.563309 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 07:40:24.563322 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 07:40:24.563335 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 07:40:24.563348 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 07:40:24.563390 systemd-journald[1148]: Collecting audit messages is disabled.
Jul 11 07:40:24.563419 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 11 07:40:24.563435 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 11 07:40:24.563447 kernel: loop: module loaded
Jul 11 07:40:24.563460 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 11 07:40:24.563472 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 11 07:40:24.563485 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 07:40:24.563500 systemd-journald[1148]: Journal started
Jul 11 07:40:24.563528 systemd-journald[1148]: Runtime Journal (/run/log/journal/8f6c2e65d78b45ef953438b1e0d3fa6e) is 8M, max 78.5M, 70.5M free.
Jul 11 07:40:24.180335 systemd[1]: Queued start job for default target multi-user.target.
Jul 11 07:40:24.201821 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 11 07:40:24.570113 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 11 07:40:24.202348 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 11 07:40:24.579120 kernel: fuse: init (API version 7.41)
Jul 11 07:40:24.579190 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 11 07:40:24.579214 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 07:40:24.584121 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 11 07:40:24.593088 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 07:40:24.598086 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 11 07:40:24.605447 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 07:40:24.605504 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 11 07:40:24.612088 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 07:40:24.615115 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 11 07:40:24.616636 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 11 07:40:24.618129 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 11 07:40:24.618913 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 07:40:24.619217 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 07:40:24.619968 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 11 07:40:24.621516 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 11 07:40:24.645091 kernel: ACPI: bus type drm_connector registered
Jul 11 07:40:24.648269 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 11 07:40:24.650626 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 11 07:40:24.651895 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 07:40:24.656615 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 11 07:40:24.665102 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 07:40:24.666163 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 07:40:24.667962 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 11 07:40:24.673715 kernel: loop0: detected capacity change from 0 to 221472
Jul 11 07:40:24.674172 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 07:40:24.692617 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 11 07:40:24.694681 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 11 07:40:24.699120 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 11 07:40:24.715299 systemd-journald[1148]: Time spent on flushing to /var/log/journal/8f6c2e65d78b45ef953438b1e0d3fa6e is 46.081ms for 977 entries.
Jul 11 07:40:24.715299 systemd-journald[1148]: System Journal (/var/log/journal/8f6c2e65d78b45ef953438b1e0d3fa6e) is 8M, max 584.8M, 576.8M free.
Jul 11 07:40:24.810392 systemd-journald[1148]: Received client request to flush runtime journal.
Jul 11 07:40:24.810651 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 11 07:40:24.764797 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 07:40:24.813097 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 11 07:40:24.826462 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 11 07:40:24.829188 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 07:40:24.832017 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 11 07:40:24.838089 kernel: loop1: detected capacity change from 0 to 8
Jul 11 07:40:24.858467 kernel: loop2: detected capacity change from 0 to 114000
Jul 11 07:40:24.866550 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Jul 11 07:40:24.866969 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Jul 11 07:40:24.872538 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 07:40:24.904118 kernel: loop3: detected capacity change from 0 to 146488
Jul 11 07:40:24.956101 kernel: loop4: detected capacity change from 0 to 221472
Jul 11 07:40:25.054101 kernel: loop5: detected capacity change from 0 to 8
Jul 11 07:40:25.070547 kernel: loop6: detected capacity change from 0 to 114000
Jul 11 07:40:25.102129 kernel: loop7: detected capacity change from 0 to 146488
Jul 11 07:40:25.164193 (sd-merge)[1221]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jul 11 07:40:25.164714 (sd-merge)[1221]: Merged extensions into '/usr'.
Jul 11 07:40:25.177827 systemd[1]: Reload requested from client PID 1177 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 11 07:40:25.178152 systemd[1]: Reloading...
Jul 11 07:40:25.327097 zram_generator::config[1246]: No configuration found.
Jul 11 07:40:25.479013 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 07:40:25.599723 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 11 07:40:25.599977 systemd[1]: Reloading finished in 421 ms.
Jul 11 07:40:25.621270 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 11 07:40:25.634562 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 07:40:25.654238 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 11 07:40:25.659728 systemd[1]: Starting ensure-sysext.service...
Jul 11 07:40:25.664273 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 07:40:25.689011 systemd[1]: Reload requested from client PID 1305 ('systemctl') (unit ensure-sysext.service)...
Jul 11 07:40:25.689032 systemd[1]: Reloading...
Jul 11 07:40:25.696515 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 11 07:40:25.696558 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 11 07:40:25.696887 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 11 07:40:25.697181 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 11 07:40:25.698043 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 11 07:40:25.699446 systemd-tmpfiles[1306]: ACLs are not supported, ignoring.
Jul 11 07:40:25.699503 systemd-tmpfiles[1306]: ACLs are not supported, ignoring.
Jul 11 07:40:25.704482 systemd-udevd[1303]: Using default interface naming scheme 'v255'.
Jul 11 07:40:25.707245 ldconfig[1173]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 11 07:40:25.709751 systemd-tmpfiles[1306]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 07:40:25.709766 systemd-tmpfiles[1306]: Skipping /boot
Jul 11 07:40:25.721702 systemd-tmpfiles[1306]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 07:40:25.721810 systemd-tmpfiles[1306]: Skipping /boot
Jul 11 07:40:25.798096 zram_generator::config[1334]: No configuration found.
Jul 11 07:40:26.067340 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 07:40:26.104086 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 11 07:40:26.138115 kernel: mousedev: PS/2 mouse device common for all mice
Jul 11 07:40:26.164077 kernel: ACPI: button: Power Button [PWRF]
Jul 11 07:40:26.254668 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 11 07:40:26.255203 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 07:40:26.256289 systemd[1]: Reloading finished in 566 ms.
Jul 11 07:40:26.268646 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 07:40:26.270573 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 11 07:40:26.275107 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jul 11 07:40:26.284889 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 07:40:26.303094 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 11 07:40:26.318073 systemd[1]: Finished ensure-sysext.service.
Jul 11 07:40:26.363152 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 07:40:26.366465 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 11 07:40:26.371469 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 11 07:40:26.373301 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 07:40:26.379419 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 07:40:26.388364 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 07:40:26.395648 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 07:40:26.400283 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 07:40:26.402290 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 07:40:26.404384 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 11 07:40:26.406121 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 07:40:26.412296 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 11 07:40:26.418744 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 07:40:26.427827 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 07:40:26.435478 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 11 07:40:26.445018 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 11 07:40:26.452443 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 07:40:26.453205 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 07:40:26.453970 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 07:40:26.455039 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 07:40:26.455855 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 07:40:26.456140 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 07:40:26.461349 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jul 11 07:40:26.462464 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 07:40:26.469387 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 11 07:40:26.471854 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 07:40:26.472480 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 07:40:26.476690 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 07:40:26.477543 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 07:40:26.479486 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 07:40:26.527220 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 11 07:40:26.535948 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 11 07:40:26.539825 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 11 07:40:26.542171 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jul 11 07:40:26.548330 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 11 07:40:26.582751 kernel: Console: switching to colour dummy device 80x25
Jul 11 07:40:26.589670 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jul 11 07:40:26.589740 kernel: [drm] features: -context_init
Jul 11 07:40:26.592671 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 07:40:26.593107 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 07:40:26.598104 kernel: [drm] number of scanouts: 1
Jul 11 07:40:26.602084 kernel: [drm] number of cap sets: 0
Jul 11 07:40:26.602273 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 11 07:40:26.603875 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 11 07:40:26.614092 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jul 11 07:40:26.616374 augenrules[1490]: No rules
Jul 11 07:40:26.617598 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 07:40:26.618152 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 11 07:40:26.618500 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 11 07:40:26.645779 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 11 07:40:26.646082 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 11 07:40:26.659755 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 11 07:40:26.704025 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 07:40:26.765803 systemd-resolved[1458]: Positive Trust Anchors:
Jul 11 07:40:26.766171 systemd-resolved[1458]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 07:40:26.766269 systemd-resolved[1458]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 07:40:26.771868 systemd-resolved[1458]: Using system hostname 'ci-4392-0-0-n-cdb6f4f5a9.novalocal'.
Jul 11 07:40:26.774507 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 07:40:26.774698 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 07:40:26.774881 systemd-networkd[1457]: lo: Link UP
Jul 11 07:40:26.774892 systemd-networkd[1457]: lo: Gained carrier
Jul 11 07:40:26.777880 systemd-networkd[1457]: Enumeration completed
Jul 11 07:40:26.777968 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 07:40:26.778126 systemd[1]: Reached target network.target - Network.
Jul 11 07:40:26.780196 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 11 07:40:26.783158 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 07:40:26.783169 systemd-networkd[1457]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 07:40:26.784746 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 11 07:40:26.786952 systemd-networkd[1457]: eth0: Link UP
Jul 11 07:40:26.787149 systemd-networkd[1457]: eth0: Gained carrier
Jul 11 07:40:26.787166 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 07:40:26.796011 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 11 07:40:26.796254 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 07:40:26.796407 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 11 07:40:26.796497 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 11 07:40:26.796565 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 11 07:40:26.796638 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 11 07:40:26.796690 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 11 07:40:26.796723 systemd[1]: Reached target paths.target - Path Units.
Jul 11 07:40:26.796787 systemd[1]: Reached target time-set.target - System Time Set.
Jul 11 07:40:26.797132 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 11 07:40:26.797345 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 11 07:40:26.797443 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 07:40:26.799047 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 11 07:40:26.801267 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 11 07:40:26.805339 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 11 07:40:26.805681 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 11 07:40:26.805773 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 11 07:40:26.806989 systemd-networkd[1457]: eth0: DHCPv4 address 172.24.4.223/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jul 11 07:40:26.808379 systemd-timesyncd[1459]: Network configuration changed, trying to establish connection.
Jul 11 07:40:26.813988 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 11 07:40:26.814476 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 11 07:40:26.815568 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 11 07:40:26.816855 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 07:40:26.817226 systemd[1]: Reached target basic.target - Basic System.
Jul 11 07:40:26.817397 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 11 07:40:26.817447 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 11 07:40:26.818667 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 11 07:40:26.821220 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 11 07:40:26.822972 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 11 07:40:26.828730 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 11 07:40:26.831648 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 11 07:40:26.833628 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 11 07:40:26.834192 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 11 07:40:26.838362 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 11 07:40:26.841494 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 11 07:40:26.848257 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 11 07:40:26.850573 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 11 07:40:26.855313 jq[1520]: false
Jul 11 07:40:26.855961 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 11 07:40:26.868496 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 11 07:40:26.875318 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 11 07:40:26.878816 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 11 07:40:26.879621 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 11 07:40:26.881107 extend-filesystems[1521]: Found /dev/vda6
Jul 11 07:40:26.881527 systemd[1]: Starting update-engine.service - Update Engine...
Jul 11 07:40:26.889450 extend-filesystems[1521]: Found /dev/vda9
Jul 11 07:40:26.889962 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 11 07:40:26.892421 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 11 07:40:26.896165 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Refreshing passwd entry cache
Jul 11 07:40:26.893454 oslogin_cache_refresh[1522]: Refreshing passwd entry cache
Jul 11 07:40:26.896813 extend-filesystems[1521]: Checking size of /dev/vda9
Jul 11 07:40:26.897267 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 11 07:40:26.897615 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 11 07:40:26.898114 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 11 07:40:26.900508 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 11 07:40:26.900709 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 11 07:40:26.914580 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Failure getting users, quitting
Jul 11 07:40:26.914580 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 11 07:40:26.914564 oslogin_cache_refresh[1522]: Failure getting users, quitting
Jul 11 07:40:26.914764 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Refreshing group entry cache
Jul 11 07:40:26.914595 oslogin_cache_refresh[1522]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 11 07:40:26.914661 oslogin_cache_refresh[1522]: Refreshing group entry cache
Jul 11 07:40:26.922617 systemd[1]: motdgen.service: Deactivated successfully.
Jul 11 07:40:26.923199 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 11 07:40:26.927010 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Failure getting groups, quitting
Jul 11 07:40:26.927010 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 11 07:40:26.926462 oslogin_cache_refresh[1522]: Failure getting groups, quitting
Jul 11 07:40:26.926476 oslogin_cache_refresh[1522]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 11 07:40:26.927791 jq[1537]: true
Jul 11 07:40:26.935884 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 11 07:40:27.483648 systemd-timesyncd[1459]: Contacted time server 216.82.35.115:123 (0.flatcar.pool.ntp.org).
Jul 11 07:40:27.483735 systemd-timesyncd[1459]: Initial clock synchronization to Fri 2025-07-11 07:40:27.483520 UTC.
Jul 11 07:40:27.485141 systemd-resolved[1458]: Clock change detected. Flushing caches.
Jul 11 07:40:27.489167 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 11 07:40:27.490402 extend-filesystems[1521]: Resized partition /dev/vda9
Jul 11 07:40:27.496129 extend-filesystems[1562]: resize2fs 1.47.2 (1-Jan-2025)
Jul 11 07:40:27.509042 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Jul 11 07:40:27.518623 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Jul 11 07:40:27.518684 update_engine[1535]: I20250711 07:40:27.518357 1535 main.cc:92] Flatcar Update Engine starting
Jul 11 07:40:27.567866 jq[1557]: true
Jul 11 07:40:27.538022 dbus-daemon[1518]: [system] SELinux support is enabled
Jul 11 07:40:27.568431 update_engine[1535]: I20250711 07:40:27.560144 1535 update_check_scheduler.cc:74] Next update check in 8m16s
Jul 11 07:40:27.568476 tar[1544]: linux-amd64/helm
Jul 11 07:40:27.523939 (ntainerd)[1563]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 11 07:40:27.538202 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 11 07:40:27.550352 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 11 07:40:27.550389 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 11 07:40:27.550514 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 11 07:40:27.550531 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 11 07:40:27.562406 systemd[1]: Started update-engine.service - Update Engine.
Jul 11 07:40:27.569396 extend-filesystems[1562]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 11 07:40:27.569396 extend-filesystems[1562]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 11 07:40:27.569396 extend-filesystems[1562]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Jul 11 07:40:27.566087 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 11 07:40:27.575186 extend-filesystems[1521]: Resized filesystem in /dev/vda9
Jul 11 07:40:27.573167 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 11 07:40:27.573685 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 11 07:40:27.597668 systemd-logind[1532]: New seat seat0.
Jul 11 07:40:27.612658 systemd-logind[1532]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 11 07:40:27.613180 systemd-logind[1532]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 11 07:40:27.613525 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 11 07:40:27.694206 bash[1584]: Updated "/home/core/.ssh/authorized_keys" Jul 11 07:40:27.700247 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 11 07:40:27.721986 systemd[1]: Starting sshkeys.service... Jul 11 07:40:27.756531 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 11 07:40:27.762575 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 11 07:40:27.798136 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 11 07:40:27.955433 locksmithd[1568]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 11 07:40:28.020998 sshd_keygen[1556]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 11 07:40:28.063383 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 11 07:40:28.076910 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 11 07:40:28.100584 containerd[1563]: time="2025-07-11T07:40:28Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 11 07:40:28.106864 containerd[1563]: time="2025-07-11T07:40:28.104420357Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 11 07:40:28.109276 systemd[1]: issuegen.service: Deactivated successfully. Jul 11 07:40:28.109633 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 11 07:40:28.116409 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 11 07:40:28.139008 containerd[1563]: time="2025-07-11T07:40:28.138093311Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.732µs" Jul 11 07:40:28.139008 containerd[1563]: time="2025-07-11T07:40:28.138131483Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 11 07:40:28.139008 containerd[1563]: time="2025-07-11T07:40:28.138151630Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 11 07:40:28.139008 containerd[1563]: time="2025-07-11T07:40:28.138400247Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 11 07:40:28.139008 containerd[1563]: time="2025-07-11T07:40:28.138418842Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 11 07:40:28.139008 containerd[1563]: time="2025-07-11T07:40:28.138449349Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 11 07:40:28.139008 containerd[1563]: time="2025-07-11T07:40:28.138518198Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 11 07:40:28.139008 containerd[1563]: time="2025-07-11T07:40:28.138533687Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 11 07:40:28.139008 containerd[1563]: time="2025-07-11T07:40:28.138911976Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs 
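update-ssh-keys reports rewriting /home/core/.ssh/authorized_keys. Tools that manage that file typically stage the new key set in a temporary file and rename it into place, so sshd never observes a half-written file and the 0600 mode is set before the file becomes visible. A hedged sketch of that pattern (paths and key material are placeholders, not Flatcar's actual implementation):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // writeAuthorizedKeys stages the new key set in a temp file and renames
    // it into place, so a crash mid-write never leaves a truncated file.
    func writeAuthorizedKeys(dir string, keys []string) error {
        tmp, err := os.CreateTemp(dir, ".authorized_keys-*")
        if err != nil {
            return err
        }
        defer os.Remove(tmp.Name()) // no-op after a successful rename

        if _, err := tmp.WriteString(strings.Join(keys, "\n") + "\n"); err != nil {
            tmp.Close()
            return err
        }
        if err := tmp.Chmod(0o600); err != nil { // sshd rejects overly permissive files
            tmp.Close()
            return err
        }
        if err := tmp.Close(); err != nil {
            return err
        }
        return os.Rename(tmp.Name(), filepath.Join(dir, "authorized_keys"))
    }

    func main() {
        if err := writeAuthorizedKeys("/tmp", []string{"ssh-ed25519 AAAA... core"}); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }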
type=io.containerd.snapshotter.v1 Jul 11 07:40:28.139008 containerd[1563]: time="2025-07-11T07:40:28.138939187Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 11 07:40:28.139008 containerd[1563]: time="2025-07-11T07:40:28.138953324Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 11 07:40:28.144340 containerd[1563]: time="2025-07-11T07:40:28.138964174Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 11 07:40:28.144340 containerd[1563]: time="2025-07-11T07:40:28.143159723Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 11 07:40:28.144340 containerd[1563]: time="2025-07-11T07:40:28.143552109Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 11 07:40:28.144340 containerd[1563]: time="2025-07-11T07:40:28.143590591Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 11 07:40:28.144340 containerd[1563]: time="2025-07-11T07:40:28.143605700Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 11 07:40:28.144340 containerd[1563]: time="2025-07-11T07:40:28.143665361Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 11 07:40:28.144340 containerd[1563]: time="2025-07-11T07:40:28.144086551Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 11 07:40:28.144340 containerd[1563]: time="2025-07-11T07:40:28.144158947Z" level=info msg="metadata content store policy set" policy=shared Jul 11 07:40:28.146706 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 11 07:40:28.152091 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 11 07:40:28.155855 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 11 07:40:28.156688 systemd[1]: Reached target getty.target - Login Prompts. 
Jul 11 07:40:28.171266 containerd[1563]: time="2025-07-11T07:40:28.171119772Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 11 07:40:28.171838 containerd[1563]: time="2025-07-11T07:40:28.171561801Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 11 07:40:28.171838 containerd[1563]: time="2025-07-11T07:40:28.171597107Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 11 07:40:28.171838 containerd[1563]: time="2025-07-11T07:40:28.171612586Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 11 07:40:28.171838 containerd[1563]: time="2025-07-11T07:40:28.171638996Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 11 07:40:28.171838 containerd[1563]: time="2025-07-11T07:40:28.171654916Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 11 07:40:28.171838 containerd[1563]: time="2025-07-11T07:40:28.171673641Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 11 07:40:28.171838 containerd[1563]: time="2025-07-11T07:40:28.171693969Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 11 07:40:28.171838 containerd[1563]: time="2025-07-11T07:40:28.171710510Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 11 07:40:28.171838 containerd[1563]: time="2025-07-11T07:40:28.171728744Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 11 07:40:28.171838 containerd[1563]: time="2025-07-11T07:40:28.171741949Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 11 07:40:28.171838 containerd[1563]: time="2025-07-11T07:40:28.171767477Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 11 07:40:28.172845 containerd[1563]: time="2025-07-11T07:40:28.172811755Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 11 07:40:28.172938 containerd[1563]: time="2025-07-11T07:40:28.172911172Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 11 07:40:28.173098 containerd[1563]: time="2025-07-11T07:40:28.173080720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 11 07:40:28.173432 containerd[1563]: time="2025-07-11T07:40:28.173162533Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 11 07:40:28.173432 containerd[1563]: time="2025-07-11T07:40:28.173184054Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 11 07:40:28.173432 containerd[1563]: time="2025-07-11T07:40:28.173196307Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 11 07:40:28.173432 containerd[1563]: time="2025-07-11T07:40:28.173208760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 11 07:40:28.173432 containerd[1563]: time="2025-07-11T07:40:28.173224460Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 
11 07:40:28.173432 containerd[1563]: time="2025-07-11T07:40:28.173237935Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 11 07:40:28.173432 containerd[1563]: time="2025-07-11T07:40:28.173248965Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 11 07:40:28.173432 containerd[1563]: time="2025-07-11T07:40:28.173271888Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 11 07:40:28.173432 containerd[1563]: time="2025-07-11T07:40:28.173387375Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 11 07:40:28.173432 containerd[1563]: time="2025-07-11T07:40:28.173406190Z" level=info msg="Start snapshots syncer" Jul 11 07:40:28.173991 containerd[1563]: time="2025-07-11T07:40:28.173852437Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 11 07:40:28.177132 containerd[1563]: time="2025-07-11T07:40:28.174684979Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 11 07:40:28.177368 containerd[1563]: time="2025-07-11T07:40:28.177186029Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 11 07:40:28.177368 containerd[1563]: time="2025-07-11T07:40:28.177309792Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 11 07:40:28.177565 containerd[1563]: time="2025-07-11T07:40:28.177510508Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 11 07:40:28.177565 containerd[1563]: time="2025-07-11T07:40:28.177560782Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 11 07:40:28.177640 containerd[1563]: time="2025-07-11T07:40:28.177581692Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 11 07:40:28.177640 containerd[1563]: time="2025-07-11T07:40:28.177606057Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 11 07:40:28.177640 containerd[1563]: time="2025-07-11T07:40:28.177625694Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 11 07:40:28.177727 containerd[1563]: time="2025-07-11T07:40:28.177640081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 11 07:40:28.177727 containerd[1563]: time="2025-07-11T07:40:28.177657904Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 11 07:40:28.177727 containerd[1563]: time="2025-07-11T07:40:28.177690155Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 11 07:40:28.177727 containerd[1563]: time="2025-07-11T07:40:28.177708139Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 11 07:40:28.177727 containerd[1563]: time="2025-07-11T07:40:28.177726413Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 11 07:40:28.177845 containerd[1563]: time="2025-07-11T07:40:28.177760286Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 11 07:40:28.177845 containerd[1563]: time="2025-07-11T07:40:28.177784081Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 11 07:40:28.177845 containerd[1563]: time="2025-07-11T07:40:28.177800151Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 11 07:40:28.177845 containerd[1563]: time="2025-07-11T07:40:28.177815410Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 11 07:40:28.177845 containerd[1563]: time="2025-07-11T07:40:28.177825679Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 11 07:40:28.177845 containerd[1563]: time="2025-07-11T07:40:28.177840707Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 11 07:40:28.178048 containerd[1563]: time="2025-07-11T07:40:28.177858080Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 11 07:40:28.178048 containerd[1563]: time="2025-07-11T07:40:28.177878498Z" level=info msg="runtime interface created" Jul 11 07:40:28.178048 containerd[1563]: time="2025-07-11T07:40:28.177889208Z" level=info msg="created NRI interface" Jul 11 07:40:28.178048 containerd[1563]: time="2025-07-11T07:40:28.177899518Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 11 07:40:28.178048 containerd[1563]: time="2025-07-11T07:40:28.177919014Z" level=info msg="Connect containerd service" Jul 11 07:40:28.178048 containerd[1563]: time="2025-07-11T07:40:28.177956645Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 
11 07:40:28.181888 containerd[1563]: time="2025-07-11T07:40:28.180657650Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 07:40:28.391951 tar[1544]: linux-amd64/LICENSE Jul 11 07:40:28.391951 tar[1544]: linux-amd64/README.md Jul 11 07:40:28.419604 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 11 07:40:28.467053 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 11 07:40:28.488607 containerd[1563]: time="2025-07-11T07:40:28.488528803Z" level=info msg="Start subscribing containerd event" Jul 11 07:40:28.488732 containerd[1563]: time="2025-07-11T07:40:28.488618661Z" level=info msg="Start recovering state" Jul 11 07:40:28.489319 containerd[1563]: time="2025-07-11T07:40:28.488788830Z" level=info msg="Start event monitor" Jul 11 07:40:28.489319 containerd[1563]: time="2025-07-11T07:40:28.488818847Z" level=info msg="Start cni network conf syncer for default" Jul 11 07:40:28.489319 containerd[1563]: time="2025-07-11T07:40:28.488830479Z" level=info msg="Start streaming server" Jul 11 07:40:28.489319 containerd[1563]: time="2025-07-11T07:40:28.488842561Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 11 07:40:28.489319 containerd[1563]: time="2025-07-11T07:40:28.488851979Z" level=info msg="runtime interface starting up..." Jul 11 07:40:28.489319 containerd[1563]: time="2025-07-11T07:40:28.488859673Z" level=info msg="starting plugins..." Jul 11 07:40:28.489319 containerd[1563]: time="2025-07-11T07:40:28.488874802Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 11 07:40:28.491725 containerd[1563]: time="2025-07-11T07:40:28.489482972Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 07:40:28.491725 containerd[1563]: time="2025-07-11T07:40:28.489574754Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 11 07:40:28.491725 containerd[1563]: time="2025-07-11T07:40:28.489706051Z" level=info msg="containerd successfully booted in 0.389952s" Jul 11 07:40:28.490110 systemd[1]: Started containerd.service - containerd container runtime. Jul 11 07:40:28.921499 systemd-networkd[1457]: eth0: Gained IPv6LL Jul 11 07:40:28.924454 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 11 07:40:28.927769 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 11 07:40:28.929317 systemd[1]: Reached target network-online.target - Network is Online. Jul 11 07:40:28.935367 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 07:40:28.937546 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 11 07:40:29.007073 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 11 07:40:30.485137 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 11 07:40:30.949078 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 11 07:40:31.331682 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
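The one error in containerd's startup is benign at this stage: the CRI plugin found no network config under /etc/cni/net.d, so pod networking stays uninitialized until something (typically a CNI DaemonSet installed later) drops a conflist there. The check amounts to scanning the conf dir for config files; containerd's real loader lives in libcni, but a simplified Go stand-in looks like this:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // hasCNIConfig mirrors the check behind the error above: the CRI plugin
    // needs at least one network config file in the conf dir.
    func hasCNIConfig(confDir string) (bool, error) {
        for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
            matches, err := filepath.Glob(filepath.Join(confDir, pattern))
            if err != nil {
                return false, err
            }
            if len(matches) > 0 {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasCNIConfig("/etc/cni/net.d")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if !ok {
            fmt.Println("no network config found in /etc/cni/net.d; pod networking waits for a CNI plugin")
        }
    }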
Jul 11 07:40:31.345778 (kubelet)[1657]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 07:40:33.073354 kubelet[1657]: E0711 07:40:33.073133 1657 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 07:40:33.080894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 07:40:33.081093 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 07:40:33.082449 systemd[1]: kubelet.service: Consumed 2.578s CPU time, 266.4M memory peak. Jul 11 07:40:33.131378 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 11 07:40:33.134217 systemd[1]: Started sshd@0-172.24.4.223:22-172.24.4.1:37818.service - OpenSSH per-connection server daemon (172.24.4.1:37818). Jul 11 07:40:33.286516 login[1618]: pam_lastlog(login:session): file /var/log/lastlog is locked/read, retrying Jul 11 07:40:33.286765 login[1617]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 11 07:40:33.297388 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 11 07:40:33.298951 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 11 07:40:33.312158 systemd-logind[1532]: New session 1 of user core. Jul 11 07:40:33.328541 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 11 07:40:33.332483 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 11 07:40:33.352211 (systemd)[1674]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 07:40:33.361560 systemd-logind[1532]: New session c1 of user core. Jul 11 07:40:33.601387 systemd[1674]: Queued start job for default target default.target. Jul 11 07:40:33.622275 systemd[1674]: Created slice app.slice - User Application Slice. Jul 11 07:40:33.622306 systemd[1674]: Reached target paths.target - Paths. Jul 11 07:40:33.622356 systemd[1674]: Reached target timers.target - Timers. Jul 11 07:40:33.626079 systemd[1674]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 11 07:40:33.644367 systemd[1674]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 11 07:40:33.644544 systemd[1674]: Reached target sockets.target - Sockets. Jul 11 07:40:33.644614 systemd[1674]: Reached target basic.target - Basic System. Jul 11 07:40:33.644671 systemd[1674]: Reached target default.target - Main User Target. Jul 11 07:40:33.644720 systemd[1674]: Startup finished in 263ms. Jul 11 07:40:33.645798 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 11 07:40:33.659629 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 11 07:40:34.120428 sshd[1665]: Accepted publickey for core from 172.24.4.1 port 37818 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY Jul 11 07:40:34.126769 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:40:34.140815 systemd-logind[1532]: New session 3 of user core. Jul 11 07:40:34.149691 systemd[1]: Started session-3.scope - Session 3 of User core. 
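The kubelet crash above is the expected pre-bootstrap state: the unit starts before anything has provisioned /var/lib/kubelet/config.yaml, the config read fails with ENOENT, and the process exits 1, which systemd records as status=1/FAILURE and answers with scheduled restarts until the file appears. A minimal Go reproduction of that failure path, assuming only that the config is read from the path named in the error:

    package main

    import (
        "fmt"
        "os"
    )

    // loadConfig reproduces the failure mode above: exit non-zero when the
    // config file named by --config does not exist yet.
    func loadConfig(path string) ([]byte, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, fmt.Errorf("failed to load Kubelet config file %q: %w", path, err)
        }
        return data, nil
    }

    func main() {
        if _, err := loadConfig("/var/lib/kubelet/config.yaml"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1) // systemd sees status=1/FAILURE and schedules a restart
        }
    }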
Jul 11 07:40:34.293318 login[1618]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 11 07:40:34.301304 systemd-logind[1532]: New session 2 of user core. Jul 11 07:40:34.317400 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 11 07:40:34.525065 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 11 07:40:34.556087 coreos-metadata[1517]: Jul 11 07:40:34.555 WARN failed to locate config-drive, using the metadata service API instead Jul 11 07:40:34.661104 coreos-metadata[1517]: Jul 11 07:40:34.660 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jul 11 07:40:34.743736 systemd[1]: Started sshd@1-172.24.4.223:22-172.24.4.1:35464.service - OpenSSH per-connection server daemon (172.24.4.1:35464). Jul 11 07:40:34.851267 coreos-metadata[1517]: Jul 11 07:40:34.850 INFO Fetch successful Jul 11 07:40:34.851719 coreos-metadata[1517]: Jul 11 07:40:34.851 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 11 07:40:34.865841 coreos-metadata[1517]: Jul 11 07:40:34.865 INFO Fetch successful Jul 11 07:40:34.865841 coreos-metadata[1517]: Jul 11 07:40:34.865 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jul 11 07:40:34.879686 coreos-metadata[1517]: Jul 11 07:40:34.879 INFO Fetch successful Jul 11 07:40:34.879918 coreos-metadata[1517]: Jul 11 07:40:34.879 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jul 11 07:40:34.894808 coreos-metadata[1517]: Jul 11 07:40:34.894 INFO Fetch successful Jul 11 07:40:34.894808 coreos-metadata[1517]: Jul 11 07:40:34.894 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jul 11 07:40:34.909261 coreos-metadata[1517]: Jul 11 07:40:34.909 INFO Fetch successful Jul 11 07:40:34.909261 coreos-metadata[1517]: Jul 11 07:40:34.909 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jul 11 07:40:34.923682 coreos-metadata[1517]: Jul 11 07:40:34.923 INFO Fetch successful Jul 11 07:40:34.982090 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 11 07:40:34.992066 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 11 07:40:34.995835 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 11 07:40:35.017742 coreos-metadata[1588]: Jul 11 07:40:35.017 WARN failed to locate config-drive, using the metadata service API instead Jul 11 07:40:35.057797 coreos-metadata[1588]: Jul 11 07:40:35.057 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jul 11 07:40:35.073286 coreos-metadata[1588]: Jul 11 07:40:35.073 INFO Fetch successful Jul 11 07:40:35.073286 coreos-metadata[1588]: Jul 11 07:40:35.073 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 11 07:40:35.086646 coreos-metadata[1588]: Jul 11 07:40:35.086 INFO Fetch successful Jul 11 07:40:35.094677 unknown[1588]: wrote ssh authorized keys file for user: core Jul 11 07:40:35.145563 update-ssh-keys[1720]: Updated "/home/core/.ssh/authorized_keys" Jul 11 07:40:35.147151 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 11 07:40:35.153811 systemd[1]: Finished sshkeys.service. Jul 11 07:40:35.155958 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 11 07:40:35.156538 systemd[1]: Startup finished in 3.794s (kernel) + 16.551s (initrd) + 11.412s (userspace) = 31.757s. 
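coreos-metadata cannot find the config-2 drive (hence the repeated "Can't lookup blockdev" kernel lines), so it falls back to OpenStack's link-local metadata service at 169.254.169.254 and walks the meta-data endpoints one by one. The agent itself is not written in Go, but the probe-then-fallback flow looks roughly like this sketch, with endpoint paths copied from the log:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os"
        "time"
    )

    // fetch pulls one path from the link-local metadata service.
    func fetch(path string) (string, error) {
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get("http://169.254.169.254" + path)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        return string(body), err
    }

    func main() {
        // Prefer the config drive; only fall back to the API if it is absent.
        if _, err := os.Stat("/dev/disk/by-label/config-2"); err == nil {
            fmt.Println("config-drive present, reading metadata from it")
            return
        }
        fmt.Println("failed to locate config-drive, using the metadata service API instead")
        host, err := fetch("/latest/meta-data/hostname")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("hostname:", host)
    }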
Jul 11 07:40:36.828149 sshd[1710]: Accepted publickey for core from 172.24.4.1 port 35464 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY Jul 11 07:40:36.831097 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:40:36.844160 systemd-logind[1532]: New session 4 of user core. Jul 11 07:40:36.861302 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 11 07:40:37.630034 sshd[1724]: Connection closed by 172.24.4.1 port 35464 Jul 11 07:40:37.631119 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Jul 11 07:40:37.646192 systemd[1]: sshd@1-172.24.4.223:22-172.24.4.1:35464.service: Deactivated successfully. Jul 11 07:40:37.650380 systemd[1]: session-4.scope: Deactivated successfully. Jul 11 07:40:37.652689 systemd-logind[1532]: Session 4 logged out. Waiting for processes to exit. Jul 11 07:40:37.660310 systemd[1]: Started sshd@2-172.24.4.223:22-172.24.4.1:35476.service - OpenSSH per-connection server daemon (172.24.4.1:35476). Jul 11 07:40:37.662888 systemd-logind[1532]: Removed session 4. Jul 11 07:40:39.029906 sshd[1730]: Accepted publickey for core from 172.24.4.1 port 35476 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY Jul 11 07:40:39.033927 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:40:39.048097 systemd-logind[1532]: New session 5 of user core. Jul 11 07:40:39.059381 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 11 07:40:39.816257 sshd[1733]: Connection closed by 172.24.4.1 port 35476 Jul 11 07:40:39.817549 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Jul 11 07:40:39.835035 systemd[1]: sshd@2-172.24.4.223:22-172.24.4.1:35476.service: Deactivated successfully. Jul 11 07:40:39.839787 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 07:40:39.842163 systemd-logind[1532]: Session 5 logged out. Waiting for processes to exit. Jul 11 07:40:39.849205 systemd[1]: Started sshd@3-172.24.4.223:22-172.24.4.1:35484.service - OpenSSH per-connection server daemon (172.24.4.1:35484). Jul 11 07:40:39.852479 systemd-logind[1532]: Removed session 5. Jul 11 07:40:41.331478 sshd[1739]: Accepted publickey for core from 172.24.4.1 port 35484 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY Jul 11 07:40:41.334454 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:40:41.346480 systemd-logind[1532]: New session 6 of user core. Jul 11 07:40:41.355279 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 11 07:40:42.118434 sshd[1742]: Connection closed by 172.24.4.1 port 35484 Jul 11 07:40:42.119941 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Jul 11 07:40:42.137392 systemd[1]: sshd@3-172.24.4.223:22-172.24.4.1:35484.service: Deactivated successfully. Jul 11 07:40:42.141516 systemd[1]: session-6.scope: Deactivated successfully. Jul 11 07:40:42.143906 systemd-logind[1532]: Session 6 logged out. Waiting for processes to exit. Jul 11 07:40:42.151350 systemd[1]: Started sshd@4-172.24.4.223:22-172.24.4.1:35492.service - OpenSSH per-connection server daemon (172.24.4.1:35492). Jul 11 07:40:42.154128 systemd-logind[1532]: Removed session 6. Jul 11 07:40:43.332901 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 11 07:40:43.337323 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 11 07:40:43.635180 sshd[1748]: Accepted publickey for core from 172.24.4.1 port 35492 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY Jul 11 07:40:43.636703 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:40:43.651099 systemd-logind[1532]: New session 7 of user core. Jul 11 07:40:43.659323 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 11 07:40:43.764955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 07:40:43.777271 (kubelet)[1759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 07:40:43.950702 kubelet[1759]: E0711 07:40:43.950414 1759 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 07:40:43.959156 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 07:40:43.959503 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 07:40:43.960298 systemd[1]: kubelet.service: Consumed 385ms CPU time, 108.8M memory peak. Jul 11 07:40:44.089544 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 11 07:40:44.090269 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 07:40:44.114240 sudo[1767]: pam_unix(sudo:session): session closed for user root Jul 11 07:40:44.323220 sshd[1754]: Connection closed by 172.24.4.1 port 35492 Jul 11 07:40:44.322868 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Jul 11 07:40:44.339730 systemd[1]: sshd@4-172.24.4.223:22-172.24.4.1:35492.service: Deactivated successfully. Jul 11 07:40:44.344869 systemd[1]: session-7.scope: Deactivated successfully. Jul 11 07:40:44.347707 systemd-logind[1532]: Session 7 logged out. Waiting for processes to exit. Jul 11 07:40:44.354445 systemd[1]: Started sshd@5-172.24.4.223:22-172.24.4.1:37456.service - OpenSSH per-connection server daemon (172.24.4.1:37456). Jul 11 07:40:44.358176 systemd-logind[1532]: Removed session 7. Jul 11 07:40:45.760648 sshd[1773]: Accepted publickey for core from 172.24.4.1 port 37456 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY Jul 11 07:40:45.765274 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:40:45.776874 systemd-logind[1532]: New session 8 of user core. Jul 11 07:40:45.790270 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 11 07:40:46.327217 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 11 07:40:46.327870 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 07:40:46.397644 sudo[1778]: pam_unix(sudo:session): session closed for user root Jul 11 07:40:46.411170 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 11 07:40:46.412621 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 07:40:46.435786 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jul 11 07:40:46.526780 augenrules[1800]: No rules Jul 11 07:40:46.537110 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 07:40:46.537539 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 11 07:40:46.539401 sudo[1777]: pam_unix(sudo:session): session closed for user root Jul 11 07:40:46.772156 sshd[1776]: Connection closed by 172.24.4.1 port 37456 Jul 11 07:40:46.773422 sshd-session[1773]: pam_unix(sshd:session): session closed for user core Jul 11 07:40:46.788690 systemd[1]: sshd@5-172.24.4.223:22-172.24.4.1:37456.service: Deactivated successfully. Jul 11 07:40:46.793218 systemd[1]: session-8.scope: Deactivated successfully. Jul 11 07:40:46.796427 systemd-logind[1532]: Session 8 logged out. Waiting for processes to exit. Jul 11 07:40:46.803602 systemd[1]: Started sshd@6-172.24.4.223:22-172.24.4.1:37472.service - OpenSSH per-connection server daemon (172.24.4.1:37472). Jul 11 07:40:46.805830 systemd-logind[1532]: Removed session 8. Jul 11 07:40:48.202343 sshd[1809]: Accepted publickey for core from 172.24.4.1 port 37472 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY Jul 11 07:40:48.205353 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:40:48.219090 systemd-logind[1532]: New session 9 of user core. Jul 11 07:40:48.226285 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 11 07:40:48.767842 sudo[1813]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 07:40:48.769377 sudo[1813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 07:40:49.961326 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 11 07:40:50.010411 (dockerd)[1831]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 11 07:40:50.581561 dockerd[1831]: time="2025-07-11T07:40:50.581430514Z" level=info msg="Starting up" Jul 11 07:40:50.583884 dockerd[1831]: time="2025-07-11T07:40:50.583842668Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 11 07:40:50.611350 dockerd[1831]: time="2025-07-11T07:40:50.611225815Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 11 07:40:50.652635 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3438629306-merged.mount: Deactivated successfully. Jul 11 07:40:50.682692 systemd[1]: var-lib-docker-metacopy\x2dcheck127098438-merged.mount: Deactivated successfully. Jul 11 07:40:50.725607 dockerd[1831]: time="2025-07-11T07:40:50.725568786Z" level=info msg="Loading containers: start." Jul 11 07:40:50.767082 kernel: Initializing XFRM netlink socket Jul 11 07:40:51.181889 systemd-networkd[1457]: docker0: Link UP Jul 11 07:40:51.190239 dockerd[1831]: time="2025-07-11T07:40:51.190128217Z" level=info msg="Loading containers: done." 
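dockerd comes up below and ends with "API listen on /run/docker.sock". A quick liveness probe against that socket can use the daemon's /_ping endpoint over HTTP-on-Unix-socket, standard library only:

    package main

    import (
        "context"
        "fmt"
        "net"
        "net/http"
        "os"
    )

    func main() {
        // Route all HTTP traffic over the daemon's Unix socket.
        client := &http.Client{
            Transport: &http.Transport{
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    var d net.Dialer
                    return d.DialContext(ctx, "unix", "/run/docker.sock")
                },
            },
        }
        resp, err := client.Get("http://docker/_ping") // host part is ignored over a Unix socket
        if err != nil {
            fmt.Fprintln(os.Stderr, "daemon not reachable:", err)
            os.Exit(1)
        }
        defer resp.Body.Close()
        fmt.Println("docker daemon is up, API version:", resp.Header.Get("Api-Version"))
    }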
Jul 11 07:40:51.225009 dockerd[1831]: time="2025-07-11T07:40:51.224904530Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 11 07:40:51.225222 dockerd[1831]: time="2025-07-11T07:40:51.225108061Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 11 07:40:51.226193 dockerd[1831]: time="2025-07-11T07:40:51.225321301Z" level=info msg="Initializing buildkit" Jul 11 07:40:51.271019 dockerd[1831]: time="2025-07-11T07:40:51.270939158Z" level=info msg="Completed buildkit initialization" Jul 11 07:40:51.284144 dockerd[1831]: time="2025-07-11T07:40:51.284023267Z" level=info msg="Daemon has completed initialization" Jul 11 07:40:51.284451 dockerd[1831]: time="2025-07-11T07:40:51.284230245Z" level=info msg="API listen on /run/docker.sock" Jul 11 07:40:51.285105 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 11 07:40:51.644065 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2889312277-merged.mount: Deactivated successfully. Jul 11 07:40:53.010309 containerd[1563]: time="2025-07-11T07:40:53.009840487Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 11 07:40:53.771600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3191870067.mount: Deactivated successfully. Jul 11 07:40:54.209775 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 11 07:40:54.214182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 07:40:54.487127 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 07:40:54.495481 (kubelet)[2077]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 07:40:54.547858 kubelet[2077]: E0711 07:40:54.547795 2077 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 07:40:54.550543 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 07:40:54.550684 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 07:40:54.551448 systemd[1]: kubelet.service: Consumed 272ms CPU time, 110.2M memory peak. 
Jul 11 07:40:56.297727 containerd[1563]: time="2025-07-11T07:40:56.297562799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:40:56.304267 containerd[1563]: time="2025-07-11T07:40:56.304127461Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077752" Jul 11 07:40:56.314960 containerd[1563]: time="2025-07-11T07:40:56.314833089Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:40:56.327194 containerd[1563]: time="2025-07-11T07:40:56.327077203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:40:56.330910 containerd[1563]: time="2025-07-11T07:40:56.330236338Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 3.318873396s" Jul 11 07:40:56.330910 containerd[1563]: time="2025-07-11T07:40:56.330347967Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 11 07:40:56.333492 containerd[1563]: time="2025-07-11T07:40:56.333401414Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 11 07:40:59.265627 containerd[1563]: time="2025-07-11T07:40:59.265050672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:40:59.268720 containerd[1563]: time="2025-07-11T07:40:59.268670721Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713302" Jul 11 07:40:59.270024 containerd[1563]: time="2025-07-11T07:40:59.269942897Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:40:59.274829 containerd[1563]: time="2025-07-11T07:40:59.274767315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:40:59.275952 containerd[1563]: time="2025-07-11T07:40:59.275432693Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 2.941935519s" Jul 11 07:40:59.275952 containerd[1563]: time="2025-07-11T07:40:59.275493938Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 11 
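The pull timings above imply a fairly steady registry throughput: 28,074,544 bytes in ~3.32 s is about 8.5 MB/s for kube-apiserver, and 26,315,128 bytes in ~2.94 s is about 8.9 MB/s for kube-controller-manager. Checked in Go, with sizes and durations copied from the two PullImage results:

    package main

    import "fmt"

    func main() {
        pulls := []struct {
            name    string
            bytes   float64
            seconds float64
        }{
            {"kube-apiserver:v1.31.10", 28074544, 3.318873396},
            {"kube-controller-manager:v1.31.10", 26315128, 2.941935519},
        }
        for _, p := range pulls {
            fmt.Printf("%s: %.1f MB/s\n", p.name, p.bytes/p.seconds/1e6)
        }
    }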
07:40:59.288678 containerd[1563]: time="2025-07-11T07:40:59.288500522Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 11 07:41:01.109040 containerd[1563]: time="2025-07-11T07:41:01.108577442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:41:01.113253 containerd[1563]: time="2025-07-11T07:41:01.113141815Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783679" Jul 11 07:41:01.114706 containerd[1563]: time="2025-07-11T07:41:01.114551335Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:41:01.126644 containerd[1563]: time="2025-07-11T07:41:01.126462125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:41:01.130108 containerd[1563]: time="2025-07-11T07:41:01.130042809Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.839621117s" Jul 11 07:41:01.131962 containerd[1563]: time="2025-07-11T07:41:01.130512075Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 11 07:41:01.135433 containerd[1563]: time="2025-07-11T07:41:01.135356264Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 11 07:41:02.716329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount405480136.mount: Deactivated successfully. 
Jul 11 07:41:03.539632 containerd[1563]: time="2025-07-11T07:41:03.539530656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:41:03.541029 containerd[1563]: time="2025-07-11T07:41:03.540962398Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383951" Jul 11 07:41:03.542422 containerd[1563]: time="2025-07-11T07:41:03.542358878Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:41:03.545285 containerd[1563]: time="2025-07-11T07:41:03.545235877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:41:03.546217 containerd[1563]: time="2025-07-11T07:41:03.545874307Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 2.410121353s" Jul 11 07:41:03.546217 containerd[1563]: time="2025-07-11T07:41:03.545940473Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 11 07:41:03.547230 containerd[1563]: time="2025-07-11T07:41:03.547191548Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 11 07:41:04.557534 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 11 07:41:04.562165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3645409060.mount: Deactivated successfully. Jul 11 07:41:04.571343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 07:41:05.263119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 07:41:05.272239 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 07:41:05.362888 kubelet[2149]: E0711 07:41:05.362711 2149 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 07:41:05.366964 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 07:41:05.367254 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 07:41:05.368060 systemd[1]: kubelet.service: Consumed 512ms CPU time, 110.6M memory peak. 
Jul 11 07:41:06.459795 containerd[1563]: time="2025-07-11T07:41:06.459148269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:41:06.464683 containerd[1563]: time="2025-07-11T07:41:06.461620970Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jul 11 07:41:06.464898 containerd[1563]: time="2025-07-11T07:41:06.464795788Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:41:06.470211 containerd[1563]: time="2025-07-11T07:41:06.470099878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:41:06.471687 containerd[1563]: time="2025-07-11T07:41:06.471629074Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.924399239s" Jul 11 07:41:06.471831 containerd[1563]: time="2025-07-11T07:41:06.471693378Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 11 07:41:06.475077 containerd[1563]: time="2025-07-11T07:41:06.475025176Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 11 07:41:07.051674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount751174869.mount: Deactivated successfully. 
Jul 11 07:41:07.069114 containerd[1563]: time="2025-07-11T07:41:07.068845664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 07:41:07.071602 containerd[1563]: time="2025-07-11T07:41:07.071400497Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jul 11 07:41:07.072657 containerd[1563]: time="2025-07-11T07:41:07.072503653Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 07:41:07.079494 containerd[1563]: time="2025-07-11T07:41:07.079318387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 07:41:07.082019 containerd[1563]: time="2025-07-11T07:41:07.081097348Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 606.01437ms" Jul 11 07:41:07.082019 containerd[1563]: time="2025-07-11T07:41:07.081381865Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 11 07:41:07.084278 containerd[1563]: time="2025-07-11T07:41:07.084214803Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 11 07:41:07.715822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount57075837.mount: Deactivated successfully. 
Jul 11 07:41:11.838063 containerd[1563]: time="2025-07-11T07:41:11.837855345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:41:11.841543 containerd[1563]: time="2025-07-11T07:41:11.841517730Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021" Jul 11 07:41:11.842866 containerd[1563]: time="2025-07-11T07:41:11.842782802Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:41:11.850168 containerd[1563]: time="2025-07-11T07:41:11.850100909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:41:11.853489 containerd[1563]: time="2025-07-11T07:41:11.852936031Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.768365554s" Jul 11 07:41:11.853489 containerd[1563]: time="2025-07-11T07:41:11.853060987Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 11 07:41:12.418861 update_engine[1535]: I20250711 07:41:12.418040 1535 update_attempter.cc:509] Updating boot flags... Jul 11 07:41:15.570296 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 11 07:41:15.605566 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 07:41:16.094164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 07:41:16.101453 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 07:41:16.214284 kubelet[2298]: E0711 07:41:16.214214 2298 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 07:41:16.219170 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 07:41:16.219321 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 07:41:16.220134 systemd[1]: kubelet.service: Consumed 541ms CPU time, 110.7M memory peak. Jul 11 07:41:16.408896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 07:41:16.411284 systemd[1]: kubelet.service: Consumed 541ms CPU time, 110.7M memory peak. Jul 11 07:41:16.426221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 07:41:16.496941 systemd[1]: Reload requested from client PID 2312 ('systemctl') (unit session-9.scope)... Jul 11 07:41:16.497055 systemd[1]: Reloading... Jul 11 07:41:16.660021 zram_generator::config[2357]: No configuration found. 
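kubelet is now on its fourth scheduled restart; the roughly ten-second gap between each failure and the next start attempt is consistent with a Restart=on-failure unit and a ~10 s RestartSec. A toy supervisor loop showing how that counter behavior arises (illustrative only, nothing like systemd's actual implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // supervise mimics Restart=on-failure with a fixed RestartSec: run the
    // command, and on a non-zero exit wait and try again, bumping the
    // counter like the "restart counter is at 4" line above.
    func supervise(name string, restartSec time.Duration, maxAttempts int) error {
        for counter := 1; counter <= maxAttempts; counter++ {
            if err := exec.Command(name).Run(); err == nil {
                return nil // clean exit: no restart is scheduled
            }
            fmt.Printf("Scheduled restart job, restart counter is at %d.\n", counter)
            time.Sleep(restartSec)
        }
        return fmt.Errorf("%s still failing after %d attempts", name, maxAttempts)
    }

    func main() {
        // /bin/false exits 1 every time, like kubelet while its config is missing.
        if err := supervise("/bin/false", 2*time.Second, 3); err != nil {
            fmt.Println(err)
        }
    }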
Jul 11 07:41:16.828957 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 07:41:16.980077 systemd[1]: Reloading finished in 482 ms. Jul 11 07:41:17.055823 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 11 07:41:17.055915 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 11 07:41:17.056651 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 07:41:17.056704 systemd[1]: kubelet.service: Consumed 239ms CPU time, 98.4M memory peak. Jul 11 07:41:17.059314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 07:41:17.548299 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 07:41:17.559248 (kubelet)[2423]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 07:41:17.628044 kubelet[2423]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 07:41:17.629073 kubelet[2423]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 11 07:41:17.629073 kubelet[2423]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 07:41:17.629073 kubelet[2423]: I0711 07:41:17.628599 2423 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 07:41:19.041960 kubelet[2423]: I0711 07:41:19.041678 2423 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 11 07:41:19.044660 kubelet[2423]: I0711 07:41:19.043092 2423 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 07:41:19.044660 kubelet[2423]: I0711 07:41:19.044209 2423 server.go:934] "Client rotation is on, will bootstrap in background" Jul 11 07:41:19.256834 kubelet[2423]: E0711 07:41:19.256680 2423 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.223:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.223:6443: connect: connection refused" logger="UnhandledError" Jul 11 07:41:19.263455 kubelet[2423]: I0711 07:41:19.263359 2423 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 07:41:19.288337 kubelet[2423]: I0711 07:41:19.288244 2423 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 11 07:41:19.299181 kubelet[2423]: I0711 07:41:19.298279 2423 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 07:41:19.301046 kubelet[2423]: I0711 07:41:19.299850 2423 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 11 07:41:19.301046 kubelet[2423]: I0711 07:41:19.300212 2423 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 07:41:19.301046 kubelet[2423]: I0711 07:41:19.300244 2423 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4392-0-0-n-cdb6f4f5a9.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 07:41:19.301046 kubelet[2423]: I0711 07:41:19.300633 2423 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 07:41:19.303243 kubelet[2423]: I0711 07:41:19.300648 2423 container_manager_linux.go:300] "Creating device plugin manager" Jul 11 07:41:19.303243 kubelet[2423]: I0711 07:41:19.300957 2423 state_mem.go:36] "Initialized new in-memory state store" Jul 11 07:41:19.307932 kubelet[2423]: I0711 07:41:19.307525 2423 kubelet.go:408] "Attempting to sync node with API server" Jul 11 07:41:19.307932 kubelet[2423]: I0711 07:41:19.307614 2423 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 07:41:19.307932 kubelet[2423]: I0711 07:41:19.307713 2423 kubelet.go:314] "Adding apiserver pod source" Jul 11 07:41:19.307932 kubelet[2423]: I0711 07:41:19.307815 2423 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 07:41:19.312737 kubelet[2423]: W0711 07:41:19.312489 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4392-0-0-n-cdb6f4f5a9.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.223:6443: connect: connection refused Jul 11 07:41:19.312737 kubelet[2423]: E0711 07:41:19.312571 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.24.4.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4392-0-0-n-cdb6f4f5a9.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.223:6443: connect: connection refused" logger="UnhandledError" Jul 11 07:41:19.314654 kubelet[2423]: W0711 07:41:19.314621 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.223:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.223:6443: connect: connection refused Jul 11 07:41:19.314948 kubelet[2423]: E0711 07:41:19.314719 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.223:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.223:6443: connect: connection refused" logger="UnhandledError" Jul 11 07:41:19.316835 kubelet[2423]: I0711 07:41:19.315675 2423 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 11 07:41:19.316835 kubelet[2423]: I0711 07:41:19.316647 2423 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 07:41:19.317722 kubelet[2423]: W0711 07:41:19.317708 2423 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 11 07:41:19.322443 kubelet[2423]: I0711 07:41:19.322422 2423 server.go:1274] "Started kubelet" Jul 11 07:41:19.329485 kubelet[2423]: I0711 07:41:19.329377 2423 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 07:41:19.342966 kubelet[2423]: I0711 07:41:19.331846 2423 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 07:41:19.343693 kubelet[2423]: I0711 07:41:19.343667 2423 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 07:41:19.348989 kubelet[2423]: I0711 07:41:19.348921 2423 server.go:449] "Adding debug handlers to kubelet server" Jul 11 07:41:19.350302 kubelet[2423]: I0711 07:41:19.350281 2423 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 07:41:19.352800 kubelet[2423]: I0711 07:41:19.352776 2423 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 07:41:19.356057 kubelet[2423]: E0711 07:41:19.353418 2423 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.223:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.223:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4392-0-0-n-cdb6f4f5a9.novalocal.18512283c275635d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4392-0-0-n-cdb6f4f5a9.novalocal,UID:ci-4392-0-0-n-cdb6f4f5a9.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4392-0-0-n-cdb6f4f5a9.novalocal,},FirstTimestamp:2025-07-11 07:41:19.322366813 +0000 UTC m=+1.752544111,LastTimestamp:2025-07-11 07:41:19.322366813 +0000 UTC m=+1.752544111,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4392-0-0-n-cdb6f4f5a9.novalocal,}" Jul 11 
07:41:19.360218 kubelet[2423]: E0711 07:41:19.360165 2423 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4392-0-0-n-cdb6f4f5a9.novalocal\" not found" Jul 11 07:41:19.360389 kubelet[2423]: I0711 07:41:19.360368 2423 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 11 07:41:19.361259 kubelet[2423]: I0711 07:41:19.360641 2423 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 11 07:41:19.361259 kubelet[2423]: I0711 07:41:19.360771 2423 reconciler.go:26] "Reconciler: start to sync state" Jul 11 07:41:19.361485 kubelet[2423]: W0711 07:41:19.361416 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.223:6443: connect: connection refused Jul 11 07:41:19.361578 kubelet[2423]: E0711 07:41:19.361499 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.223:6443: connect: connection refused" logger="UnhandledError" Jul 11 07:41:19.361737 kubelet[2423]: I0711 07:41:19.361712 2423 factory.go:221] Registration of the systemd container factory successfully Jul 11 07:41:19.361839 kubelet[2423]: I0711 07:41:19.361813 2423 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 07:41:19.363331 kubelet[2423]: E0711 07:41:19.363273 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4392-0-0-n-cdb6f4f5a9.novalocal?timeout=10s\": dial tcp 172.24.4.223:6443: connect: connection refused" interval="200ms" Jul 11 07:41:19.363472 kubelet[2423]: I0711 07:41:19.363438 2423 factory.go:221] Registration of the containerd container factory successfully Jul 11 07:41:19.385272 kubelet[2423]: E0711 07:41:19.385234 2423 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 07:41:19.388094 kubelet[2423]: I0711 07:41:19.388060 2423 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 07:41:19.390441 kubelet[2423]: I0711 07:41:19.390104 2423 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 11 07:41:19.390441 kubelet[2423]: I0711 07:41:19.390155 2423 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 11 07:41:19.390441 kubelet[2423]: I0711 07:41:19.390194 2423 kubelet.go:2321] "Starting kubelet main sync loop" Jul 11 07:41:19.390441 kubelet[2423]: E0711 07:41:19.390267 2423 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 07:41:19.395298 kubelet[2423]: W0711 07:41:19.395255 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.223:6443: connect: connection refused Jul 11 07:41:19.395427 kubelet[2423]: E0711 07:41:19.395298 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.223:6443: connect: connection refused" logger="UnhandledError" Jul 11 07:41:19.404838 kubelet[2423]: I0711 07:41:19.404812 2423 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 11 07:41:19.404838 kubelet[2423]: I0711 07:41:19.404833 2423 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 11 07:41:19.405022 kubelet[2423]: I0711 07:41:19.404860 2423 state_mem.go:36] "Initialized new in-memory state store" Jul 11 07:41:19.411207 kubelet[2423]: I0711 07:41:19.411142 2423 policy_none.go:49] "None policy: Start" Jul 11 07:41:19.412156 kubelet[2423]: I0711 07:41:19.412091 2423 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 11 07:41:19.412156 kubelet[2423]: I0711 07:41:19.412120 2423 state_mem.go:35] "Initializing new in-memory state store" Jul 11 07:41:19.427466 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 11 07:41:19.453917 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 11 07:41:19.457769 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 11 07:41:19.462241 kubelet[2423]: E0711 07:41:19.461767 2423 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4392-0-0-n-cdb6f4f5a9.novalocal\" not found" Jul 11 07:41:19.462241 kubelet[2423]: E0711 07:41:19.461893 2423 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.223:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.223:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4392-0-0-n-cdb6f4f5a9.novalocal.18512283c275635d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4392-0-0-n-cdb6f4f5a9.novalocal,UID:ci-4392-0-0-n-cdb6f4f5a9.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4392-0-0-n-cdb6f4f5a9.novalocal,},FirstTimestamp:2025-07-11 07:41:19.322366813 +0000 UTC m=+1.752544111,LastTimestamp:2025-07-11 07:41:19.322366813 +0000 UTC m=+1.752544111,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4392-0-0-n-cdb6f4f5a9.novalocal,}" Jul 11 07:41:19.470229 kubelet[2423]: I0711 07:41:19.470185 2423 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 07:41:19.470565 kubelet[2423]: I0711 07:41:19.470535 2423 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 07:41:19.470688 kubelet[2423]: I0711 07:41:19.470619 2423 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 07:41:19.471797 kubelet[2423]: I0711 07:41:19.471549 2423 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 07:41:19.474832 kubelet[2423]: E0711 07:41:19.474805 2423 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4392-0-0-n-cdb6f4f5a9.novalocal\" not found" Jul 11 07:41:19.511629 systemd[1]: Created slice kubepods-burstable-pod94dd2fdae141e91cb071209277979747.slice - libcontainer container kubepods-burstable-pod94dd2fdae141e91cb071209277979747.slice. Jul 11 07:41:19.536743 systemd[1]: Created slice kubepods-burstable-pod9d801a80cb49e408d2efc270d30c5fd8.slice - libcontainer container kubepods-burstable-pod9d801a80cb49e408d2efc270d30c5fd8.slice. Jul 11 07:41:19.553645 systemd[1]: Created slice kubepods-burstable-pod1b42591ea292e73e5775e231f0503337.slice - libcontainer container kubepods-burstable-pod1b42591ea292e73e5775e231f0503337.slice. 
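
The kubepods.slice, kubepods-burstable.slice and kubepods-besteffort.slice units created above are the kubelet's QoS hierarchy, delegated to systemd because the node runs the systemd cgroup driver on cgroup v2: Guaranteed pods land directly under kubepods.slice, while Burstable and BestEffort pods get per-pod slices under their QoS sub-slice (the pod-specific slices in the next lines). The tree can be inspected with the usual systemd tools:

    # walk the QoS slice hierarchy the kubelet just created
    systemctl status kubepods.slice
    systemd-cgls /sys/fs/cgroup/kubepods.slice
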
Jul 11 07:41:19.562415 kubelet[2423]: I0711 07:41:19.562083 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d801a80cb49e408d2efc270d30c5fd8-k8s-certs\") pod \"kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal\" (UID: \"9d801a80cb49e408d2efc270d30c5fd8\") " pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:19.562415 kubelet[2423]: I0711 07:41:19.562127 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9d801a80cb49e408d2efc270d30c5fd8-flexvolume-dir\") pod \"kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal\" (UID: \"9d801a80cb49e408d2efc270d30c5fd8\") " pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:19.562415 kubelet[2423]: I0711 07:41:19.562157 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9d801a80cb49e408d2efc270d30c5fd8-kubeconfig\") pod \"kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal\" (UID: \"9d801a80cb49e408d2efc270d30c5fd8\") " pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:19.562415 kubelet[2423]: I0711 07:41:19.562178 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d801a80cb49e408d2efc270d30c5fd8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal\" (UID: \"9d801a80cb49e408d2efc270d30c5fd8\") " pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:19.562645 kubelet[2423]: I0711 07:41:19.562199 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b42591ea292e73e5775e231f0503337-kubeconfig\") pod \"kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal\" (UID: \"1b42591ea292e73e5775e231f0503337\") " pod="kube-system/kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:19.562645 kubelet[2423]: I0711 07:41:19.562217 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/94dd2fdae141e91cb071209277979747-ca-certs\") pod \"kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal\" (UID: \"94dd2fdae141e91cb071209277979747\") " pod="kube-system/kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:19.562645 kubelet[2423]: I0711 07:41:19.562236 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/94dd2fdae141e91cb071209277979747-k8s-certs\") pod \"kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal\" (UID: \"94dd2fdae141e91cb071209277979747\") " pod="kube-system/kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:19.562645 kubelet[2423]: I0711 07:41:19.562254 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/94dd2fdae141e91cb071209277979747-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal\" (UID: \"94dd2fdae141e91cb071209277979747\") " 
pod="kube-system/kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:19.562645 kubelet[2423]: I0711 07:41:19.562272 2423 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d801a80cb49e408d2efc270d30c5fd8-ca-certs\") pod \"kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal\" (UID: \"9d801a80cb49e408d2efc270d30c5fd8\") " pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:19.564041 kubelet[2423]: E0711 07:41:19.564009 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4392-0-0-n-cdb6f4f5a9.novalocal?timeout=10s\": dial tcp 172.24.4.223:6443: connect: connection refused" interval="400ms" Jul 11 07:41:19.573198 kubelet[2423]: I0711 07:41:19.572837 2423 kubelet_node_status.go:72] "Attempting to register node" node="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:19.573516 kubelet[2423]: E0711 07:41:19.573281 2423 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.223:6443/api/v1/nodes\": dial tcp 172.24.4.223:6443: connect: connection refused" node="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:19.777702 kubelet[2423]: I0711 07:41:19.777617 2423 kubelet_node_status.go:72] "Attempting to register node" node="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:19.778907 kubelet[2423]: E0711 07:41:19.778798 2423 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.223:6443/api/v1/nodes\": dial tcp 172.24.4.223:6443: connect: connection refused" node="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:19.833506 containerd[1563]: time="2025-07-11T07:41:19.833216073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal,Uid:94dd2fdae141e91cb071209277979747,Namespace:kube-system,Attempt:0,}" Jul 11 07:41:19.850230 containerd[1563]: time="2025-07-11T07:41:19.850065869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal,Uid:9d801a80cb49e408d2efc270d30c5fd8,Namespace:kube-system,Attempt:0,}" Jul 11 07:41:19.871162 containerd[1563]: time="2025-07-11T07:41:19.870445353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal,Uid:1b42591ea292e73e5775e231f0503337,Namespace:kube-system,Attempt:0,}" Jul 11 07:41:19.967744 kubelet[2423]: E0711 07:41:19.964747 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4392-0-0-n-cdb6f4f5a9.novalocal?timeout=10s\": dial tcp 172.24.4.223:6443: connect: connection refused" interval="800ms" Jul 11 07:41:19.971167 containerd[1563]: time="2025-07-11T07:41:19.971081747Z" level=info msg="connecting to shim 47ef3cbdf150744ea59841c2e7711cebede6c21430cbff2b6e0881f5c989f4e7" address="unix:///run/containerd/s/bf6993fd81bb4e5767d7dc7158403f17a1d5ffd45faccbfc31e3f4887b350cdb" namespace=k8s.io protocol=ttrpc version=3 Jul 11 07:41:20.050261 containerd[1563]: time="2025-07-11T07:41:20.050170640Z" level=info msg="connecting to shim 79c49b98ecf271a131bb6bf12e4c7a943626dffb67cb25bbedca7b6b5a740ca5" address="unix:///run/containerd/s/ae96405862b11ec30ecf5d51a3df2c8e24b4f00f4b2ee133e08083c92e7d68c0" namespace=k8s.io protocol=ttrpc version=3 
Jul 11 07:41:20.061427 containerd[1563]: time="2025-07-11T07:41:20.061356789Z" level=info msg="connecting to shim a4c473d87f8442f19bdae94bef811290169de76c1ad853d95101f86534125f68" address="unix:///run/containerd/s/7fe16dd4310d91485b4c30a99a68643dca480a6dc08544d1724182e5168bb324" namespace=k8s.io protocol=ttrpc version=3 Jul 11 07:41:20.157285 systemd[1]: Started cri-containerd-79c49b98ecf271a131bb6bf12e4c7a943626dffb67cb25bbedca7b6b5a740ca5.scope - libcontainer container 79c49b98ecf271a131bb6bf12e4c7a943626dffb67cb25bbedca7b6b5a740ca5. Jul 11 07:41:20.163260 systemd[1]: Started cri-containerd-47ef3cbdf150744ea59841c2e7711cebede6c21430cbff2b6e0881f5c989f4e7.scope - libcontainer container 47ef3cbdf150744ea59841c2e7711cebede6c21430cbff2b6e0881f5c989f4e7. Jul 11 07:41:20.176232 systemd[1]: Started cri-containerd-a4c473d87f8442f19bdae94bef811290169de76c1ad853d95101f86534125f68.scope - libcontainer container a4c473d87f8442f19bdae94bef811290169de76c1ad853d95101f86534125f68. Jul 11 07:41:20.182200 kubelet[2423]: I0711 07:41:20.181710 2423 kubelet_node_status.go:72] "Attempting to register node" node="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:20.184931 kubelet[2423]: E0711 07:41:20.184783 2423 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.223:6443/api/v1/nodes\": dial tcp 172.24.4.223:6443: connect: connection refused" node="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:20.343153 kubelet[2423]: W0711 07:41:20.343080 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4392-0-0-n-cdb6f4f5a9.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.223:6443: connect: connection refused Jul 11 07:41:20.343343 kubelet[2423]: E0711 07:41:20.343167 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4392-0-0-n-cdb6f4f5a9.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.223:6443: connect: connection refused" logger="UnhandledError" Jul 11 07:41:20.405263 kubelet[2423]: W0711 07:41:20.404930 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.223:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.223:6443: connect: connection refused Jul 11 07:41:20.405263 kubelet[2423]: E0711 07:41:20.405195 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.223:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.223:6443: connect: connection refused" logger="UnhandledError" Jul 11 07:41:20.542178 kubelet[2423]: W0711 07:41:20.541815 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.223:6443: connect: connection refused Jul 11 07:41:20.542457 kubelet[2423]: E0711 07:41:20.541961 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://172.24.4.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.223:6443: connect: connection refused" logger="UnhandledError" Jul 11 07:41:20.765872 kubelet[2423]: E0711 07:41:20.765759 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4392-0-0-n-cdb6f4f5a9.novalocal?timeout=10s\": dial tcp 172.24.4.223:6443: connect: connection refused" interval="1.6s" Jul 11 07:41:20.824151 containerd[1563]: time="2025-07-11T07:41:20.823958366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal,Uid:1b42591ea292e73e5775e231f0503337,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4c473d87f8442f19bdae94bef811290169de76c1ad853d95101f86534125f68\"" Jul 11 07:41:20.825320 containerd[1563]: time="2025-07-11T07:41:20.825199855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal,Uid:9d801a80cb49e408d2efc270d30c5fd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"79c49b98ecf271a131bb6bf12e4c7a943626dffb67cb25bbedca7b6b5a740ca5\"" Jul 11 07:41:20.830916 containerd[1563]: time="2025-07-11T07:41:20.830802416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal,Uid:94dd2fdae141e91cb071209277979747,Namespace:kube-system,Attempt:0,} returns sandbox id \"47ef3cbdf150744ea59841c2e7711cebede6c21430cbff2b6e0881f5c989f4e7\"" Jul 11 07:41:20.839301 containerd[1563]: time="2025-07-11T07:41:20.839207291Z" level=info msg="CreateContainer within sandbox \"79c49b98ecf271a131bb6bf12e4c7a943626dffb67cb25bbedca7b6b5a740ca5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 11 07:41:20.840816 containerd[1563]: time="2025-07-11T07:41:20.840187469Z" level=info msg="CreateContainer within sandbox \"47ef3cbdf150744ea59841c2e7711cebede6c21430cbff2b6e0881f5c989f4e7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 11 07:41:20.843332 containerd[1563]: time="2025-07-11T07:41:20.843226616Z" level=info msg="CreateContainer within sandbox \"a4c473d87f8442f19bdae94bef811290169de76c1ad853d95101f86534125f68\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 11 07:41:20.882930 containerd[1563]: time="2025-07-11T07:41:20.882787859Z" level=info msg="Container 37d035f14b17150c4fe4356e65de8b7d83bea8dda00e3df8e3b4b6fe4d0d7420: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:41:20.889549 containerd[1563]: time="2025-07-11T07:41:20.889334031Z" level=info msg="Container 14963706cc7c5d7ec8ffc5cc6a78be725808167c9818a86b00e45691b93faf95: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:41:20.896687 containerd[1563]: time="2025-07-11T07:41:20.896532101Z" level=info msg="Container 60b6e9b17893f06d8c86eaee7503f9c1409178cffa66c32798cefca75a219d61: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:41:20.909811 containerd[1563]: time="2025-07-11T07:41:20.909721295Z" level=info msg="CreateContainer within sandbox \"47ef3cbdf150744ea59841c2e7711cebede6c21430cbff2b6e0881f5c989f4e7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"14963706cc7c5d7ec8ffc5cc6a78be725808167c9818a86b00e45691b93faf95\"" Jul 11 07:41:20.918172 containerd[1563]: time="2025-07-11T07:41:20.918110982Z" level=info msg="StartContainer for \"14963706cc7c5d7ec8ffc5cc6a78be725808167c9818a86b00e45691b93faf95\"" Jul 11 
07:41:20.920182 containerd[1563]: time="2025-07-11T07:41:20.918784568Z" level=info msg="CreateContainer within sandbox \"79c49b98ecf271a131bb6bf12e4c7a943626dffb67cb25bbedca7b6b5a740ca5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"37d035f14b17150c4fe4356e65de8b7d83bea8dda00e3df8e3b4b6fe4d0d7420\"" Jul 11 07:41:20.923257 containerd[1563]: time="2025-07-11T07:41:20.919567264Z" level=info msg="connecting to shim 14963706cc7c5d7ec8ffc5cc6a78be725808167c9818a86b00e45691b93faf95" address="unix:///run/containerd/s/bf6993fd81bb4e5767d7dc7158403f17a1d5ffd45faccbfc31e3f4887b350cdb" protocol=ttrpc version=3 Jul 11 07:41:20.923257 containerd[1563]: time="2025-07-11T07:41:20.921510020Z" level=info msg="StartContainer for \"37d035f14b17150c4fe4356e65de8b7d83bea8dda00e3df8e3b4b6fe4d0d7420\"" Jul 11 07:41:20.923482 containerd[1563]: time="2025-07-11T07:41:20.923328647Z" level=info msg="connecting to shim 37d035f14b17150c4fe4356e65de8b7d83bea8dda00e3df8e3b4b6fe4d0d7420" address="unix:///run/containerd/s/ae96405862b11ec30ecf5d51a3df2c8e24b4f00f4b2ee133e08083c92e7d68c0" protocol=ttrpc version=3 Jul 11 07:41:20.939471 containerd[1563]: time="2025-07-11T07:41:20.938637321Z" level=info msg="CreateContainer within sandbox \"a4c473d87f8442f19bdae94bef811290169de76c1ad853d95101f86534125f68\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"60b6e9b17893f06d8c86eaee7503f9c1409178cffa66c32798cefca75a219d61\"" Jul 11 07:41:20.941293 containerd[1563]: time="2025-07-11T07:41:20.941226723Z" level=info msg="StartContainer for \"60b6e9b17893f06d8c86eaee7503f9c1409178cffa66c32798cefca75a219d61\"" Jul 11 07:41:20.958445 containerd[1563]: time="2025-07-11T07:41:20.958344647Z" level=info msg="connecting to shim 60b6e9b17893f06d8c86eaee7503f9c1409178cffa66c32798cefca75a219d61" address="unix:///run/containerd/s/7fe16dd4310d91485b4c30a99a68643dca480a6dc08544d1724182e5168bb324" protocol=ttrpc version=3 Jul 11 07:41:20.958889 kubelet[2423]: W0711 07:41:20.958303 2423 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.223:6443: connect: connection refused Jul 11 07:41:20.958889 kubelet[2423]: E0711 07:41:20.958825 2423 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.223:6443: connect: connection refused" logger="UnhandledError" Jul 11 07:41:20.983250 systemd[1]: Started cri-containerd-14963706cc7c5d7ec8ffc5cc6a78be725808167c9818a86b00e45691b93faf95.scope - libcontainer container 14963706cc7c5d7ec8ffc5cc6a78be725808167c9818a86b00e45691b93faf95. Jul 11 07:41:20.990085 kubelet[2423]: I0711 07:41:20.990053 2423 kubelet_node_status.go:72] "Attempting to register node" node="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:20.991113 kubelet[2423]: E0711 07:41:20.991078 2423 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.223:6443/api/v1/nodes\": dial tcp 172.24.4.223:6443: connect: connection refused" node="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:21.003135 systemd[1]: Started cri-containerd-37d035f14b17150c4fe4356e65de8b7d83bea8dda00e3df8e3b4b6fe4d0d7420.scope - libcontainer container 37d035f14b17150c4fe4356e65de8b7d83bea8dda00e3df8e3b4b6fe4d0d7420. 
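
Note the lease controller's retry interval doubling through this stretch: 200ms, 400ms, 800ms, now 1.6s, a standard exponential backoff while the control plane is unreachable. Once the apiserver answers, the node's heartbeat materializes as a Lease object; assuming the usual kubeadm admin kubeconfig path, it can be checked with:

    # node heartbeats live in the kube-node-lease namespace
    kubectl --kubeconfig /etc/kubernetes/admin.conf -n kube-node-lease get leases
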
Jul 11 07:41:21.011257 systemd[1]: Started cri-containerd-60b6e9b17893f06d8c86eaee7503f9c1409178cffa66c32798cefca75a219d61.scope - libcontainer container 60b6e9b17893f06d8c86eaee7503f9c1409178cffa66c32798cefca75a219d61. Jul 11 07:41:21.114295 containerd[1563]: time="2025-07-11T07:41:21.114055368Z" level=info msg="StartContainer for \"37d035f14b17150c4fe4356e65de8b7d83bea8dda00e3df8e3b4b6fe4d0d7420\" returns successfully" Jul 11 07:41:21.115141 containerd[1563]: time="2025-07-11T07:41:21.115074701Z" level=info msg="StartContainer for \"14963706cc7c5d7ec8ffc5cc6a78be725808167c9818a86b00e45691b93faf95\" returns successfully" Jul 11 07:41:21.144595 containerd[1563]: time="2025-07-11T07:41:21.144421571Z" level=info msg="StartContainer for \"60b6e9b17893f06d8c86eaee7503f9c1409178cffa66c32798cefca75a219d61\" returns successfully" Jul 11 07:41:22.593997 kubelet[2423]: I0711 07:41:22.593907 2423 kubelet_node_status.go:72] "Attempting to register node" node="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:24.126810 kubelet[2423]: E0711 07:41:24.126758 2423 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4392-0-0-n-cdb6f4f5a9.novalocal\" not found" node="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:24.145033 kubelet[2423]: I0711 07:41:24.143609 2423 kubelet_node_status.go:75] "Successfully registered node" node="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:24.145033 kubelet[2423]: E0711 07:41:24.145032 2423 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4392-0-0-n-cdb6f4f5a9.novalocal\": node \"ci-4392-0-0-n-cdb6f4f5a9.novalocal\" not found" Jul 11 07:41:24.315860 kubelet[2423]: I0711 07:41:24.315787 2423 apiserver.go:52] "Watching apiserver" Jul 11 07:41:24.361532 kubelet[2423]: I0711 07:41:24.361465 2423 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 11 07:41:25.173299 kubelet[2423]: W0711 07:41:25.173200 2423 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 11 07:41:27.741385 systemd[1]: Reload requested from client PID 2696 ('systemctl') (unit session-9.scope)... Jul 11 07:41:27.743195 systemd[1]: Reloading... Jul 11 07:41:27.912016 zram_generator::config[2741]: No configuration found. Jul 11 07:41:28.071963 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 07:41:28.190745 kubelet[2423]: W0711 07:41:28.190533 2423 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 11 07:41:28.291434 systemd[1]: Reloading finished in 547 ms. Jul 11 07:41:28.330290 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 07:41:28.355929 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 07:41:28.356399 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 07:41:28.356530 systemd[1]: kubelet.service: Consumed 2.429s CPU time, 131.5M memory peak. Jul 11 07:41:28.362733 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 07:41:28.746656 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
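
The docker.socket warning that reappears on every daemon reload is cosmetic (systemd rewrites the legacy /var/run path itself), but it can be silenced without editing the shipped unit by overriding the socket in a drop-in. A sketch, assuming the stock unit sets ListenStream=/var/run/docker.sock as the log says:

    # the empty ListenStream= clears the inherited value before
    # re-adding the non-legacy path
    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' >/etc/systemd/system/docker.socket.d/10-runtime-dir.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload
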
Jul 11 07:41:28.756407 (kubelet)[2804]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 07:41:29.035182 kubelet[2804]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 07:41:29.035182 kubelet[2804]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 11 07:41:29.035182 kubelet[2804]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 07:41:29.035182 kubelet[2804]: I0711 07:41:29.034691 2804 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 07:41:29.050521 kubelet[2804]: I0711 07:41:29.050422 2804 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 11 07:41:29.050521 kubelet[2804]: I0711 07:41:29.050458 2804 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 07:41:29.051409 kubelet[2804]: I0711 07:41:29.051376 2804 server.go:934] "Client rotation is on, will bootstrap in background" Jul 11 07:41:29.055014 kubelet[2804]: I0711 07:41:29.054788 2804 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 11 07:41:29.059243 kubelet[2804]: I0711 07:41:29.059214 2804 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 07:41:29.076164 kubelet[2804]: I0711 07:41:29.075933 2804 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 11 07:41:29.086699 kubelet[2804]: I0711 07:41:29.086259 2804 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 07:41:29.088524 kubelet[2804]: I0711 07:41:29.088477 2804 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 11 07:41:29.088704 kubelet[2804]: I0711 07:41:29.088630 2804 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 07:41:29.088942 kubelet[2804]: I0711 07:41:29.088689 2804 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4392-0-0-n-cdb6f4f5a9.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 07:41:29.089241 kubelet[2804]: I0711 07:41:29.088965 2804 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 07:41:29.089241 kubelet[2804]: I0711 07:41:29.089016 2804 container_manager_linux.go:300] "Creating device plugin manager" Jul 11 07:41:29.089241 kubelet[2804]: I0711 07:41:29.089105 2804 state_mem.go:36] "Initialized new in-memory state store" Jul 11 07:41:29.089358 kubelet[2804]: I0711 07:41:29.089320 2804 kubelet.go:408] "Attempting to sync node with API server" Jul 11 07:41:29.089358 kubelet[2804]: I0711 07:41:29.089337 2804 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 07:41:29.089434 kubelet[2804]: I0711 07:41:29.089384 2804 kubelet.go:314] "Adding apiserver pod source" Jul 11 07:41:29.089434 kubelet[2804]: I0711 07:41:29.089427 2804 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 07:41:29.094840 kubelet[2804]: I0711 07:41:29.094792 2804 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 11 07:41:29.095662 kubelet[2804]: I0711 07:41:29.095590 2804 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 07:41:29.096503 kubelet[2804]: I0711 07:41:29.096468 2804 server.go:1274] "Started kubelet" Jul 11 07:41:29.116398 kubelet[2804]: I0711 07:41:29.116283 2804 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 07:41:29.126764 
kubelet[2804]: I0711 07:41:29.126592 2804 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 07:41:29.133775 kubelet[2804]: I0711 07:41:29.133656 2804 server.go:449] "Adding debug handlers to kubelet server" Jul 11 07:41:29.141789 kubelet[2804]: I0711 07:41:29.141734 2804 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 07:41:29.144771 kubelet[2804]: I0711 07:41:29.143121 2804 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 07:41:29.144771 kubelet[2804]: I0711 07:41:29.143894 2804 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 07:41:29.144890 kubelet[2804]: I0711 07:41:29.144824 2804 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 07:41:29.147001 kubelet[2804]: I0711 07:41:29.146300 2804 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 11 07:41:29.147001 kubelet[2804]: E0711 07:41:29.146616 2804 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4392-0-0-n-cdb6f4f5a9.novalocal\" not found" Jul 11 07:41:29.147549 kubelet[2804]: I0711 07:41:29.147522 2804 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 11 07:41:29.147717 kubelet[2804]: I0711 07:41:29.147695 2804 reconciler.go:26] "Reconciler: start to sync state" Jul 11 07:41:29.149274 kubelet[2804]: I0711 07:41:29.148115 2804 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 11 07:41:29.150594 kubelet[2804]: I0711 07:41:29.150096 2804 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 11 07:41:29.150594 kubelet[2804]: I0711 07:41:29.150165 2804 kubelet.go:2321] "Starting kubelet main sync loop" Jul 11 07:41:29.150594 kubelet[2804]: E0711 07:41:29.150229 2804 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 07:41:29.174687 kubelet[2804]: I0711 07:41:29.174066 2804 factory.go:221] Registration of the systemd container factory successfully Jul 11 07:41:29.174687 kubelet[2804]: I0711 07:41:29.174198 2804 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 07:41:29.191005 kubelet[2804]: E0711 07:41:29.190828 2804 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 07:41:29.192055 kubelet[2804]: I0711 07:41:29.192034 2804 factory.go:221] Registration of the containerd container factory successfully Jul 11 07:41:29.251692 kubelet[2804]: E0711 07:41:29.251647 2804 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 07:41:29.282112 kubelet[2804]: I0711 07:41:29.281490 2804 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 11 07:41:29.282112 kubelet[2804]: I0711 07:41:29.281511 2804 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 11 07:41:29.282112 kubelet[2804]: I0711 07:41:29.281623 2804 state_mem.go:36] "Initialized new in-memory state store" Jul 11 07:41:29.282112 kubelet[2804]: I0711 07:41:29.281903 2804 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 11 07:41:29.282112 kubelet[2804]: I0711 07:41:29.281917 2804 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 11 07:41:29.282112 kubelet[2804]: I0711 07:41:29.282012 2804 policy_none.go:49] "None policy: Start" Jul 11 07:41:29.283635 kubelet[2804]: I0711 07:41:29.283266 2804 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 11 07:41:29.283635 kubelet[2804]: I0711 07:41:29.283357 2804 state_mem.go:35] "Initializing new in-memory state store" Jul 11 07:41:29.283635 kubelet[2804]: I0711 07:41:29.283573 2804 state_mem.go:75] "Updated machine memory state" Jul 11 07:41:29.290075 kubelet[2804]: I0711 07:41:29.289712 2804 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 07:41:29.290075 kubelet[2804]: I0711 07:41:29.289921 2804 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 07:41:29.290075 kubelet[2804]: I0711 07:41:29.289948 2804 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 07:41:29.299184 kubelet[2804]: I0711 07:41:29.298331 2804 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 07:41:29.421022 kubelet[2804]: I0711 07:41:29.420262 2804 kubelet_node_status.go:72] "Attempting to register node" node="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:29.436593 kubelet[2804]: I0711 07:41:29.436352 2804 kubelet_node_status.go:111] "Node was previously registered" node="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:29.437116 kubelet[2804]: I0711 07:41:29.436940 2804 kubelet_node_status.go:75] "Successfully registered node" node="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:29.464424 kubelet[2804]: W0711 07:41:29.464274 2804 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 11 07:41:29.464618 kubelet[2804]: E0711 07:41:29.464500 2804 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:29.468892 kubelet[2804]: W0711 07:41:29.468840 2804 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 11 07:41:29.474183 kubelet[2804]: W0711 07:41:29.473447 2804 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 11 
07:41:29.474183 kubelet[2804]: E0711 07:41:29.473518 2804 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:29.550582 kubelet[2804]: I0711 07:41:29.549863 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/94dd2fdae141e91cb071209277979747-ca-certs\") pod \"kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal\" (UID: \"94dd2fdae141e91cb071209277979747\") " pod="kube-system/kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:29.550582 kubelet[2804]: I0711 07:41:29.549934 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/94dd2fdae141e91cb071209277979747-k8s-certs\") pod \"kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal\" (UID: \"94dd2fdae141e91cb071209277979747\") " pod="kube-system/kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:29.551212 kubelet[2804]: I0711 07:41:29.551156 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9d801a80cb49e408d2efc270d30c5fd8-flexvolume-dir\") pod \"kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal\" (UID: \"9d801a80cb49e408d2efc270d30c5fd8\") " pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:29.551290 kubelet[2804]: I0711 07:41:29.551229 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d801a80cb49e408d2efc270d30c5fd8-k8s-certs\") pod \"kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal\" (UID: \"9d801a80cb49e408d2efc270d30c5fd8\") " pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:29.551290 kubelet[2804]: I0711 07:41:29.551275 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/94dd2fdae141e91cb071209277979747-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal\" (UID: \"94dd2fdae141e91cb071209277979747\") " pod="kube-system/kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:29.551396 kubelet[2804]: I0711 07:41:29.551309 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d801a80cb49e408d2efc270d30c5fd8-ca-certs\") pod \"kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal\" (UID: \"9d801a80cb49e408d2efc270d30c5fd8\") " pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:29.551396 kubelet[2804]: I0711 07:41:29.551346 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9d801a80cb49e408d2efc270d30c5fd8-kubeconfig\") pod \"kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal\" (UID: \"9d801a80cb49e408d2efc270d30c5fd8\") " pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:29.551396 kubelet[2804]: I0711 07:41:29.551381 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d801a80cb49e408d2efc270d30c5fd8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal\" (UID: \"9d801a80cb49e408d2efc270d30c5fd8\") " pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:29.552858 kubelet[2804]: I0711 07:41:29.551417 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b42591ea292e73e5775e231f0503337-kubeconfig\") pod \"kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal\" (UID: \"1b42591ea292e73e5775e231f0503337\") " pod="kube-system/kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:30.119190 kubelet[2804]: I0711 07:41:30.119094 2804 apiserver.go:52] "Watching apiserver" Jul 11 07:41:30.149011 kubelet[2804]: I0711 07:41:30.148895 2804 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 11 07:41:30.248243 kubelet[2804]: W0711 07:41:30.248195 2804 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 11 07:41:30.248426 kubelet[2804]: E0711 07:41:30.248285 2804 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:30.251965 kubelet[2804]: W0711 07:41:30.251930 2804 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 11 07:41:30.252058 kubelet[2804]: E0711 07:41:30.252002 2804 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:41:30.318245 kubelet[2804]: I0711 07:41:30.318137 2804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podStartSLOduration=2.318068369 podStartE2EDuration="2.318068369s" podCreationTimestamp="2025-07-11 07:41:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 07:41:30.294057535 +0000 UTC m=+1.526986621" watchObservedRunningTime="2025-07-11 07:41:30.318068369 +0000 UTC m=+1.550997445" Jul 11 07:41:30.346001 kubelet[2804]: I0711 07:41:30.344921 2804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podStartSLOduration=5.344900606 podStartE2EDuration="5.344900606s" podCreationTimestamp="2025-07-11 07:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 07:41:30.320999875 +0000 UTC m=+1.553928951" watchObservedRunningTime="2025-07-11 07:41:30.344900606 +0000 UTC m=+1.577829682" Jul 11 07:41:30.383989 kubelet[2804]: I0711 07:41:30.383809 2804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podStartSLOduration=1.383793386 podStartE2EDuration="1.383793386s" podCreationTimestamp="2025-07-11 07:41:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 07:41:30.346398283 +0000 UTC m=+1.579327359" watchObservedRunningTime="2025-07-11 07:41:30.383793386 +0000 UTC m=+1.616722462" Jul 11 07:41:32.719666 kubelet[2804]: I0711 07:41:32.719567 2804 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 11 07:41:32.721204 kubelet[2804]: I0711 07:41:32.721153 2804 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 11 07:41:32.721619 containerd[1563]: time="2025-07-11T07:41:32.720096264Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 11 07:41:33.739229 systemd[1]: Created slice kubepods-besteffort-pod5709a52b_ab7a_42dc_ab85_1ee3b95ca334.slice - libcontainer container kubepods-besteffort-pod5709a52b_ab7a_42dc_ab85_1ee3b95ca334.slice. Jul 11 07:41:33.828704 kubelet[2804]: I0711 07:41:33.828510 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5709a52b-ab7a-42dc-ab85-1ee3b95ca334-xtables-lock\") pod \"kube-proxy-fvbvg\" (UID: \"5709a52b-ab7a-42dc-ab85-1ee3b95ca334\") " pod="kube-system/kube-proxy-fvbvg" Jul 11 07:41:33.829537 kubelet[2804]: I0711 07:41:33.828771 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v4zd\" (UniqueName: \"kubernetes.io/projected/5709a52b-ab7a-42dc-ab85-1ee3b95ca334-kube-api-access-6v4zd\") pod \"kube-proxy-fvbvg\" (UID: \"5709a52b-ab7a-42dc-ab85-1ee3b95ca334\") " pod="kube-system/kube-proxy-fvbvg" Jul 11 07:41:33.829537 kubelet[2804]: I0711 07:41:33.828830 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5709a52b-ab7a-42dc-ab85-1ee3b95ca334-kube-proxy\") pod \"kube-proxy-fvbvg\" (UID: \"5709a52b-ab7a-42dc-ab85-1ee3b95ca334\") " pod="kube-system/kube-proxy-fvbvg" Jul 11 07:41:33.829537 kubelet[2804]: I0711 07:41:33.828856 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5709a52b-ab7a-42dc-ab85-1ee3b95ca334-lib-modules\") pod \"kube-proxy-fvbvg\" (UID: \"5709a52b-ab7a-42dc-ab85-1ee3b95ca334\") " pod="kube-system/kube-proxy-fvbvg" Jul 11 07:41:33.898520 systemd[1]: Created slice kubepods-besteffort-podb888df97_3c70_41ba_a3f5_7ac75508eb3b.slice - libcontainer container kubepods-besteffort-podb888df97_3c70_41ba_a3f5_7ac75508eb3b.slice. 
Jul 11 07:41:34.031336 kubelet[2804]: I0711 07:41:34.030441 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbcpw\" (UniqueName: \"kubernetes.io/projected/b888df97-3c70-41ba-a3f5-7ac75508eb3b-kube-api-access-qbcpw\") pod \"tigera-operator-5bf8dfcb4-mplsp\" (UID: \"b888df97-3c70-41ba-a3f5-7ac75508eb3b\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-mplsp"
Jul 11 07:41:34.031336 kubelet[2804]: I0711 07:41:34.030527 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b888df97-3c70-41ba-a3f5-7ac75508eb3b-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-mplsp\" (UID: \"b888df97-3c70-41ba-a3f5-7ac75508eb3b\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-mplsp"
Jul 11 07:41:34.053696 containerd[1563]: time="2025-07-11T07:41:34.053498423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fvbvg,Uid:5709a52b-ab7a-42dc-ab85-1ee3b95ca334,Namespace:kube-system,Attempt:0,}"
Jul 11 07:41:34.157066 containerd[1563]: time="2025-07-11T07:41:34.156001092Z" level=info msg="connecting to shim c1c0a9fe6a87578bbc06bc2e3830e4fda4a79449930b32dd5f4be2b7e5e6909f" address="unix:///run/containerd/s/d96d1381132fb54185c1439d465267673d82f09139bdfa45108dc21c86b3974b" namespace=k8s.io protocol=ttrpc version=3
Jul 11 07:41:34.205656 containerd[1563]: time="2025-07-11T07:41:34.205612397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-mplsp,Uid:b888df97-3c70-41ba-a3f5-7ac75508eb3b,Namespace:tigera-operator,Attempt:0,}"
Jul 11 07:41:34.208400 systemd[1]: Started cri-containerd-c1c0a9fe6a87578bbc06bc2e3830e4fda4a79449930b32dd5f4be2b7e5e6909f.scope - libcontainer container c1c0a9fe6a87578bbc06bc2e3830e4fda4a79449930b32dd5f4be2b7e5e6909f.
Jul 11 07:41:34.344571 containerd[1563]: time="2025-07-11T07:41:34.344495353Z" level=info msg="connecting to shim d4e63947bca21e2d084c4995faabd79384a595a67fcdfbf24c08f94329d27fb6" address="unix:///run/containerd/s/1d2a678b6bec198581cc6411f0a23f0c64cd0b683f63b8789592857e68a53eb2" namespace=k8s.io protocol=ttrpc version=3
Jul 11 07:41:34.353227 containerd[1563]: time="2025-07-11T07:41:34.353118715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fvbvg,Uid:5709a52b-ab7a-42dc-ab85-1ee3b95ca334,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1c0a9fe6a87578bbc06bc2e3830e4fda4a79449930b32dd5f4be2b7e5e6909f\""
Jul 11 07:41:34.360841 containerd[1563]: time="2025-07-11T07:41:34.360700130Z" level=info msg="CreateContainer within sandbox \"c1c0a9fe6a87578bbc06bc2e3830e4fda4a79449930b32dd5f4be2b7e5e6909f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 11 07:41:34.384990 containerd[1563]: time="2025-07-11T07:41:34.384841343Z" level=info msg="Container 49ec2db2242e77ca47263cef205322186d77146e43bc24321ff23f04f1c660a3: CDI devices from CRI Config.CDIDevices: []"
Jul 11 07:41:34.387166 systemd[1]: Started cri-containerd-d4e63947bca21e2d084c4995faabd79384a595a67fcdfbf24c08f94329d27fb6.scope - libcontainer container d4e63947bca21e2d084c4995faabd79384a595a67fcdfbf24c08f94329d27fb6.
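[Annotation: the "connecting to shim" entries show containerd's CRI plugin dialing a per-sandbox shim over a unix domain socket and then speaking ttrpc on top of it. A minimal sketch of the dial step only; a real client would layer the ttrpc protocol over the connection rather than use it raw.]

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A socket path of the kind shown in the "connecting to shim" entries above.
	sock := "/run/containerd/s/d96d1381132fb54185c1439d465267673d82f09139bdfa45108dc21c86b3974b"

	// Dial the filesystem unix socket; containerd would next negotiate ttrpc here.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}
```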
Jul 11 07:41:34.415329 containerd[1563]: time="2025-07-11T07:41:34.415262340Z" level=info msg="CreateContainer within sandbox \"c1c0a9fe6a87578bbc06bc2e3830e4fda4a79449930b32dd5f4be2b7e5e6909f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"49ec2db2242e77ca47263cef205322186d77146e43bc24321ff23f04f1c660a3\""
Jul 11 07:41:34.417458 containerd[1563]: time="2025-07-11T07:41:34.417317190Z" level=info msg="StartContainer for \"49ec2db2242e77ca47263cef205322186d77146e43bc24321ff23f04f1c660a3\""
Jul 11 07:41:34.420919 containerd[1563]: time="2025-07-11T07:41:34.420767666Z" level=info msg="connecting to shim 49ec2db2242e77ca47263cef205322186d77146e43bc24321ff23f04f1c660a3" address="unix:///run/containerd/s/d96d1381132fb54185c1439d465267673d82f09139bdfa45108dc21c86b3974b" protocol=ttrpc version=3
Jul 11 07:41:34.451168 systemd[1]: Started cri-containerd-49ec2db2242e77ca47263cef205322186d77146e43bc24321ff23f04f1c660a3.scope - libcontainer container 49ec2db2242e77ca47263cef205322186d77146e43bc24321ff23f04f1c660a3.
Jul 11 07:41:34.476313 containerd[1563]: time="2025-07-11T07:41:34.476157334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-mplsp,Uid:b888df97-3c70-41ba-a3f5-7ac75508eb3b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d4e63947bca21e2d084c4995faabd79384a595a67fcdfbf24c08f94329d27fb6\""
Jul 11 07:41:34.480953 containerd[1563]: time="2025-07-11T07:41:34.480803031Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 11 07:41:34.524924 containerd[1563]: time="2025-07-11T07:41:34.524870108Z" level=info msg="StartContainer for \"49ec2db2242e77ca47263cef205322186d77146e43bc24321ff23f04f1c660a3\" returns successfully"
Jul 11 07:41:35.358842 kubelet[2804]: I0711 07:41:35.356940 2804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fvbvg" podStartSLOduration=2.356904316 podStartE2EDuration="2.356904316s" podCreationTimestamp="2025-07-11 07:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 07:41:35.356757764 +0000 UTC m=+6.589686900" watchObservedRunningTime="2025-07-11 07:41:35.356904316 +0000 UTC m=+6.589833402"
Jul 11 07:41:36.290528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2830156372.mount: Deactivated successfully.
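[Annotation: the pod_startup_latency_tracker entry prints timestamps in Go's default time.Time format, including a monotonic clock suffix (m=+...). A sketch of recomputing the kube-proxy-fvbvg startup duration from those strings; the tracker's own figure differs by a fraction of a millisecond because it subtracts its own clock readings.]

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

const layout = "2006-01-02 15:04:05 -0700 MST" // Go's default time.Time print format

// trimMono drops the monotonic-clock suffix ("m=+6.589686900") that the
// tracker includes when it prints a time.Time.
func trimMono(s string) string {
	if i := strings.Index(s, " m="); i >= 0 {
		return s[:i]
	}
	return s
}

func main() {
	created, _ := time.Parse(layout, "2025-07-11 07:41:33 +0000 UTC")
	running, _ := time.Parse(layout, trimMono("2025-07-11 07:41:35.356757764 +0000 UTC m=+6.589686900"))
	// Close to the reported podStartE2EDuration="2.356904316s" for kube-proxy-fvbvg.
	fmt.Println(running.Sub(created)) // 2.356757764s
}
```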
Jul 11 07:41:37.958343 containerd[1563]: time="2025-07-11T07:41:37.958116042Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 07:41:37.960781 containerd[1563]: time="2025-07-11T07:41:37.960733025Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 11 07:41:37.962193 containerd[1563]: time="2025-07-11T07:41:37.962125267Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 07:41:37.966509 containerd[1563]: time="2025-07-11T07:41:37.966449249Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 07:41:37.967849 containerd[1563]: time="2025-07-11T07:41:37.967554767Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 3.486353355s"
Jul 11 07:41:37.967849 containerd[1563]: time="2025-07-11T07:41:37.967637571Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 11 07:41:37.973164 containerd[1563]: time="2025-07-11T07:41:37.972730885Z" level=info msg="CreateContainer within sandbox \"d4e63947bca21e2d084c4995faabd79384a595a67fcdfbf24c08f94329d27fb6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 11 07:41:37.993583 containerd[1563]: time="2025-07-11T07:41:37.992207953Z" level=info msg="Container 2f402fa2d9a8931e1bc9923a84e95e8a47a2d94e916768bcfe1bdbb0e9f89584: CDI devices from CRI Config.CDIDevices: []"
Jul 11 07:41:38.000674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2761157003.mount: Deactivated successfully.
Jul 11 07:41:38.017808 containerd[1563]: time="2025-07-11T07:41:38.017662173Z" level=info msg="CreateContainer within sandbox \"d4e63947bca21e2d084c4995faabd79384a595a67fcdfbf24c08f94329d27fb6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2f402fa2d9a8931e1bc9923a84e95e8a47a2d94e916768bcfe1bdbb0e9f89584\""
Jul 11 07:41:38.019303 containerd[1563]: time="2025-07-11T07:41:38.019112655Z" level=info msg="StartContainer for \"2f402fa2d9a8931e1bc9923a84e95e8a47a2d94e916768bcfe1bdbb0e9f89584\""
Jul 11 07:41:38.023613 containerd[1563]: time="2025-07-11T07:41:38.023347434Z" level=info msg="connecting to shim 2f402fa2d9a8931e1bc9923a84e95e8a47a2d94e916768bcfe1bdbb0e9f89584" address="unix:///run/containerd/s/1d2a678b6bec198581cc6411f0a23f0c64cd0b683f63b8789592857e68a53eb2" protocol=ttrpc version=3
Jul 11 07:41:38.068188 systemd[1]: Started cri-containerd-2f402fa2d9a8931e1bc9923a84e95e8a47a2d94e916768bcfe1bdbb0e9f89584.scope - libcontainer container 2f402fa2d9a8931e1bc9923a84e95e8a47a2d94e916768bcfe1bdbb0e9f89584.
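[Annotation: the pull above reports bytes read=25056543 against an unpacked size of 25052538, completed in 3.486353355s. Rough transfer throughput from those two logged figures; simple arithmetic, not a containerd metric.]

```go
package main

import "fmt"

func main() {
	// Figures taken from the containerd entries above.
	bytesRead := 25056543.0 // "bytes read" reported when the pull stopped
	seconds := 3.486353355  // wall time from "Pulled image ... in 3.486353355s"

	mib := bytesRead / (1 << 20)
	fmt.Printf("pulled %.2f MiB in %.3fs => %.2f MiB/s\n", mib, seconds, mib/seconds)
	// Output: pulled 23.90 MiB in 3.486s => 6.85 MiB/s
}
```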
Jul 11 07:41:38.138786 containerd[1563]: time="2025-07-11T07:41:38.138642965Z" level=info msg="StartContainer for \"2f402fa2d9a8931e1bc9923a84e95e8a47a2d94e916768bcfe1bdbb0e9f89584\" returns successfully"
Jul 11 07:41:40.909802 kubelet[2804]: I0711 07:41:40.909411 2804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-mplsp" podStartSLOduration=4.417398057 podStartE2EDuration="7.90925943s" podCreationTimestamp="2025-07-11 07:41:33 +0000 UTC" firstStartedPulling="2025-07-11 07:41:34.477746428 +0000 UTC m=+5.710675504" lastFinishedPulling="2025-07-11 07:41:37.969607801 +0000 UTC m=+9.202536877" observedRunningTime="2025-07-11 07:41:38.310630907 +0000 UTC m=+9.543559994" watchObservedRunningTime="2025-07-11 07:41:40.90925943 +0000 UTC m=+12.142188516"
Jul 11 07:41:46.964271 sudo[1813]: pam_unix(sudo:session): session closed for user root
Jul 11 07:41:47.239256 sshd[1812]: Connection closed by 172.24.4.1 port 37472
Jul 11 07:41:47.242713 sshd-session[1809]: pam_unix(sshd:session): session closed for user core
Jul 11 07:41:47.258651 systemd[1]: sshd@6-172.24.4.223:22-172.24.4.1:37472.service: Deactivated successfully.
Jul 11 07:41:47.269802 systemd[1]: session-9.scope: Deactivated successfully.
Jul 11 07:41:47.272161 systemd[1]: session-9.scope: Consumed 9.197s CPU time, 229M memory peak.
Jul 11 07:41:47.276254 systemd-logind[1532]: Session 9 logged out. Waiting for processes to exit.
Jul 11 07:41:47.280240 systemd-logind[1532]: Removed session 9.
Jul 11 07:41:51.496863 systemd[1]: Created slice kubepods-besteffort-pod6baa22a7_acb9_4e1c_9b85_77cbb0c26d6c.slice - libcontainer container kubepods-besteffort-pod6baa22a7_acb9_4e1c_9b85_77cbb0c26d6c.slice.
Jul 11 07:41:51.638006 kubelet[2804]: I0711 07:41:51.637920 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnrv6\" (UniqueName: \"kubernetes.io/projected/6baa22a7-acb9-4e1c-9b85-77cbb0c26d6c-kube-api-access-qnrv6\") pod \"calico-typha-b49cd5fd5-nms9w\" (UID: \"6baa22a7-acb9-4e1c-9b85-77cbb0c26d6c\") " pod="calico-system/calico-typha-b49cd5fd5-nms9w"
Jul 11 07:41:51.639014 kubelet[2804]: I0711 07:41:51.638857 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6baa22a7-acb9-4e1c-9b85-77cbb0c26d6c-tigera-ca-bundle\") pod \"calico-typha-b49cd5fd5-nms9w\" (UID: \"6baa22a7-acb9-4e1c-9b85-77cbb0c26d6c\") " pod="calico-system/calico-typha-b49cd5fd5-nms9w"
Jul 11 07:41:51.639014 kubelet[2804]: I0711 07:41:51.638940 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6baa22a7-acb9-4e1c-9b85-77cbb0c26d6c-typha-certs\") pod \"calico-typha-b49cd5fd5-nms9w\" (UID: \"6baa22a7-acb9-4e1c-9b85-77cbb0c26d6c\") " pod="calico-system/calico-typha-b49cd5fd5-nms9w"
Jul 11 07:41:51.813845 containerd[1563]: time="2025-07-11T07:41:51.813643034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b49cd5fd5-nms9w,Uid:6baa22a7-acb9-4e1c-9b85-77cbb0c26d6c,Namespace:calico-system,Attempt:0,}"
Jul 11 07:41:51.904881 systemd[1]: Created slice kubepods-besteffort-podc8802e39_a710_44a5_b1a8_b2900b47d2ca.slice - libcontainer container kubepods-besteffort-podc8802e39_a710_44a5_b1a8_b2900b47d2ca.slice.
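[Annotation: in the tigera-operator entry above, podStartE2EDuration minus podStartSLOduration equals the image-pull window (lastFinishedPulling minus firstStartedPulling) exactly, consistent with the SLO figure excluding pull time. A quick check using only the logged values, with the m=... monotonic suffixes dropped before parsing.]

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the "Observed pod startup duration" entry for tigera-operator.
	e2e := 7909259430 * time.Nanosecond // podStartE2EDuration="7.90925943s"
	slo := 4417398057 * time.Nanosecond // podStartSLOduration=4.417398057

	layout := "2006-01-02 15:04:05 -0700 MST"
	first, _ := time.Parse(layout, "2025-07-11 07:41:34.477746428 +0000 UTC")
	last, _ := time.Parse(layout, "2025-07-11 07:41:37.969607801 +0000 UTC")

	fmt.Println(e2e - slo)       // 3.491861373s
	fmt.Println(last.Sub(first)) // 3.491861373s: the pull window matches exactly
}
```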
Jul 11 07:41:51.912444 containerd[1563]: time="2025-07-11T07:41:51.912356642Z" level=info msg="connecting to shim 1a3435e26b90f4efd9d16bbb5c5e3d9c1aa866b41a1e5b22affa258f3dead7f9" address="unix:///run/containerd/s/b3b9d97d0c93fb9f3e7d88877bf328cfa83bdfe042de6128a61cd28d18604bfd" namespace=k8s.io protocol=ttrpc version=3
Jul 11 07:41:52.044188 kubelet[2804]: I0711 07:41:52.044125 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c8802e39-a710-44a5-b1a8-b2900b47d2ca-flexvol-driver-host\") pod \"calico-node-m2tf8\" (UID: \"c8802e39-a710-44a5-b1a8-b2900b47d2ca\") " pod="calico-system/calico-node-m2tf8"
Jul 11 07:41:52.044892 kubelet[2804]: I0711 07:41:52.044414 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c8802e39-a710-44a5-b1a8-b2900b47d2ca-policysync\") pod \"calico-node-m2tf8\" (UID: \"c8802e39-a710-44a5-b1a8-b2900b47d2ca\") " pod="calico-system/calico-node-m2tf8"
Jul 11 07:41:52.044892 kubelet[2804]: I0711 07:41:52.044476 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8802e39-a710-44a5-b1a8-b2900b47d2ca-xtables-lock\") pod \"calico-node-m2tf8\" (UID: \"c8802e39-a710-44a5-b1a8-b2900b47d2ca\") " pod="calico-system/calico-node-m2tf8"
Jul 11 07:41:52.044892 kubelet[2804]: I0711 07:41:52.044503 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c8802e39-a710-44a5-b1a8-b2900b47d2ca-var-lib-calico\") pod \"calico-node-m2tf8\" (UID: \"c8802e39-a710-44a5-b1a8-b2900b47d2ca\") " pod="calico-system/calico-node-m2tf8"
Jul 11 07:41:52.044892 kubelet[2804]: I0711 07:41:52.044540 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8802e39-a710-44a5-b1a8-b2900b47d2ca-tigera-ca-bundle\") pod \"calico-node-m2tf8\" (UID: \"c8802e39-a710-44a5-b1a8-b2900b47d2ca\") " pod="calico-system/calico-node-m2tf8"
Jul 11 07:41:52.044892 kubelet[2804]: I0711 07:41:52.044583 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c8802e39-a710-44a5-b1a8-b2900b47d2ca-cni-log-dir\") pod \"calico-node-m2tf8\" (UID: \"c8802e39-a710-44a5-b1a8-b2900b47d2ca\") " pod="calico-system/calico-node-m2tf8"
Jul 11 07:41:52.045142 kubelet[2804]: I0711 07:41:52.044603 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c8802e39-a710-44a5-b1a8-b2900b47d2ca-var-run-calico\") pod \"calico-node-m2tf8\" (UID: \"c8802e39-a710-44a5-b1a8-b2900b47d2ca\") " pod="calico-system/calico-node-m2tf8"
Jul 11 07:41:52.045142 kubelet[2804]: I0711 07:41:52.044624 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c8802e39-a710-44a5-b1a8-b2900b47d2ca-node-certs\") pod \"calico-node-m2tf8\" (UID: \"c8802e39-a710-44a5-b1a8-b2900b47d2ca\") " pod="calico-system/calico-node-m2tf8"
Jul 11 07:41:52.045142 kubelet[2804]: I0711 07:41:52.044642 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnwqz\" (UniqueName: \"kubernetes.io/projected/c8802e39-a710-44a5-b1a8-b2900b47d2ca-kube-api-access-nnwqz\") pod \"calico-node-m2tf8\" (UID: \"c8802e39-a710-44a5-b1a8-b2900b47d2ca\") " pod="calico-system/calico-node-m2tf8"
Jul 11 07:41:52.045142 kubelet[2804]: I0711 07:41:52.044673 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c8802e39-a710-44a5-b1a8-b2900b47d2ca-cni-net-dir\") pod \"calico-node-m2tf8\" (UID: \"c8802e39-a710-44a5-b1a8-b2900b47d2ca\") " pod="calico-system/calico-node-m2tf8"
Jul 11 07:41:52.045142 kubelet[2804]: I0711 07:41:52.044722 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c8802e39-a710-44a5-b1a8-b2900b47d2ca-cni-bin-dir\") pod \"calico-node-m2tf8\" (UID: \"c8802e39-a710-44a5-b1a8-b2900b47d2ca\") " pod="calico-system/calico-node-m2tf8"
Jul 11 07:41:52.045307 kubelet[2804]: I0711 07:41:52.044751 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8802e39-a710-44a5-b1a8-b2900b47d2ca-lib-modules\") pod \"calico-node-m2tf8\" (UID: \"c8802e39-a710-44a5-b1a8-b2900b47d2ca\") " pod="calico-system/calico-node-m2tf8"
Jul 11 07:41:52.060945 systemd[1]: Started cri-containerd-1a3435e26b90f4efd9d16bbb5c5e3d9c1aa866b41a1e5b22affa258f3dead7f9.scope - libcontainer container 1a3435e26b90f4efd9d16bbb5c5e3d9c1aa866b41a1e5b22affa258f3dead7f9.
Jul 11 07:41:52.109098 kubelet[2804]: E0711 07:41:52.108682 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlrrv" podUID="3c88e405-5760-45b1-ac61-26a4ddd63df5"
Jul 11 07:41:52.163257 kubelet[2804]: E0711 07:41:52.158010 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 07:41:52.163257 kubelet[2804]: W0711 07:41:52.158060 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 07:41:52.163257 kubelet[2804]: E0711 07:41:52.158109 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The three-entry FlexVolume probe failure above repeats verbatim, apart from timestamps, dozens of times between 07:41:52.164 and 07:41:52.416; the repeats are omitted here, and the unrelated entries that were interleaved with them are kept below.]
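[Annotation: the repeated triplet is two failures compounding. Path lookup for the nodeagent~uds driver binary fails, so the probe gets empty output, and unmarshalling an empty byte slice produces Go's standard "unexpected end of JSON input". Both messages reproduce directly from the standard library; this is an illustration, not kubelet code.]

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// 1. Probing a driver that is not installed fails at path resolution.
	_, err := exec.LookPath("uds") // stand-in for the missing flexvolume driver binary
	fmt.Println(err)               // exec: "uds": executable file not found in $PATH

	// 2. With no driver output, decoding the empty response fails exactly the
	//    way the driver-call.go entries report.
	var status map[string]any
	fmt.Println(json.Unmarshal([]byte(""), &status)) // unexpected end of JSON input
}
```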
Error: unexpected end of JSON input" Jul 11 07:41:52.191942 kubelet[2804]: E0711 07:41:52.190442 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.191942 kubelet[2804]: W0711 07:41:52.190452 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.191942 kubelet[2804]: E0711 07:41:52.190463 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:41:52.202566 kubelet[2804]: E0711 07:41:52.202478 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.203172 kubelet[2804]: W0711 07:41:52.202522 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.203623 kubelet[2804]: E0711 07:41:52.203289 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:41:52.211952 containerd[1563]: time="2025-07-11T07:41:52.211890275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m2tf8,Uid:c8802e39-a710-44a5-b1a8-b2900b47d2ca,Namespace:calico-system,Attempt:0,}" Jul 11 07:41:52.254101 kubelet[2804]: E0711 07:41:52.253365 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.254101 kubelet[2804]: W0711 07:41:52.253518 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.254101 kubelet[2804]: E0711 07:41:52.253548 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:41:52.254101 kubelet[2804]: I0711 07:41:52.253680 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3c88e405-5760-45b1-ac61-26a4ddd63df5-registration-dir\") pod \"csi-node-driver-vlrrv\" (UID: \"3c88e405-5760-45b1-ac61-26a4ddd63df5\") " pod="calico-system/csi-node-driver-vlrrv" Jul 11 07:41:52.255176 kubelet[2804]: E0711 07:41:52.255046 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.255176 kubelet[2804]: W0711 07:41:52.255063 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.255176 kubelet[2804]: E0711 07:41:52.255104 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 07:41:52.255176 kubelet[2804]: I0711 07:41:52.255127 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c88e405-5760-45b1-ac61-26a4ddd63df5-kubelet-dir\") pod \"csi-node-driver-vlrrv\" (UID: \"3c88e405-5760-45b1-ac61-26a4ddd63df5\") " pod="calico-system/csi-node-driver-vlrrv" Jul 11 07:41:52.256844 kubelet[2804]: E0711 07:41:52.256784 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.257039 kubelet[2804]: W0711 07:41:52.256926 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.257271 kubelet[2804]: E0711 07:41:52.257104 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:41:52.257271 kubelet[2804]: I0711 07:41:52.257164 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3c88e405-5760-45b1-ac61-26a4ddd63df5-varrun\") pod \"csi-node-driver-vlrrv\" (UID: \"3c88e405-5760-45b1-ac61-26a4ddd63df5\") " pod="calico-system/csi-node-driver-vlrrv" Jul 11 07:41:52.257877 kubelet[2804]: E0711 07:41:52.257861 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.258251 kubelet[2804]: W0711 07:41:52.258041 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.258251 kubelet[2804]: E0711 07:41:52.258088 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:41:52.260190 kubelet[2804]: E0711 07:41:52.259495 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.260190 kubelet[2804]: W0711 07:41:52.259510 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.260190 kubelet[2804]: E0711 07:41:52.259550 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:41:52.260845 kubelet[2804]: E0711 07:41:52.260827 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.261388 kubelet[2804]: W0711 07:41:52.261352 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.261647 kubelet[2804]: E0711 07:41:52.261569 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 07:41:52.261707 kubelet[2804]: I0711 07:41:52.261617 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8swg\" (UniqueName: \"kubernetes.io/projected/3c88e405-5760-45b1-ac61-26a4ddd63df5-kube-api-access-x8swg\") pod \"csi-node-driver-vlrrv\" (UID: \"3c88e405-5760-45b1-ac61-26a4ddd63df5\") " pod="calico-system/csi-node-driver-vlrrv" Jul 11 07:41:52.262477 kubelet[2804]: E0711 07:41:52.262436 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.262643 kubelet[2804]: W0711 07:41:52.262614 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.263046 kubelet[2804]: E0711 07:41:52.263016 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:41:52.264788 kubelet[2804]: E0711 07:41:52.264756 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.264788 kubelet[2804]: W0711 07:41:52.264776 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.264788 kubelet[2804]: E0711 07:41:52.264794 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:41:52.265907 kubelet[2804]: E0711 07:41:52.265195 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.265907 kubelet[2804]: W0711 07:41:52.265210 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.265907 kubelet[2804]: E0711 07:41:52.265220 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:41:52.271817 kubelet[2804]: E0711 07:41:52.271535 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.271817 kubelet[2804]: W0711 07:41:52.271647 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.271817 kubelet[2804]: E0711 07:41:52.271678 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 07:41:52.274359 kubelet[2804]: E0711 07:41:52.274073 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.274359 kubelet[2804]: W0711 07:41:52.274094 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.274359 kubelet[2804]: E0711 07:41:52.274136 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:41:52.276001 kubelet[2804]: E0711 07:41:52.274790 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.276001 kubelet[2804]: W0711 07:41:52.274950 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.276001 kubelet[2804]: E0711 07:41:52.275070 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:41:52.276686 kubelet[2804]: E0711 07:41:52.276497 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.276686 kubelet[2804]: W0711 07:41:52.276510 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.276686 kubelet[2804]: E0711 07:41:52.276522 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:41:52.276686 kubelet[2804]: I0711 07:41:52.276555 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3c88e405-5760-45b1-ac61-26a4ddd63df5-socket-dir\") pod \"csi-node-driver-vlrrv\" (UID: \"3c88e405-5760-45b1-ac61-26a4ddd63df5\") " pod="calico-system/csi-node-driver-vlrrv" Jul 11 07:41:52.278469 kubelet[2804]: E0711 07:41:52.278162 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.278469 kubelet[2804]: W0711 07:41:52.278252 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.278469 kubelet[2804]: E0711 07:41:52.278271 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 07:41:52.279379 kubelet[2804]: E0711 07:41:52.279021 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.279379 kubelet[2804]: W0711 07:41:52.279035 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.279379 kubelet[2804]: E0711 07:41:52.279046 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:41:52.280753 containerd[1563]: time="2025-07-11T07:41:52.280653829Z" level=info msg="connecting to shim 42732e38ea0463427df7cc0bcf0d6bcffdbec384aee7b52463fe2fe8d1b29396" address="unix:///run/containerd/s/60c0fd3a759bb70ae6046dcc06a1719d809c34b79172f87e19512126f5b62b26" namespace=k8s.io protocol=ttrpc version=3 Jul 11 07:41:52.337672 systemd[1]: Started cri-containerd-42732e38ea0463427df7cc0bcf0d6bcffdbec384aee7b52463fe2fe8d1b29396.scope - libcontainer container 42732e38ea0463427df7cc0bcf0d6bcffdbec384aee7b52463fe2fe8d1b29396. Jul 11 07:41:52.385419 kubelet[2804]: E0711 07:41:52.385094 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.385419 kubelet[2804]: W0711 07:41:52.385125 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.385419 kubelet[2804]: E0711 07:41:52.385149 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:41:52.385419 kubelet[2804]: E0711 07:41:52.385358 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.385419 kubelet[2804]: W0711 07:41:52.385372 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.385419 kubelet[2804]: E0711 07:41:52.385384 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:41:52.386996 kubelet[2804]: E0711 07:41:52.386775 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.386996 kubelet[2804]: W0711 07:41:52.386787 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.386996 kubelet[2804]: E0711 07:41:52.386799 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 07:41:52.387113 kubelet[2804]: E0711 07:41:52.387023 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.387113 kubelet[2804]: W0711 07:41:52.387036 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.387113 kubelet[2804]: E0711 07:41:52.387047 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:41:52.389640 kubelet[2804]: E0711 07:41:52.389562 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.389640 kubelet[2804]: W0711 07:41:52.389582 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.389640 kubelet[2804]: E0711 07:41:52.389606 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:41:52.390998 kubelet[2804]: E0711 07:41:52.389900 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.390998 kubelet[2804]: W0711 07:41:52.389917 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.390998 kubelet[2804]: E0711 07:41:52.389928 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:41:52.390998 kubelet[2804]: E0711 07:41:52.390116 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.390998 kubelet[2804]: W0711 07:41:52.390127 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.390998 kubelet[2804]: E0711 07:41:52.390136 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 07:41:52.391746 kubelet[2804]: E0711 07:41:52.391718 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.391746 kubelet[2804]: W0711 07:41:52.391738 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.391746 kubelet[2804]: E0711 07:41:52.391752 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 07:41:52.392141 kubelet[2804]: E0711 07:41:52.392118 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:52.392141 kubelet[2804]: W0711 07:41:52.392135 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:52.392234 kubelet[2804]: E0711 07:41:52.392149 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same driver-call.go:262 / driver-call.go:149 / plugins.go:691 triplet repeats verbatim with fresh timestamps through Jul 11 07:41:52.415816; duplicates omitted ...]
Jul 11 07:41:52.480786 containerd[1563]: time="2025-07-11T07:41:52.480635492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b49cd5fd5-nms9w,Uid:6baa22a7-acb9-4e1c-9b85-77cbb0c26d6c,Namespace:calico-system,Attempt:0,} returns sandbox id \"1a3435e26b90f4efd9d16bbb5c5e3d9c1aa866b41a1e5b22affa258f3dead7f9\"" Jul 11 07:41:52.488809 containerd[1563]: time="2025-07-11T07:41:52.487281152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 11 07:41:52.489120 containerd[1563]: time="2025-07-11T07:41:52.489032809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m2tf8,Uid:c8802e39-a710-44a5-b1a8-b2900b47d2ca,Namespace:calico-system,Attempt:0,} returns sandbox id \"42732e38ea0463427df7cc0bcf0d6bcffdbec384aee7b52463fe2fe8d1b29396\"" Jul 11 07:41:54.152031 kubelet[2804]: E0711 07:41:54.151662 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlrrv" podUID="3c88e405-5760-45b1-ac61-26a4ddd63df5" Jul 11 07:41:54.523621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount956545793.mount: Deactivated successfully.
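[Editor's note: the repeated triplet above is one failure reported three ways. Kubelet's dynamic plugin prober execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init; the exec fails because the binary is not installed yet, stdout is empty, and unmarshalling an empty string yields "unexpected end of JSON input". Per the FlexVolume call convention, a driver only has to answer init with a JSON status object on stdout. A minimal Go sketch of that handshake follows; the struct shape mirrors the documented spec, and the program is an illustration, not Calico's actual driver.]

```go
// flexvol-sketch: answers kubelet's FlexVolume driver-call "init" handshake.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// DriverStatus mirrors the JSON object the FlexVolume spec expects on stdout.
type DriverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func emit(s DriverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Empty stdout is exactly what breaks kubelet's unmarshal in the log
		// above; a driver must print at least this much on init.
		emit(DriverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		return
	}
	// Any call the driver does not implement: report "Not supported" so
	// kubelet can fall back instead of parsing garbage.
	emit(DriverStatus{Status: "Not supported"})
	os.Exit(1)
}
```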
Jul 11 07:41:56.151481 kubelet[2804]: E0711 07:41:56.151376 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlrrv" podUID="3c88e405-5760-45b1-ac61-26a4ddd63df5" Jul 11 07:41:56.902755 containerd[1563]: time="2025-07-11T07:41:56.902665321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:41:56.903454 containerd[1563]: time="2025-07-11T07:41:56.903427227Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 11 07:41:56.905398 containerd[1563]: time="2025-07-11T07:41:56.905365455Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:41:56.908589 containerd[1563]: time="2025-07-11T07:41:56.908548572Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:41:56.909612 containerd[1563]: time="2025-07-11T07:41:56.909578039Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 4.420801621s" Jul 11 07:41:56.909822 containerd[1563]: time="2025-07-11T07:41:56.909707903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 11 07:41:56.911714 containerd[1563]: time="2025-07-11T07:41:56.911139031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 11 07:41:56.932783 containerd[1563]: time="2025-07-11T07:41:56.932739627Z" level=info msg="CreateContainer within sandbox \"1a3435e26b90f4efd9d16bbb5c5e3d9c1aa866b41a1e5b22affa258f3dead7f9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 11 07:41:56.955142 containerd[1563]: time="2025-07-11T07:41:56.951242707Z" level=info msg="Container e7b9f086c8a7068f87135d829e88d3e76c3a032a2350c024d1a6e0416064a505: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:41:56.954129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount172836817.mount: Deactivated successfully. 
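[Editor's note: the ImageCreate / stop-pulling / Pulled sequence above is containerd's view of a single CRI image pull: 35,233,364 bytes in 4.42s, roughly 8 MB/s. For reference, the same pull can be driven directly through the classic containerd Go client; the socket path and the k8s.io namespace below match how CRI-managed images are stored on a host like this, but the snippet is illustrative, not kubelet's code.]

```go
// pull-sketch: replicate the "PullImage ghcr.io/flatcar/calico/typha:v3.30.2"
// step through containerd's Go client.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.2", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
}
```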
Jul 11 07:41:56.973057 containerd[1563]: time="2025-07-11T07:41:56.972996039Z" level=info msg="CreateContainer within sandbox \"1a3435e26b90f4efd9d16bbb5c5e3d9c1aa866b41a1e5b22affa258f3dead7f9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e7b9f086c8a7068f87135d829e88d3e76c3a032a2350c024d1a6e0416064a505\"" Jul 11 07:41:56.974378 containerd[1563]: time="2025-07-11T07:41:56.973930187Z" level=info msg="StartContainer for \"e7b9f086c8a7068f87135d829e88d3e76c3a032a2350c024d1a6e0416064a505\"" Jul 11 07:41:56.976767 containerd[1563]: time="2025-07-11T07:41:56.976707605Z" level=info msg="connecting to shim e7b9f086c8a7068f87135d829e88d3e76c3a032a2350c024d1a6e0416064a505" address="unix:///run/containerd/s/b3b9d97d0c93fb9f3e7d88877bf328cfa83bdfe042de6128a61cd28d18604bfd" protocol=ttrpc version=3 Jul 11 07:41:57.024244 systemd[1]: Started cri-containerd-e7b9f086c8a7068f87135d829e88d3e76c3a032a2350c024d1a6e0416064a505.scope - libcontainer container e7b9f086c8a7068f87135d829e88d3e76c3a032a2350c024d1a6e0416064a505. Jul 11 07:41:57.176100 containerd[1563]: time="2025-07-11T07:41:57.175912300Z" level=info msg="StartContainer for \"e7b9f086c8a7068f87135d829e88d3e76c3a032a2350c024d1a6e0416064a505\" returns successfully"
Jul 11 07:41:57.559346 kubelet[2804]: E0711 07:41:57.559300 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:57.559346 kubelet[2804]: W0711 07:41:57.559331 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:57.559814 kubelet[2804]: E0711 07:41:57.559364 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same driver-call.go:262 / driver-call.go:149 / plugins.go:691 triplet repeats verbatim with fresh timestamps through Jul 11 07:41:57.675798; duplicates omitted ...]
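[Editor's note: the cadence of these bursts tracks pod lifecycle activity (sandbox creation at 07:41:52, the calico-typha container start at 07:41:57, probe handling at 07:41:58). That is consistent with kubelet re-running its dynamic FlexVolume plugin probe on demand when it resolves volume plugins, rather than on a fixed timer; each probe re-execs the still-missing uds binary and fails identically. This reading is an inference from the timestamps, not something the log states.]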
Jul 11 07:41:58.152059 kubelet[2804]: E0711 07:41:58.151836 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlrrv" podUID="3c88e405-5760-45b1-ac61-26a4ddd63df5" Jul 11 07:41:58.474128 kubelet[2804]: I0711 07:41:58.473452 2804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 07:41:58.477246 kubelet[2804]: E0711 07:41:58.477187 2804 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 07:41:58.477458 kubelet[2804]: W0711 07:41:58.477421 2804 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 07:41:58.477697 kubelet[2804]: E0711 07:41:58.477660 2804 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same driver-call.go:262 / driver-call.go:149 / plugins.go:691 triplet repeats verbatim, occasionally interleaved across goroutines, with fresh timestamps through Jul 11 07:41:58.588942; duplicates omitted ...]
Jul 11 07:41:59.218047 containerd[1563]: time="2025-07-11T07:41:59.217952753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:41:59.220351 containerd[1563]: time="2025-07-11T07:41:59.220295068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 11 07:41:59.222054 containerd[1563]: time="2025-07-11T07:41:59.221990322Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:41:59.227583 containerd[1563]: time="2025-07-11T07:41:59.227482265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:41:59.228410 containerd[1563]: time="2025-07-11T07:41:59.228362072Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 2.317185491s" Jul 11 07:41:59.228535 containerd[1563]: time="2025-07-11T07:41:59.228515268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 11 07:41:59.231759 containerd[1563]: time="2025-07-11T07:41:59.231706162Z" level=info msg="CreateContainer within sandbox \"42732e38ea0463427df7cc0bcf0d6bcffdbec384aee7b52463fe2fe8d1b29396\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 11 07:41:59.259006 containerd[1563]: time="2025-07-11T07:41:59.258918550Z" level=info msg="Container fcd58410a8d1f113839997a3c42f722e1fe531cac8626b9663d03d40d8e88ff9: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:41:59.301757 containerd[1563]: time="2025-07-11T07:41:59.301675131Z" level=info msg="CreateContainer within sandbox \"42732e38ea0463427df7cc0bcf0d6bcffdbec384aee7b52463fe2fe8d1b29396\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fcd58410a8d1f113839997a3c42f722e1fe531cac8626b9663d03d40d8e88ff9\"" Jul 11 07:41:59.303744 containerd[1563]: time="2025-07-11T07:41:59.303675757Z" level=info msg="StartContainer for \"fcd58410a8d1f113839997a3c42f722e1fe531cac8626b9663d03d40d8e88ff9\"" Jul 11 07:41:59.309007 containerd[1563]: time="2025-07-11T07:41:59.308873598Z" level=info msg="connecting to shim fcd58410a8d1f113839997a3c42f722e1fe531cac8626b9663d03d40d8e88ff9" address="unix:///run/containerd/s/60c0fd3a759bb70ae6046dcc06a1719d809c34b79172f87e19512126f5b62b26" protocol=ttrpc version=3 Jul 11 07:41:59.355473 systemd[1]: Started cri-containerd-fcd58410a8d1f113839997a3c42f722e1fe531cac8626b9663d03d40d8e88ff9.scope - libcontainer container fcd58410a8d1f113839997a3c42f722e1fe531cac8626b9663d03d40d8e88ff9.
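[Editor's note: the flexvol-driver container started here is built from Calico's pod2daemon-flexvol image; its documented role is to install the uds FlexVolume driver binary onto the host under the kubelet plugin directory, i.e. the /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds path whose absence has been generating the probe failures above. Once it has run, those nodeagent~uds errors should stop appearing.]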
Jul 11 07:41:59.429512 containerd[1563]: time="2025-07-11T07:41:59.429308752Z" level=info msg="StartContainer for \"fcd58410a8d1f113839997a3c42f722e1fe531cac8626b9663d03d40d8e88ff9\" returns successfully" Jul 11 07:41:59.447146 systemd[1]: cri-containerd-fcd58410a8d1f113839997a3c42f722e1fe531cac8626b9663d03d40d8e88ff9.scope: Deactivated successfully. Jul 11 07:41:59.455714 containerd[1563]: time="2025-07-11T07:41:59.455583806Z" level=info msg="received exit event container_id:\"fcd58410a8d1f113839997a3c42f722e1fe531cac8626b9663d03d40d8e88ff9\" id:\"fcd58410a8d1f113839997a3c42f722e1fe531cac8626b9663d03d40d8e88ff9\" pid:3494 exited_at:{seconds:1752219719 nanos:454414306}" Jul 11 07:41:59.456075 containerd[1563]: time="2025-07-11T07:41:59.455988423Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fcd58410a8d1f113839997a3c42f722e1fe531cac8626b9663d03d40d8e88ff9\" id:\"fcd58410a8d1f113839997a3c42f722e1fe531cac8626b9663d03d40d8e88ff9\" pid:3494 exited_at:{seconds:1752219719 nanos:454414306}" Jul 11 07:41:59.524724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fcd58410a8d1f113839997a3c42f722e1fe531cac8626b9663d03d40d8e88ff9-rootfs.mount: Deactivated successfully. Jul 11 07:41:59.531597 kubelet[2804]: I0711 07:41:59.531480 2804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b49cd5fd5-nms9w" podStartSLOduration=4.107234509 podStartE2EDuration="8.531396105s" podCreationTimestamp="2025-07-11 07:41:51 +0000 UTC" firstStartedPulling="2025-07-11 07:41:52.486492267 +0000 UTC m=+23.719421343" lastFinishedPulling="2025-07-11 07:41:56.910653863 +0000 UTC m=+28.143582939" observedRunningTime="2025-07-11 07:41:57.521091656 +0000 UTC m=+28.754020742" watchObservedRunningTime="2025-07-11 07:41:59.531396105 +0000 UTC m=+30.764325191" Jul 11 07:42:00.151789 kubelet[2804]: E0711 07:42:00.151695 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlrrv" podUID="3c88e405-5760-45b1-ac61-26a4ddd63df5" Jul 11 07:42:01.522384 containerd[1563]: time="2025-07-11T07:42:01.521453147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 11 07:42:01.602811 kubelet[2804]: I0711 07:42:01.602534 2804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 07:42:02.151736 kubelet[2804]: E0711 07:42:02.151489 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlrrv" podUID="3c88e405-5760-45b1-ac61-26a4ddd63df5" Jul 11 07:42:04.153163 kubelet[2804]: E0711 07:42:04.152379 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlrrv" podUID="3c88e405-5760-45b1-ac61-26a4ddd63df5" Jul 11 07:42:06.151652 kubelet[2804]: E0711 07:42:06.151557 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-vlrrv" podUID="3c88e405-5760-45b1-ac61-26a4ddd63df5" Jul 11 07:42:08.151439 kubelet[2804]: E0711 07:42:08.151259 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlrrv" podUID="3c88e405-5760-45b1-ac61-26a4ddd63df5" Jul 11 07:42:08.839345 containerd[1563]: time="2025-07-11T07:42:08.839191885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:42:08.842765 containerd[1563]: time="2025-07-11T07:42:08.842737578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 11 07:42:08.843820 containerd[1563]: time="2025-07-11T07:42:08.843750686Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:42:08.848551 containerd[1563]: time="2025-07-11T07:42:08.848436377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:42:08.849551 containerd[1563]: time="2025-07-11T07:42:08.849186087Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 7.323261306s" Jul 11 07:42:08.849551 containerd[1563]: time="2025-07-11T07:42:08.849249056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 11 07:42:08.855672 containerd[1563]: time="2025-07-11T07:42:08.855609849Z" level=info msg="CreateContainer within sandbox \"42732e38ea0463427df7cc0bcf0d6bcffdbec384aee7b52463fe2fe8d1b29396\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 11 07:42:08.869738 containerd[1563]: time="2025-07-11T07:42:08.869296281Z" level=info msg="Container 86f8ab1ac8e2225e3b67db054c6cca3044aa714bd441c09d74031f9fc1d71b07: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:42:08.876457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2833252047.mount: Deactivated successfully. 
Jul 11 07:42:08.901255 containerd[1563]: time="2025-07-11T07:42:08.901183261Z" level=info msg="CreateContainer within sandbox \"42732e38ea0463427df7cc0bcf0d6bcffdbec384aee7b52463fe2fe8d1b29396\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"86f8ab1ac8e2225e3b67db054c6cca3044aa714bd441c09d74031f9fc1d71b07\"" Jul 11 07:42:08.902945 containerd[1563]: time="2025-07-11T07:42:08.902846571Z" level=info msg="StartContainer for \"86f8ab1ac8e2225e3b67db054c6cca3044aa714bd441c09d74031f9fc1d71b07\"" Jul 11 07:42:08.906074 containerd[1563]: time="2025-07-11T07:42:08.906003938Z" level=info msg="connecting to shim 86f8ab1ac8e2225e3b67db054c6cca3044aa714bd441c09d74031f9fc1d71b07" address="unix:///run/containerd/s/60c0fd3a759bb70ae6046dcc06a1719d809c34b79172f87e19512126f5b62b26" protocol=ttrpc version=3 Jul 11 07:42:08.954195 systemd[1]: Started cri-containerd-86f8ab1ac8e2225e3b67db054c6cca3044aa714bd441c09d74031f9fc1d71b07.scope - libcontainer container 86f8ab1ac8e2225e3b67db054c6cca3044aa714bd441c09d74031f9fc1d71b07. Jul 11 07:42:09.119180 containerd[1563]: time="2025-07-11T07:42:09.118847043Z" level=info msg="StartContainer for \"86f8ab1ac8e2225e3b67db054c6cca3044aa714bd441c09d74031f9fc1d71b07\" returns successfully" Jul 11 07:42:10.151038 kubelet[2804]: E0711 07:42:10.150680 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlrrv" podUID="3c88e405-5760-45b1-ac61-26a4ddd63df5" Jul 11 07:42:12.160046 kubelet[2804]: E0711 07:42:12.159681 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlrrv" podUID="3c88e405-5760-45b1-ac61-26a4ddd63df5" Jul 11 07:42:12.418174 containerd[1563]: time="2025-07-11T07:42:12.417789860Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 07:42:12.426605 systemd[1]: cri-containerd-86f8ab1ac8e2225e3b67db054c6cca3044aa714bd441c09d74031f9fc1d71b07.scope: Deactivated successfully. Jul 11 07:42:12.427807 containerd[1563]: time="2025-07-11T07:42:12.427772672Z" level=info msg="received exit event container_id:\"86f8ab1ac8e2225e3b67db054c6cca3044aa714bd441c09d74031f9fc1d71b07\" id:\"86f8ab1ac8e2225e3b67db054c6cca3044aa714bd441c09d74031f9fc1d71b07\" pid:3560 exited_at:{seconds:1752219732 nanos:427316199}" Jul 11 07:42:12.428646 containerd[1563]: time="2025-07-11T07:42:12.428272016Z" level=info msg="TaskExit event in podsandbox handler container_id:\"86f8ab1ac8e2225e3b67db054c6cca3044aa714bd441c09d74031f9fc1d71b07\" id:\"86f8ab1ac8e2225e3b67db054c6cca3044aa714bd441c09d74031f9fc1d71b07\" pid:3560 exited_at:{seconds:1752219732 nanos:427316199}" Jul 11 07:42:12.430483 systemd[1]: cri-containerd-86f8ab1ac8e2225e3b67db054c6cca3044aa714bd441c09d74031f9fc1d71b07.scope: Consumed 2.235s CPU time, 191.7M memory peak, 171.2M written to disk. 
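
The cni-reload error just above is containerd reacting to install-cni writing /etc/cni/net.d/calico-kubeconfig: the WRITE event triggers a re-scan of the config directory, and because the kubeconfig is not itself a network configuration, the loader still finds nothing to initialize the CNI plugin with. A rough sketch of that scan, assuming the conventional CNI config extensions (the real loader is containerd's go-cni/libcni dependency):

// cniconf-scan: rough sketch of the check behind "no network config
// found in /etc/cni/net.d". A file like calico-kubeconfig has none of
// the config extensions, so the directory still counts as empty.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cniConfigFiles(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var confs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, filepath.Join(dir, e.Name()))
		}
	}
	return confs, nil
}

func main() {
	confs, err := cniConfigFiles("/etc/cni/net.d")
	if err != nil || len(confs) == 0 {
		fmt.Println("no network config found in /etc/cni/net.d: cni plugin not initialized")
		os.Exit(1)
	}
	fmt.Println("CNI config candidates:", confs)
}
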
Jul 11 07:42:12.487507 kubelet[2804]: I0711 07:42:12.487456 2804 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 11 07:42:12.488931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86f8ab1ac8e2225e3b67db054c6cca3044aa714bd441c09d74031f9fc1d71b07-rootfs.mount: Deactivated successfully. Jul 11 07:42:12.755273 systemd[1]: Created slice kubepods-burstable-pod64e1eb3d_fa8f_4852_b1eb_a0c30aa60ee6.slice - libcontainer container kubepods-burstable-pod64e1eb3d_fa8f_4852_b1eb_a0c30aa60ee6.slice. Jul 11 07:42:12.770331 systemd[1]: Created slice kubepods-burstable-pod53c06b1f_c154_48d5_b67f_3acf36516035.slice - libcontainer container kubepods-burstable-pod53c06b1f_c154_48d5_b67f_3acf36516035.slice. Jul 11 07:42:12.790393 systemd[1]: Created slice kubepods-besteffort-pod40383d9a_5fd3_45e2_be69_48ac62030be0.slice - libcontainer container kubepods-besteffort-pod40383d9a_5fd3_45e2_be69_48ac62030be0.slice. Jul 11 07:42:12.791611 kubelet[2804]: I0711 07:42:12.791510 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64e1eb3d-fa8f-4852-b1eb-a0c30aa60ee6-config-volume\") pod \"coredns-7c65d6cfc9-7qjzp\" (UID: \"64e1eb3d-fa8f-4852-b1eb-a0c30aa60ee6\") " pod="kube-system/coredns-7c65d6cfc9-7qjzp" Jul 11 07:42:12.791787 kubelet[2804]: I0711 07:42:12.791643 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbqg8\" (UniqueName: \"kubernetes.io/projected/64e1eb3d-fa8f-4852-b1eb-a0c30aa60ee6-kube-api-access-vbqg8\") pod \"coredns-7c65d6cfc9-7qjzp\" (UID: \"64e1eb3d-fa8f-4852-b1eb-a0c30aa60ee6\") " pod="kube-system/coredns-7c65d6cfc9-7qjzp" Jul 11 07:42:12.802071 systemd[1]: Created slice kubepods-besteffort-podcf924fec_bc5d_4fed_9b11_0ab45a34d4a6.slice - libcontainer container kubepods-besteffort-podcf924fec_bc5d_4fed_9b11_0ab45a34d4a6.slice. Jul 11 07:42:12.812446 systemd[1]: Created slice kubepods-besteffort-podc1f33f55_a860_486e_bf1a_b91510a46c1d.slice - libcontainer container kubepods-besteffort-podc1f33f55_a860_486e_bf1a_b91510a46c1d.slice. Jul 11 07:42:12.818859 systemd[1]: Created slice kubepods-besteffort-pod4abaf656_f2e8_4404_bfd1_0657de6a798a.slice - libcontainer container kubepods-besteffort-pod4abaf656_f2e8_4404_bfd1_0657de6a798a.slice. Jul 11 07:42:12.826301 systemd[1]: Created slice kubepods-besteffort-pod4d2ffc01_365d_42db_8763_7ec53842a98f.slice - libcontainer container kubepods-besteffort-pod4d2ffc01_365d_42db_8763_7ec53842a98f.slice. 
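
The Created slice lines above follow kubelet's systemd cgroup driver naming: kubepods, the pod's QoS class, then pod plus the UID with dashes mapped to underscores; the \x2d sequences in mount and netns unit names elsewhere in this log are systemd's own escaping of literal dashes. A small sketch of the mapping:

// podSliceName sketches the kubelet systemd-cgroup-driver naming seen
// in the "Created slice" entries above.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// e.g. the burstable coredns pod created above
	fmt.Println(podSliceName("burstable", "64e1eb3d-fa8f-4852-b1eb-a0c30aa60ee6"))
	// -> kubepods-burstable-pod64e1eb3d_fa8f_4852_b1eb_a0c30aa60ee6.slice
}
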
Jul 11 07:42:12.892223 kubelet[2804]: I0711 07:42:12.892082 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53c06b1f-c154-48d5-b67f-3acf36516035-config-volume\") pod \"coredns-7c65d6cfc9-kvgnr\" (UID: \"53c06b1f-c154-48d5-b67f-3acf36516035\") " pod="kube-system/coredns-7c65d6cfc9-kvgnr" Jul 11 07:42:12.892223 kubelet[2804]: I0711 07:42:12.892187 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4d2ffc01-365d-42db-8763-7ec53842a98f-goldmane-key-pair\") pod \"goldmane-58fd7646b9-hczk7\" (UID: \"4d2ffc01-365d-42db-8763-7ec53842a98f\") " pod="calico-system/goldmane-58fd7646b9-hczk7" Jul 11 07:42:12.892688 kubelet[2804]: I0711 07:42:12.892271 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pjwm\" (UniqueName: \"kubernetes.io/projected/4d2ffc01-365d-42db-8763-7ec53842a98f-kube-api-access-7pjwm\") pod \"goldmane-58fd7646b9-hczk7\" (UID: \"4d2ffc01-365d-42db-8763-7ec53842a98f\") " pod="calico-system/goldmane-58fd7646b9-hczk7" Jul 11 07:42:12.892688 kubelet[2804]: I0711 07:42:12.892352 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkbvh\" (UniqueName: \"kubernetes.io/projected/c1f33f55-a860-486e-bf1a-b91510a46c1d-kube-api-access-hkbvh\") pod \"calico-apiserver-667bcfd89f-2krz4\" (UID: \"c1f33f55-a860-486e-bf1a-b91510a46c1d\") " pod="calico-apiserver/calico-apiserver-667bcfd89f-2krz4" Jul 11 07:42:12.892688 kubelet[2804]: I0711 07:42:12.892535 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c1f33f55-a860-486e-bf1a-b91510a46c1d-calico-apiserver-certs\") pod \"calico-apiserver-667bcfd89f-2krz4\" (UID: \"c1f33f55-a860-486e-bf1a-b91510a46c1d\") " pod="calico-apiserver/calico-apiserver-667bcfd89f-2krz4" Jul 11 07:42:12.892688 kubelet[2804]: I0711 07:42:12.892595 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf45q\" (UniqueName: \"kubernetes.io/projected/53c06b1f-c154-48d5-b67f-3acf36516035-kube-api-access-bf45q\") pod \"coredns-7c65d6cfc9-kvgnr\" (UID: \"53c06b1f-c154-48d5-b67f-3acf36516035\") " pod="kube-system/coredns-7c65d6cfc9-kvgnr" Jul 11 07:42:12.892688 kubelet[2804]: I0711 07:42:12.892651 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d2ffc01-365d-42db-8763-7ec53842a98f-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-hczk7\" (UID: \"4d2ffc01-365d-42db-8763-7ec53842a98f\") " pod="calico-system/goldmane-58fd7646b9-hczk7" Jul 11 07:42:12.893403 kubelet[2804]: I0711 07:42:12.892723 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blbhn\" (UniqueName: \"kubernetes.io/projected/40383d9a-5fd3-45e2-be69-48ac62030be0-kube-api-access-blbhn\") pod \"calico-kube-controllers-8644849955-pzffc\" (UID: \"40383d9a-5fd3-45e2-be69-48ac62030be0\") " pod="calico-system/calico-kube-controllers-8644849955-pzffc" Jul 11 07:42:12.893403 kubelet[2804]: I0711 07:42:12.892785 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" 
(UniqueName: \"kubernetes.io/secret/4abaf656-f2e8-4404-bfd1-0657de6a798a-calico-apiserver-certs\") pod \"calico-apiserver-667bcfd89f-qbsvk\" (UID: \"4abaf656-f2e8-4404-bfd1-0657de6a798a\") " pod="calico-apiserver/calico-apiserver-667bcfd89f-qbsvk" Jul 11 07:42:12.893403 kubelet[2804]: I0711 07:42:12.892841 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf924fec-bc5d-4fed-9b11-0ab45a34d4a6-whisker-ca-bundle\") pod \"whisker-768fbd65fb-6p6lm\" (UID: \"cf924fec-bc5d-4fed-9b11-0ab45a34d4a6\") " pod="calico-system/whisker-768fbd65fb-6p6lm" Jul 11 07:42:12.893403 kubelet[2804]: I0711 07:42:12.892933 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbc5x\" (UniqueName: \"kubernetes.io/projected/cf924fec-bc5d-4fed-9b11-0ab45a34d4a6-kube-api-access-hbc5x\") pod \"whisker-768fbd65fb-6p6lm\" (UID: \"cf924fec-bc5d-4fed-9b11-0ab45a34d4a6\") " pod="calico-system/whisker-768fbd65fb-6p6lm" Jul 11 07:42:12.893403 kubelet[2804]: I0711 07:42:12.893075 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d2ffc01-365d-42db-8763-7ec53842a98f-config\") pod \"goldmane-58fd7646b9-hczk7\" (UID: \"4d2ffc01-365d-42db-8763-7ec53842a98f\") " pod="calico-system/goldmane-58fd7646b9-hczk7" Jul 11 07:42:12.893936 kubelet[2804]: I0711 07:42:12.893140 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40383d9a-5fd3-45e2-be69-48ac62030be0-tigera-ca-bundle\") pod \"calico-kube-controllers-8644849955-pzffc\" (UID: \"40383d9a-5fd3-45e2-be69-48ac62030be0\") " pod="calico-system/calico-kube-controllers-8644849955-pzffc" Jul 11 07:42:12.893936 kubelet[2804]: I0711 07:42:12.893205 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7q6j\" (UniqueName: \"kubernetes.io/projected/4abaf656-f2e8-4404-bfd1-0657de6a798a-kube-api-access-z7q6j\") pod \"calico-apiserver-667bcfd89f-qbsvk\" (UID: \"4abaf656-f2e8-4404-bfd1-0657de6a798a\") " pod="calico-apiserver/calico-apiserver-667bcfd89f-qbsvk" Jul 11 07:42:12.893936 kubelet[2804]: I0711 07:42:12.893302 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cf924fec-bc5d-4fed-9b11-0ab45a34d4a6-whisker-backend-key-pair\") pod \"whisker-768fbd65fb-6p6lm\" (UID: \"cf924fec-bc5d-4fed-9b11-0ab45a34d4a6\") " pod="calico-system/whisker-768fbd65fb-6p6lm" Jul 11 07:42:13.074593 containerd[1563]: time="2025-07-11T07:42:13.074222074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7qjzp,Uid:64e1eb3d-fa8f-4852-b1eb-a0c30aa60ee6,Namespace:kube-system,Attempt:0,}" Jul 11 07:42:13.081491 containerd[1563]: time="2025-07-11T07:42:13.081269781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kvgnr,Uid:53c06b1f-c154-48d5-b67f-3acf36516035,Namespace:kube-system,Attempt:0,}" Jul 11 07:42:13.101724 containerd[1563]: time="2025-07-11T07:42:13.101585728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8644849955-pzffc,Uid:40383d9a-5fd3-45e2-be69-48ac62030be0,Namespace:calico-system,Attempt:0,}" Jul 11 07:42:13.112091 containerd[1563]: 
time="2025-07-11T07:42:13.111959124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-768fbd65fb-6p6lm,Uid:cf924fec-bc5d-4fed-9b11-0ab45a34d4a6,Namespace:calico-system,Attempt:0,}" Jul 11 07:42:13.118914 containerd[1563]: time="2025-07-11T07:42:13.118844324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-667bcfd89f-2krz4,Uid:c1f33f55-a860-486e-bf1a-b91510a46c1d,Namespace:calico-apiserver,Attempt:0,}" Jul 11 07:42:13.123272 containerd[1563]: time="2025-07-11T07:42:13.123192667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-667bcfd89f-qbsvk,Uid:4abaf656-f2e8-4404-bfd1-0657de6a798a,Namespace:calico-apiserver,Attempt:0,}" Jul 11 07:42:13.134724 containerd[1563]: time="2025-07-11T07:42:13.134456005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-hczk7,Uid:4d2ffc01-365d-42db-8763-7ec53842a98f,Namespace:calico-system,Attempt:0,}" Jul 11 07:42:13.367398 containerd[1563]: time="2025-07-11T07:42:13.366603373Z" level=error msg="Failed to destroy network for sandbox \"2fdbc4c7e18422cbe8fb411d98761ee141837fa21dd83009eedfddad7b42225c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:13.373075 containerd[1563]: time="2025-07-11T07:42:13.372503871Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-768fbd65fb-6p6lm,Uid:cf924fec-bc5d-4fed-9b11-0ab45a34d4a6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fdbc4c7e18422cbe8fb411d98761ee141837fa21dd83009eedfddad7b42225c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:13.374380 kubelet[2804]: E0711 07:42:13.373921 2804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fdbc4c7e18422cbe8fb411d98761ee141837fa21dd83009eedfddad7b42225c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:13.374380 kubelet[2804]: E0711 07:42:13.374138 2804 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fdbc4c7e18422cbe8fb411d98761ee141837fa21dd83009eedfddad7b42225c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-768fbd65fb-6p6lm" Jul 11 07:42:13.374380 kubelet[2804]: E0711 07:42:13.374193 2804 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fdbc4c7e18422cbe8fb411d98761ee141837fa21dd83009eedfddad7b42225c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-768fbd65fb-6p6lm" Jul 11 07:42:13.374949 kubelet[2804]: E0711 07:42:13.374291 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"whisker-768fbd65fb-6p6lm_calico-system(cf924fec-bc5d-4fed-9b11-0ab45a34d4a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-768fbd65fb-6p6lm_calico-system(cf924fec-bc5d-4fed-9b11-0ab45a34d4a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2fdbc4c7e18422cbe8fb411d98761ee141837fa21dd83009eedfddad7b42225c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-768fbd65fb-6p6lm" podUID="cf924fec-bc5d-4fed-9b11-0ab45a34d4a6" Jul 11 07:42:13.395351 containerd[1563]: time="2025-07-11T07:42:13.395131039Z" level=error msg="Failed to destroy network for sandbox \"b229a444d1b55634a32b5fed06f3b93ba453caa52c570ff38e3b368d2da48598\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:13.398388 containerd[1563]: time="2025-07-11T07:42:13.397903261Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8644849955-pzffc,Uid:40383d9a-5fd3-45e2-be69-48ac62030be0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b229a444d1b55634a32b5fed06f3b93ba453caa52c570ff38e3b368d2da48598\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:13.398896 kubelet[2804]: E0711 07:42:13.398385 2804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b229a444d1b55634a32b5fed06f3b93ba453caa52c570ff38e3b368d2da48598\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:13.398896 kubelet[2804]: E0711 07:42:13.398468 2804 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b229a444d1b55634a32b5fed06f3b93ba453caa52c570ff38e3b368d2da48598\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8644849955-pzffc" Jul 11 07:42:13.398896 kubelet[2804]: E0711 07:42:13.398495 2804 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b229a444d1b55634a32b5fed06f3b93ba453caa52c570ff38e3b368d2da48598\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8644849955-pzffc" Jul 11 07:42:13.399227 kubelet[2804]: E0711 07:42:13.398561 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8644849955-pzffc_calico-system(40383d9a-5fd3-45e2-be69-48ac62030be0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8644849955-pzffc_calico-system(40383d9a-5fd3-45e2-be69-48ac62030be0)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"b229a444d1b55634a32b5fed06f3b93ba453caa52c570ff38e3b368d2da48598\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8644849955-pzffc" podUID="40383d9a-5fd3-45e2-be69-48ac62030be0" Jul 11 07:42:13.423273 containerd[1563]: time="2025-07-11T07:42:13.423191690Z" level=error msg="Failed to destroy network for sandbox \"c950a57a554013dfc3551dccabd3de4610f5d5e1fac4c80de2a17937eb7ea412\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:13.426018 containerd[1563]: time="2025-07-11T07:42:13.425855128Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-667bcfd89f-2krz4,Uid:c1f33f55-a860-486e-bf1a-b91510a46c1d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c950a57a554013dfc3551dccabd3de4610f5d5e1fac4c80de2a17937eb7ea412\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:13.427115 kubelet[2804]: E0711 07:42:13.426950 2804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c950a57a554013dfc3551dccabd3de4610f5d5e1fac4c80de2a17937eb7ea412\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:13.427532 kubelet[2804]: E0711 07:42:13.427103 2804 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c950a57a554013dfc3551dccabd3de4610f5d5e1fac4c80de2a17937eb7ea412\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-667bcfd89f-2krz4" Jul 11 07:42:13.427532 kubelet[2804]: E0711 07:42:13.427164 2804 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c950a57a554013dfc3551dccabd3de4610f5d5e1fac4c80de2a17937eb7ea412\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-667bcfd89f-2krz4" Jul 11 07:42:13.427532 kubelet[2804]: E0711 07:42:13.427221 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-667bcfd89f-2krz4_calico-apiserver(c1f33f55-a860-486e-bf1a-b91510a46c1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-667bcfd89f-2krz4_calico-apiserver(c1f33f55-a860-486e-bf1a-b91510a46c1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c950a57a554013dfc3551dccabd3de4610f5d5e1fac4c80de2a17937eb7ea412\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-667bcfd89f-2krz4" podUID="c1f33f55-a860-486e-bf1a-b91510a46c1d" Jul 11 07:42:13.436334 containerd[1563]: time="2025-07-11T07:42:13.436101273Z" level=error msg="Failed to destroy network for sandbox \"cd1cb5e274d8795dfe0eab2b6eeb83f9ba7d356928780a0d7f58361886cacb87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:13.438989 containerd[1563]: time="2025-07-11T07:42:13.438876541Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-667bcfd89f-qbsvk,Uid:4abaf656-f2e8-4404-bfd1-0657de6a798a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd1cb5e274d8795dfe0eab2b6eeb83f9ba7d356928780a0d7f58361886cacb87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:13.439181 kubelet[2804]: E0711 07:42:13.439136 2804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd1cb5e274d8795dfe0eab2b6eeb83f9ba7d356928780a0d7f58361886cacb87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:13.439733 kubelet[2804]: E0711 07:42:13.439206 2804 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd1cb5e274d8795dfe0eab2b6eeb83f9ba7d356928780a0d7f58361886cacb87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-667bcfd89f-qbsvk" Jul 11 07:42:13.439733 kubelet[2804]: E0711 07:42:13.439231 2804 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd1cb5e274d8795dfe0eab2b6eeb83f9ba7d356928780a0d7f58361886cacb87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-667bcfd89f-qbsvk" Jul 11 07:42:13.439733 kubelet[2804]: E0711 07:42:13.439281 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-667bcfd89f-qbsvk_calico-apiserver(4abaf656-f2e8-4404-bfd1-0657de6a798a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-667bcfd89f-qbsvk_calico-apiserver(4abaf656-f2e8-4404-bfd1-0657de6a798a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd1cb5e274d8795dfe0eab2b6eeb83f9ba7d356928780a0d7f58361886cacb87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-667bcfd89f-qbsvk" podUID="4abaf656-f2e8-4404-bfd1-0657de6a798a" Jul 11 07:42:13.449560 containerd[1563]: time="2025-07-11T07:42:13.449268442Z" level=error msg="Failed to destroy network for sandbox 
\"3808e4cf955d7341c2af1c9c1f72d9c18b814ad571f743c8ef85768d888ac36f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:13.451865 containerd[1563]: time="2025-07-11T07:42:13.451227776Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kvgnr,Uid:53c06b1f-c154-48d5-b67f-3acf36516035,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3808e4cf955d7341c2af1c9c1f72d9c18b814ad571f743c8ef85768d888ac36f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:13.452018 kubelet[2804]: E0711 07:42:13.451584 2804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3808e4cf955d7341c2af1c9c1f72d9c18b814ad571f743c8ef85768d888ac36f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:13.452018 kubelet[2804]: E0711 07:42:13.451655 2804 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3808e4cf955d7341c2af1c9c1f72d9c18b814ad571f743c8ef85768d888ac36f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kvgnr" Jul 11 07:42:13.452018 kubelet[2804]: E0711 07:42:13.451692 2804 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3808e4cf955d7341c2af1c9c1f72d9c18b814ad571f743c8ef85768d888ac36f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kvgnr" Jul 11 07:42:13.452218 kubelet[2804]: E0711 07:42:13.451739 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-kvgnr_kube-system(53c06b1f-c154-48d5-b67f-3acf36516035)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-kvgnr_kube-system(53c06b1f-c154-48d5-b67f-3acf36516035)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3808e4cf955d7341c2af1c9c1f72d9c18b814ad571f743c8ef85768d888ac36f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kvgnr" podUID="53c06b1f-c154-48d5-b67f-3acf36516035" Jul 11 07:42:13.455275 containerd[1563]: time="2025-07-11T07:42:13.455227510Z" level=error msg="Failed to destroy network for sandbox \"02cf0181b524838ba9b75cb69b50bbb5a13559b04972e5ce40be1c95746ad7ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:13.455818 containerd[1563]: time="2025-07-11T07:42:13.455694904Z" level=error msg="Failed to 
destroy network for sandbox \"3e18f7bdc2ce4a7813ac2c830f56dafe6e4620c626b2ccf0a0b73277494db419\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:13.457675 containerd[1563]: time="2025-07-11T07:42:13.457483987Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-hczk7,Uid:4d2ffc01-365d-42db-8763-7ec53842a98f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"02cf0181b524838ba9b75cb69b50bbb5a13559b04972e5ce40be1c95746ad7ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:13.458790 kubelet[2804]: E0711 07:42:13.457855 2804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02cf0181b524838ba9b75cb69b50bbb5a13559b04972e5ce40be1c95746ad7ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:13.458790 kubelet[2804]: E0711 07:42:13.457933 2804 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02cf0181b524838ba9b75cb69b50bbb5a13559b04972e5ce40be1c95746ad7ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-hczk7" Jul 11 07:42:13.459171 kubelet[2804]: E0711 07:42:13.458940 2804 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02cf0181b524838ba9b75cb69b50bbb5a13559b04972e5ce40be1c95746ad7ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-hczk7" Jul 11 07:42:13.459372 kubelet[2804]: E0711 07:42:13.459052 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-hczk7_calico-system(4d2ffc01-365d-42db-8763-7ec53842a98f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-hczk7_calico-system(4d2ffc01-365d-42db-8763-7ec53842a98f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02cf0181b524838ba9b75cb69b50bbb5a13559b04972e5ce40be1c95746ad7ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-hczk7" podUID="4d2ffc01-365d-42db-8763-7ec53842a98f" Jul 11 07:42:13.459776 containerd[1563]: time="2025-07-11T07:42:13.459744371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7qjzp,Uid:64e1eb3d-fa8f-4852-b1eb-a0c30aa60ee6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e18f7bdc2ce4a7813ac2c830f56dafe6e4620c626b2ccf0a0b73277494db419\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:13.460140 kubelet[2804]: E0711 07:42:13.460101 2804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e18f7bdc2ce4a7813ac2c830f56dafe6e4620c626b2ccf0a0b73277494db419\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:13.460227 kubelet[2804]: E0711 07:42:13.460162 2804 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e18f7bdc2ce4a7813ac2c830f56dafe6e4620c626b2ccf0a0b73277494db419\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-7qjzp" Jul 11 07:42:13.460227 kubelet[2804]: E0711 07:42:13.460190 2804 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e18f7bdc2ce4a7813ac2c830f56dafe6e4620c626b2ccf0a0b73277494db419\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-7qjzp" Jul 11 07:42:13.460309 kubelet[2804]: E0711 07:42:13.460263 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-7qjzp_kube-system(64e1eb3d-fa8f-4852-b1eb-a0c30aa60ee6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-7qjzp_kube-system(64e1eb3d-fa8f-4852-b1eb-a0c30aa60ee6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e18f7bdc2ce4a7813ac2c830f56dafe6e4620c626b2ccf0a0b73277494db419\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-7qjzp" podUID="64e1eb3d-fa8f-4852-b1eb-a0c30aa60ee6" Jul 11 07:42:13.623726 containerd[1563]: time="2025-07-11T07:42:13.622154286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 11 07:42:14.168009 systemd[1]: Created slice kubepods-besteffort-pod3c88e405_5760_45b1_ac61_26a4ddd63df5.slice - libcontainer container kubepods-besteffort-pod3c88e405_5760_45b1_ac61_26a4ddd63df5.slice. 
Jul 11 07:42:14.174947 containerd[1563]: time="2025-07-11T07:42:14.174868119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vlrrv,Uid:3c88e405-5760-45b1-ac61-26a4ddd63df5,Namespace:calico-system,Attempt:0,}" Jul 11 07:42:14.297169 containerd[1563]: time="2025-07-11T07:42:14.297028001Z" level=error msg="Failed to destroy network for sandbox \"efffc8d0f266ff1324072c795cbea3a100b79107198b3d5d2bf1d17126e31931\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:14.301487 containerd[1563]: time="2025-07-11T07:42:14.301321278Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vlrrv,Uid:3c88e405-5760-45b1-ac61-26a4ddd63df5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"efffc8d0f266ff1324072c795cbea3a100b79107198b3d5d2bf1d17126e31931\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:14.302636 kubelet[2804]: E0711 07:42:14.302499 2804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efffc8d0f266ff1324072c795cbea3a100b79107198b3d5d2bf1d17126e31931\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:14.302949 kubelet[2804]: E0711 07:42:14.302913 2804 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efffc8d0f266ff1324072c795cbea3a100b79107198b3d5d2bf1d17126e31931\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vlrrv" Jul 11 07:42:14.303315 kubelet[2804]: E0711 07:42:14.303205 2804 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efffc8d0f266ff1324072c795cbea3a100b79107198b3d5d2bf1d17126e31931\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vlrrv" Jul 11 07:42:14.303737 kubelet[2804]: E0711 07:42:14.303669 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vlrrv_calico-system(3c88e405-5760-45b1-ac61-26a4ddd63df5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vlrrv_calico-system(3c88e405-5760-45b1-ac61-26a4ddd63df5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"efffc8d0f266ff1324072c795cbea3a100b79107198b3d5d2bf1d17126e31931\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vlrrv" podUID="3c88e405-5760-45b1-ac61-26a4ddd63df5" Jul 11 07:42:14.308598 systemd[1]: run-netns-cni\x2d01908a51\x2db598\x2d4690\x2d8431\x2dafeba67713b2.mount: Deactivated successfully. 
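
Every sandbox failure in this stretch trips on the same stat: Calico's CNI plugin identifies its node by reading /var/lib/calico/nodename, a file the calico/node container writes at startup, and that container's image is still being pulled at this point (PullImage ghcr.io/flatcar/calico/node:v3.30.2 above). A sketch of the check behind the error text, assuming only what the message itself states:

// nodename-check: sketch of the lookup behind "stat
// /var/lib/calico/nodename: no such file or directory". Until
// calico/node runs and writes this file, every CNI add/delete fails,
// which is what keeps all of these sandboxes pending.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

func readNodename() (string, error) {
	b, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("calico nodename:", name)
}
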
Jul 11 07:42:24.153224 containerd[1563]: time="2025-07-11T07:42:24.152694488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kvgnr,Uid:53c06b1f-c154-48d5-b67f-3acf36516035,Namespace:kube-system,Attempt:0,}" Jul 11 07:42:24.318496 containerd[1563]: time="2025-07-11T07:42:24.318413522Z" level=error msg="Failed to destroy network for sandbox \"7c49f125c659ffee0e95b2474988d3827721ff857376ab97ade8d0ac359701dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:24.320909 systemd[1]: run-netns-cni\x2d01374b47\x2d5674\x2d72cc\x2d90bc\x2dcdc6d1022f2f.mount: Deactivated successfully. Jul 11 07:42:24.327011 containerd[1563]: time="2025-07-11T07:42:24.326522133Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kvgnr,Uid:53c06b1f-c154-48d5-b67f-3acf36516035,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c49f125c659ffee0e95b2474988d3827721ff857376ab97ade8d0ac359701dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:24.328375 kubelet[2804]: E0711 07:42:24.328229 2804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c49f125c659ffee0e95b2474988d3827721ff857376ab97ade8d0ac359701dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:24.329709 kubelet[2804]: E0711 07:42:24.329306 2804 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c49f125c659ffee0e95b2474988d3827721ff857376ab97ade8d0ac359701dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kvgnr" Jul 11 07:42:24.329709 kubelet[2804]: E0711 07:42:24.329415 2804 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c49f125c659ffee0e95b2474988d3827721ff857376ab97ade8d0ac359701dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kvgnr" Jul 11 07:42:24.329709 kubelet[2804]: E0711 07:42:24.329529 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-kvgnr_kube-system(53c06b1f-c154-48d5-b67f-3acf36516035)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-kvgnr_kube-system(53c06b1f-c154-48d5-b67f-3acf36516035)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c49f125c659ffee0e95b2474988d3827721ff857376ab97ade8d0ac359701dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kvgnr" 
podUID="53c06b1f-c154-48d5-b67f-3acf36516035" Jul 11 07:42:25.154046 containerd[1563]: time="2025-07-11T07:42:25.153204728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vlrrv,Uid:3c88e405-5760-45b1-ac61-26a4ddd63df5,Namespace:calico-system,Attempt:0,}" Jul 11 07:42:25.155678 containerd[1563]: time="2025-07-11T07:42:25.155645153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-768fbd65fb-6p6lm,Uid:cf924fec-bc5d-4fed-9b11-0ab45a34d4a6,Namespace:calico-system,Attempt:0,}" Jul 11 07:42:25.378420 containerd[1563]: time="2025-07-11T07:42:25.378338777Z" level=error msg="Failed to destroy network for sandbox \"9f0011b8bd1036cd3fb79338c2ff4882f1be8bba1631ed99943644946eb85282\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:25.381848 systemd[1]: run-netns-cni\x2d016830eb\x2d7b8a\x2de410\x2d7425\x2d31feb7abbe5e.mount: Deactivated successfully. Jul 11 07:42:25.384994 containerd[1563]: time="2025-07-11T07:42:25.384398731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vlrrv,Uid:3c88e405-5760-45b1-ac61-26a4ddd63df5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f0011b8bd1036cd3fb79338c2ff4882f1be8bba1631ed99943644946eb85282\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:25.385120 kubelet[2804]: E0711 07:42:25.384738 2804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f0011b8bd1036cd3fb79338c2ff4882f1be8bba1631ed99943644946eb85282\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:25.385120 kubelet[2804]: E0711 07:42:25.384865 2804 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f0011b8bd1036cd3fb79338c2ff4882f1be8bba1631ed99943644946eb85282\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vlrrv" Jul 11 07:42:25.385120 kubelet[2804]: E0711 07:42:25.384894 2804 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f0011b8bd1036cd3fb79338c2ff4882f1be8bba1631ed99943644946eb85282\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vlrrv" Jul 11 07:42:25.385525 kubelet[2804]: E0711 07:42:25.385082 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vlrrv_calico-system(3c88e405-5760-45b1-ac61-26a4ddd63df5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vlrrv_calico-system(3c88e405-5760-45b1-ac61-26a4ddd63df5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"9f0011b8bd1036cd3fb79338c2ff4882f1be8bba1631ed99943644946eb85282\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vlrrv" podUID="3c88e405-5760-45b1-ac61-26a4ddd63df5" Jul 11 07:42:25.453649 containerd[1563]: time="2025-07-11T07:42:25.453284734Z" level=error msg="Failed to destroy network for sandbox \"7d891c7dc4e577825632561c29eff6442b5a6cdbb87b4b2a80e27dfda9870313\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:25.456048 systemd[1]: run-netns-cni\x2d15636a13\x2d81b6\x2d3f27\x2da070\x2d1b2ff2c38e25.mount: Deactivated successfully. Jul 11 07:42:25.459155 containerd[1563]: time="2025-07-11T07:42:25.459044090Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-768fbd65fb-6p6lm,Uid:cf924fec-bc5d-4fed-9b11-0ab45a34d4a6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d891c7dc4e577825632561c29eff6442b5a6cdbb87b4b2a80e27dfda9870313\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:25.461135 kubelet[2804]: E0711 07:42:25.459473 2804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d891c7dc4e577825632561c29eff6442b5a6cdbb87b4b2a80e27dfda9870313\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 07:42:25.461135 kubelet[2804]: E0711 07:42:25.459540 2804 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d891c7dc4e577825632561c29eff6442b5a6cdbb87b4b2a80e27dfda9870313\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-768fbd65fb-6p6lm" Jul 11 07:42:25.461135 kubelet[2804]: E0711 07:42:25.459576 2804 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d891c7dc4e577825632561c29eff6442b5a6cdbb87b4b2a80e27dfda9870313\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-768fbd65fb-6p6lm" Jul 11 07:42:25.461347 kubelet[2804]: E0711 07:42:25.459679 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-768fbd65fb-6p6lm_calico-system(cf924fec-bc5d-4fed-9b11-0ab45a34d4a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-768fbd65fb-6p6lm_calico-system(cf924fec-bc5d-4fed-9b11-0ab45a34d4a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d891c7dc4e577825632561c29eff6442b5a6cdbb87b4b2a80e27dfda9870313\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" pod="calico-system/whisker-768fbd65fb-6p6lm" podUID="cf924fec-bc5d-4fed-9b11-0ab45a34d4a6" Jul 11 07:42:26.372634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount700608561.mount: Deactivated successfully. Jul 11 07:42:26.420280 containerd[1563]: time="2025-07-11T07:42:26.420131632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:42:26.422227 containerd[1563]: time="2025-07-11T07:42:26.421211739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 11 07:42:26.423581 containerd[1563]: time="2025-07-11T07:42:26.423497151Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:42:26.429491 containerd[1563]: time="2025-07-11T07:42:26.429317681Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:42:26.430041 containerd[1563]: time="2025-07-11T07:42:26.429814218Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 12.807513615s" Jul 11 07:42:26.430041 containerd[1563]: time="2025-07-11T07:42:26.429844575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 11 07:42:26.453714 containerd[1563]: time="2025-07-11T07:42:26.451103370Z" level=info msg="CreateContainer within sandbox \"42732e38ea0463427df7cc0bcf0d6bcffdbec384aee7b52463fe2fe8d1b29396\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 11 07:42:26.481694 containerd[1563]: time="2025-07-11T07:42:26.481642031Z" level=info msg="Container 1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:42:26.484707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2189184625.mount: Deactivated successfully. Jul 11 07:42:26.501554 containerd[1563]: time="2025-07-11T07:42:26.501423228Z" level=info msg="CreateContainer within sandbox \"42732e38ea0463427df7cc0bcf0d6bcffdbec384aee7b52463fe2fe8d1b29396\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160\"" Jul 11 07:42:26.502188 containerd[1563]: time="2025-07-11T07:42:26.502120102Z" level=info msg="StartContainer for \"1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160\"" Jul 11 07:42:26.504435 containerd[1563]: time="2025-07-11T07:42:26.504338328Z" level=info msg="connecting to shim 1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160" address="unix:///run/containerd/s/60c0fd3a759bb70ae6046dcc06a1719d809c34b79172f87e19512126f5b62b26" protocol=ttrpc version=3 Jul 11 07:42:26.592199 systemd[1]: Started cri-containerd-1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160.scope - libcontainer container 1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160. 
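Both RunPodSandbox failures above share a single root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file that the calico/node container writes once it is up, and calico/node had not started yet. The 12.8s image pull of ghcr.io/flatcar/calico/node:v3.30.2 and the StartContainer that follow are what unblock it; sandbox creation starts succeeding further down. A minimal Go sketch of the existence check implied by the error text (standalone illustration, not Calico's actual source):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // calico/node writes this file at startup; until then every CNI ADD/DEL
        // on the node fails with the "no such file or directory" error seen above.
        if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
            fmt.Println("calico/node not ready:", err)
            return
        }
        fmt.Println("nodename present; sandbox networking can proceed")
    }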
Jul 11 07:42:26.676075 containerd[1563]: time="2025-07-11T07:42:26.674821834Z" level=info msg="StartContainer for \"1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160\" returns successfully" Jul 11 07:42:26.863427 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 11 07:42:26.863755 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jul 11 07:42:27.163816 containerd[1563]: time="2025-07-11T07:42:27.163456948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-667bcfd89f-qbsvk,Uid:4abaf656-f2e8-4404-bfd1-0657de6a798a,Namespace:calico-apiserver,Attempt:0,}" Jul 11 07:42:27.166015 containerd[1563]: time="2025-07-11T07:42:27.164774222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7qjzp,Uid:64e1eb3d-fa8f-4852-b1eb-a0c30aa60ee6,Namespace:kube-system,Attempt:0,}" Jul 11 07:42:27.299527 kubelet[2804]: I0711 07:42:27.299475 2804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf924fec-bc5d-4fed-9b11-0ab45a34d4a6-whisker-ca-bundle\") pod \"cf924fec-bc5d-4fed-9b11-0ab45a34d4a6\" (UID: \"cf924fec-bc5d-4fed-9b11-0ab45a34d4a6\") " Jul 11 07:42:27.302027 kubelet[2804]: I0711 07:42:27.301647 2804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbc5x\" (UniqueName: \"kubernetes.io/projected/cf924fec-bc5d-4fed-9b11-0ab45a34d4a6-kube-api-access-hbc5x\") pod \"cf924fec-bc5d-4fed-9b11-0ab45a34d4a6\" (UID: \"cf924fec-bc5d-4fed-9b11-0ab45a34d4a6\") " Jul 11 07:42:27.304125 kubelet[2804]: I0711 07:42:27.304010 2804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cf924fec-bc5d-4fed-9b11-0ab45a34d4a6-whisker-backend-key-pair\") pod \"cf924fec-bc5d-4fed-9b11-0ab45a34d4a6\" (UID: \"cf924fec-bc5d-4fed-9b11-0ab45a34d4a6\") " Jul 11 07:42:27.306229 kubelet[2804]: I0711 07:42:27.301515 2804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf924fec-bc5d-4fed-9b11-0ab45a34d4a6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "cf924fec-bc5d-4fed-9b11-0ab45a34d4a6" (UID: "cf924fec-bc5d-4fed-9b11-0ab45a34d4a6"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 11 07:42:27.306752 kubelet[2804]: I0711 07:42:27.306207 2804 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf924fec-bc5d-4fed-9b11-0ab45a34d4a6-whisker-ca-bundle\") on node \"ci-4392-0-0-n-cdb6f4f5a9.novalocal\" DevicePath \"\"" Jul 11 07:42:27.314021 kubelet[2804]: I0711 07:42:27.313562 2804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf924fec-bc5d-4fed-9b11-0ab45a34d4a6-kube-api-access-hbc5x" (OuterVolumeSpecName: "kube-api-access-hbc5x") pod "cf924fec-bc5d-4fed-9b11-0ab45a34d4a6" (UID: "cf924fec-bc5d-4fed-9b11-0ab45a34d4a6"). InnerVolumeSpecName "kube-api-access-hbc5x".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 11 07:42:27.323089 kubelet[2804]: I0711 07:42:27.323034 2804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf924fec-bc5d-4fed-9b11-0ab45a34d4a6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cf924fec-bc5d-4fed-9b11-0ab45a34d4a6" (UID: "cf924fec-bc5d-4fed-9b11-0ab45a34d4a6"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 11 07:42:27.378076 systemd[1]: var-lib-kubelet-pods-cf924fec\x2dbc5d\x2d4fed\x2d9b11\x2d0ab45a34d4a6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhbc5x.mount: Deactivated successfully. Jul 11 07:42:27.378187 systemd[1]: var-lib-kubelet-pods-cf924fec\x2dbc5d\x2d4fed\x2d9b11\x2d0ab45a34d4a6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 11 07:42:27.410168 kubelet[2804]: I0711 07:42:27.410096 2804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbc5x\" (UniqueName: \"kubernetes.io/projected/cf924fec-bc5d-4fed-9b11-0ab45a34d4a6-kube-api-access-hbc5x\") on node \"ci-4392-0-0-n-cdb6f4f5a9.novalocal\" DevicePath \"\"" Jul 11 07:42:27.410168 kubelet[2804]: I0711 07:42:27.410135 2804 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cf924fec-bc5d-4fed-9b11-0ab45a34d4a6-whisker-backend-key-pair\") on node \"ci-4392-0-0-n-cdb6f4f5a9.novalocal\" DevicePath \"\"" Jul 11 07:42:27.694823 systemd[1]: Removed slice kubepods-besteffort-podcf924fec_bc5d_4fed_9b11_0ab45a34d4a6.slice - libcontainer container kubepods-besteffort-podcf924fec_bc5d_4fed_9b11_0ab45a34d4a6.slice. Jul 11 07:42:27.746479 kubelet[2804]: I0711 07:42:27.746120 2804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-m2tf8" podStartSLOduration=2.816435733 podStartE2EDuration="36.746022906s" podCreationTimestamp="2025-07-11 07:41:51 +0000 UTC" firstStartedPulling="2025-07-11 07:41:52.502131929 +0000 UTC m=+23.735061005" lastFinishedPulling="2025-07-11 07:42:26.431719102 +0000 UTC m=+57.664648178" observedRunningTime="2025-07-11 07:42:27.744336074 +0000 UTC m=+58.977265180" watchObservedRunningTime="2025-07-11 07:42:27.746022906 +0000 UTC m=+58.978951982" Jul 11 07:42:27.889707 systemd-networkd[1457]: cali7661592b523: Link UP Jul 11 07:42:27.893100 systemd-networkd[1457]: cali7661592b523: Gained carrier Jul 11 07:42:27.991214 containerd[1563]: 2025-07-11 07:42:27.260 [INFO][3943] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 07:42:27.991214 containerd[1563]: 2025-07-11 07:42:27.358 [INFO][3943] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--7qjzp-eth0 coredns-7c65d6cfc9- kube-system 64e1eb3d-fa8f-4852-b1eb-a0c30aa60ee6 822 0 2025-07-11 07:41:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4392-0-0-n-cdb6f4f5a9.novalocal coredns-7c65d6cfc9-7qjzp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7661592b523 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7qjzp" 
WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--7qjzp-" Jul 11 07:42:27.991214 containerd[1563]: 2025-07-11 07:42:27.358 [INFO][3943] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7qjzp" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--7qjzp-eth0" Jul 11 07:42:27.991214 containerd[1563]: 2025-07-11 07:42:27.471 [INFO][3970] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101" HandleID="k8s-pod-network.d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101" Workload="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--7qjzp-eth0" Jul 11 07:42:27.994745 containerd[1563]: 2025-07-11 07:42:27.472 [INFO][3970] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101" HandleID="k8s-pod-network.d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101" Workload="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--7qjzp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c42b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4392-0-0-n-cdb6f4f5a9.novalocal", "pod":"coredns-7c65d6cfc9-7qjzp", "timestamp":"2025-07-11 07:42:27.470992267 +0000 UTC"}, Hostname:"ci-4392-0-0-n-cdb6f4f5a9.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 07:42:27.994745 containerd[1563]: 2025-07-11 07:42:27.472 [INFO][3970] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 07:42:27.994745 containerd[1563]: 2025-07-11 07:42:27.474 [INFO][3970] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 07:42:27.994745 containerd[1563]: 2025-07-11 07:42:27.474 [INFO][3970] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4392-0-0-n-cdb6f4f5a9.novalocal' Jul 11 07:42:27.994745 containerd[1563]: 2025-07-11 07:42:27.498 [INFO][3970] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:27.994745 containerd[1563]: 2025-07-11 07:42:27.520 [INFO][3970] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:27.994745 containerd[1563]: 2025-07-11 07:42:27.538 [INFO][3970] ipam/ipam.go 543: Ran out of existing affine blocks for host host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:27.994745 containerd[1563]: 2025-07-11 07:42:27.542 [INFO][3970] ipam/ipam.go 560: Tried all affine blocks. 
Looking for an affine block with space, or a new unclaimed block host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:27.994745 containerd[1563]: 2025-07-11 07:42:27.546 [INFO][3970] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.84.64/26 Jul 11 07:42:27.996181 containerd[1563]: 2025-07-11 07:42:27.546 [INFO][3970] ipam/ipam.go 572: Found unclaimed block host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" subnet=192.168.84.64/26 Jul 11 07:42:27.996181 containerd[1563]: 2025-07-11 07:42:27.546 [INFO][3970] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" subnet=192.168.84.64/26 Jul 11 07:42:27.996181 containerd[1563]: 2025-07-11 07:42:27.610 [INFO][3970] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" subnet=192.168.84.64/26 Jul 11 07:42:27.996181 containerd[1563]: 2025-07-11 07:42:27.610 [INFO][3970] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.64/26 host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:27.996181 containerd[1563]: 2025-07-11 07:42:27.613 [INFO][3970] ipam/ipam.go 163: The referenced block doesn't exist, trying to create it cidr=192.168.84.64/26 host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:27.996181 containerd[1563]: 2025-07-11 07:42:27.617 [INFO][3970] ipam/ipam.go 170: Wrote affinity as pending cidr=192.168.84.64/26 host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:27.996181 containerd[1563]: 2025-07-11 07:42:27.620 [INFO][3970] ipam/ipam.go 179: Attempting to claim the block cidr=192.168.84.64/26 host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:27.996181 containerd[1563]: 2025-07-11 07:42:27.620 [INFO][3970] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" subnet=192.168.84.64/26 Jul 11 07:42:27.996181 containerd[1563]: 2025-07-11 07:42:27.709 [INFO][3970] ipam/ipam_block_reader_writer.go 267: Successfully created block Jul 11 07:42:27.996181 containerd[1563]: 2025-07-11 07:42:27.709 [INFO][3970] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" subnet=192.168.84.64/26 Jul 11 07:42:27.996181 containerd[1563]: 2025-07-11 07:42:27.732 [INFO][3970] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" subnet=192.168.84.64/26 Jul 11 07:42:27.996181 containerd[1563]: 2025-07-11 07:42:27.732 [INFO][3970] ipam/ipam.go 607: Block '192.168.84.64/26' has 64 free ips which is more than 1 ips required. 
host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" subnet=192.168.84.64/26 Jul 11 07:42:27.997362 containerd[1563]: 2025-07-11 07:42:27.732 [INFO][3970] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.84.64/26 handle="k8s-pod-network.d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:27.997362 containerd[1563]: 2025-07-11 07:42:27.751 [INFO][3970] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101 Jul 11 07:42:27.997362 containerd[1563]: 2025-07-11 07:42:27.773 [INFO][3970] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.84.64/26 handle="k8s-pod-network.d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:27.997362 containerd[1563]: 2025-07-11 07:42:27.812 [INFO][3970] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.84.64/26] block=192.168.84.64/26 handle="k8s-pod-network.d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:27.997362 containerd[1563]: 2025-07-11 07:42:27.817 [INFO][3970] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.64/26] handle="k8s-pod-network.d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:27.997362 containerd[1563]: 2025-07-11 07:42:27.819 [INFO][3970] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 07:42:27.997362 containerd[1563]: 2025-07-11 07:42:27.819 [INFO][3970] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.84.64/26] IPv6=[] ContainerID="d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101" HandleID="k8s-pod-network.d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101" Workload="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--7qjzp-eth0" Jul 11 07:42:27.998291 containerd[1563]: 2025-07-11 07:42:27.840 [INFO][3943] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7qjzp" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--7qjzp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--7qjzp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"64e1eb3d-fa8f-4852-b1eb-a0c30aa60ee6", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 41, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-cdb6f4f5a9.novalocal", ContainerID:"", Pod:"coredns-7c65d6cfc9-7qjzp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.64/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali7661592b523", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:42:27.998291 containerd[1563]: 2025-07-11 07:42:27.841 [INFO][3943] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.64/32] ContainerID="d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7qjzp" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--7qjzp-eth0" Jul 11 07:42:27.998291 containerd[1563]: 2025-07-11 07:42:27.841 [INFO][3943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7661592b523 ContainerID="d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7qjzp" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--7qjzp-eth0" Jul 11 07:42:27.998291 containerd[1563]: 2025-07-11 07:42:27.894 [INFO][3943] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7qjzp" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--7qjzp-eth0" Jul 11 07:42:27.998291 containerd[1563]: 2025-07-11 07:42:27.896 [INFO][3943] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7qjzp" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--7qjzp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--7qjzp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"64e1eb3d-fa8f-4852-b1eb-a0c30aa60ee6", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 41, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-cdb6f4f5a9.novalocal", ContainerID:"d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101", Pod:"coredns-7c65d6cfc9-7qjzp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.64/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7661592b523", MAC:"26:2c:ca:fb:8d:f1", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:42:27.998291 containerd[1563]: 2025-07-11 07:42:27.981 [INFO][3943] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7qjzp" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--7qjzp-eth0" Jul 11 07:42:28.075052 systemd[1]: Created slice kubepods-besteffort-pod773f7a5a_1981_4bf1_999d_7e5476b2651d.slice - libcontainer container kubepods-besteffort-pod773f7a5a_1981_4bf1_999d_7e5476b2651d.slice. Jul 11 07:42:28.115790 systemd-networkd[1457]: cali29e05b8ad3b: Link UP Jul 11 07:42:28.118903 systemd-networkd[1457]: cali29e05b8ad3b: Gained carrier Jul 11 07:42:28.154708 containerd[1563]: time="2025-07-11T07:42:28.154646009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8644849955-pzffc,Uid:40383d9a-5fd3-45e2-be69-48ac62030be0,Namespace:calico-system,Attempt:0,}" Jul 11 07:42:28.157678 containerd[1563]: time="2025-07-11T07:42:28.156256065Z" level=info msg="connecting to shim d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101" address="unix:///run/containerd/s/a79215fc268dd990d64f334797fed87e62be89826efde644c2842895498bae88" namespace=k8s.io protocol=ttrpc version=3 Jul 11 07:42:28.158679 containerd[1563]: time="2025-07-11T07:42:28.157148258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-667bcfd89f-2krz4,Uid:c1f33f55-a860-486e-bf1a-b91510a46c1d,Namespace:calico-apiserver,Attempt:0,}" Jul 11 07:42:28.199864 containerd[1563]: 2025-07-11 07:42:27.259 [INFO][3942] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 07:42:28.199864 containerd[1563]: 2025-07-11 07:42:27.357 [INFO][3942] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--qbsvk-eth0 calico-apiserver-667bcfd89f- calico-apiserver 4abaf656-f2e8-4404-bfd1-0657de6a798a 832 0 2025-07-11 07:41:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:667bcfd89f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4392-0-0-n-cdb6f4f5a9.novalocal calico-apiserver-667bcfd89f-qbsvk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali29e05b8ad3b [] [] }} ContainerID="606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2" Namespace="calico-apiserver" Pod="calico-apiserver-667bcfd89f-qbsvk" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--qbsvk-" Jul 11 07:42:28.199864 containerd[1563]: 2025-07-11 07:42:27.359 [INFO][3942] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2" 
Namespace="calico-apiserver" Pod="calico-apiserver-667bcfd89f-qbsvk" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--qbsvk-eth0" Jul 11 07:42:28.199864 containerd[1563]: 2025-07-11 07:42:27.482 [INFO][3968] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2" HandleID="k8s-pod-network.606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2" Workload="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--qbsvk-eth0" Jul 11 07:42:28.199864 containerd[1563]: 2025-07-11 07:42:27.486 [INFO][3968] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2" HandleID="k8s-pod-network.606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2" Workload="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--qbsvk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f320), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4392-0-0-n-cdb6f4f5a9.novalocal", "pod":"calico-apiserver-667bcfd89f-qbsvk", "timestamp":"2025-07-11 07:42:27.482613177 +0000 UTC"}, Hostname:"ci-4392-0-0-n-cdb6f4f5a9.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 07:42:28.199864 containerd[1563]: 2025-07-11 07:42:27.487 [INFO][3968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 07:42:28.199864 containerd[1563]: 2025-07-11 07:42:27.823 [INFO][3968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 07:42:28.199864 containerd[1563]: 2025-07-11 07:42:27.823 [INFO][3968] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4392-0-0-n-cdb6f4f5a9.novalocal' Jul 11 07:42:28.199864 containerd[1563]: 2025-07-11 07:42:27.875 [INFO][3968] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.199864 containerd[1563]: 2025-07-11 07:42:27.936 [INFO][3968] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.199864 containerd[1563]: 2025-07-11 07:42:27.987 [INFO][3968] ipam/ipam.go 511: Trying affinity for 192.168.84.64/26 host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.199864 containerd[1563]: 2025-07-11 07:42:28.016 [INFO][3968] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.64/26 host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.199864 containerd[1563]: 2025-07-11 07:42:28.031 [INFO][3968] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.64/26 host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.199864 containerd[1563]: 2025-07-11 07:42:28.031 [INFO][3968] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.84.64/26 handle="k8s-pod-network.606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.199864 containerd[1563]: 2025-07-11 07:42:28.045 [INFO][3968] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2 Jul 11 07:42:28.199864 containerd[1563]: 2025-07-11 07:42:28.082 [INFO][3968] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.84.64/26 handle="k8s-pod-network.606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.199864 containerd[1563]: 2025-07-11 07:42:28.099 [INFO][3968] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.84.65/26] block=192.168.84.64/26 handle="k8s-pod-network.606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.199864 containerd[1563]: 2025-07-11 07:42:28.099 [INFO][3968] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.65/26] handle="k8s-pod-network.606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.199864 containerd[1563]: 2025-07-11 07:42:28.100 [INFO][3968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
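Note the two allocation paths: the first request had to claim a fresh block for this node ("Ran out of existing affine blocks" ... "Found free block: 192.168.84.64/26"), whereas this one takes the fast path ("Trying affinity for 192.168.84.64/26" ... "Affinity is confirmed and block has been loaded") and draws .65 from the block the node already owns. The block size follows directly from the prefix length; a quick check:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // A /26 leaves 32-26 = 6 host bits, i.e. 2^6 addresses, matching the
        // earlier "Block '192.168.84.64/26' has 64 free ips" message.
        _, block, err := net.ParseCIDR("192.168.84.64/26")
        if err != nil {
            panic(err)
        }
        ones, bits := block.Mask.Size()
        fmt.Printf("%s holds %d addresses\n", block, 1<<uint(bits-ones))
    }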
Jul 11 07:42:28.199864 containerd[1563]: 2025-07-11 07:42:28.101 [INFO][3968] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.84.65/26] IPv6=[] ContainerID="606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2" HandleID="k8s-pod-network.606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2" Workload="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--qbsvk-eth0" Jul 11 07:42:28.202089 containerd[1563]: 2025-07-11 07:42:28.107 [INFO][3942] cni-plugin/k8s.go 418: Populated endpoint ContainerID="606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2" Namespace="calico-apiserver" Pod="calico-apiserver-667bcfd89f-qbsvk" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--qbsvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--qbsvk-eth0", GenerateName:"calico-apiserver-667bcfd89f-", Namespace:"calico-apiserver", SelfLink:"", UID:"4abaf656-f2e8-4404-bfd1-0657de6a798a", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 41, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"667bcfd89f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-cdb6f4f5a9.novalocal", ContainerID:"", Pod:"calico-apiserver-667bcfd89f-qbsvk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29e05b8ad3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:42:28.202089 containerd[1563]: 2025-07-11 07:42:28.108 [INFO][3942] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.65/32] ContainerID="606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2" Namespace="calico-apiserver" Pod="calico-apiserver-667bcfd89f-qbsvk" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--qbsvk-eth0" Jul 11 07:42:28.202089 containerd[1563]: 2025-07-11 07:42:28.108 [INFO][3942] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29e05b8ad3b ContainerID="606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2" Namespace="calico-apiserver" Pod="calico-apiserver-667bcfd89f-qbsvk" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--qbsvk-eth0" Jul 11 07:42:28.202089 containerd[1563]: 2025-07-11 07:42:28.119 [INFO][3942] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2" Namespace="calico-apiserver" Pod="calico-apiserver-667bcfd89f-qbsvk"
WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--qbsvk-eth0" Jul 11 07:42:28.202089 containerd[1563]: 2025-07-11 07:42:28.121 [INFO][3942] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2" Namespace="calico-apiserver" Pod="calico-apiserver-667bcfd89f-qbsvk" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--qbsvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--qbsvk-eth0", GenerateName:"calico-apiserver-667bcfd89f-", Namespace:"calico-apiserver", SelfLink:"", UID:"4abaf656-f2e8-4404-bfd1-0657de6a798a", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 41, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"667bcfd89f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-cdb6f4f5a9.novalocal", ContainerID:"606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2", Pod:"calico-apiserver-667bcfd89f-qbsvk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29e05b8ad3b", MAC:"8a:b2:67:02:bd:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:42:28.202089 containerd[1563]: 2025-07-11 07:42:28.170 [INFO][3942] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2" Namespace="calico-apiserver" Pod="calico-apiserver-667bcfd89f-qbsvk" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--qbsvk-eth0" Jul 11 07:42:28.220646 kubelet[2804]: I0711 07:42:28.220383 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwg92\" (UniqueName: \"kubernetes.io/projected/773f7a5a-1981-4bf1-999d-7e5476b2651d-kube-api-access-nwg92\") pod \"whisker-b74959b8d-5c8n7\" (UID: \"773f7a5a-1981-4bf1-999d-7e5476b2651d\") " pod="calico-system/whisker-b74959b8d-5c8n7" Jul 11 07:42:28.220646 kubelet[2804]: I0711 07:42:28.220581 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/773f7a5a-1981-4bf1-999d-7e5476b2651d-whisker-backend-key-pair\") pod \"whisker-b74959b8d-5c8n7\" (UID: \"773f7a5a-1981-4bf1-999d-7e5476b2651d\") " pod="calico-system/whisker-b74959b8d-5c8n7" Jul 11 07:42:28.221174 kubelet[2804]: I0711 07:42:28.220614 2804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/773f7a5a-1981-4bf1-999d-7e5476b2651d-whisker-ca-bundle\") pod \"whisker-b74959b8d-5c8n7\" (UID: \"773f7a5a-1981-4bf1-999d-7e5476b2651d\") " pod="calico-system/whisker-b74959b8d-5c8n7" Jul 11 07:42:28.284608 systemd[1]: Started cri-containerd-d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101.scope - libcontainer container d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101. Jul 11 07:42:28.536804 containerd[1563]: time="2025-07-11T07:42:28.535685422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7qjzp,Uid:64e1eb3d-fa8f-4852-b1eb-a0c30aa60ee6,Namespace:kube-system,Attempt:0,} returns sandbox id \"d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101\"" Jul 11 07:42:28.552556 containerd[1563]: time="2025-07-11T07:42:28.551774055Z" level=info msg="CreateContainer within sandbox \"d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 07:42:28.579026 containerd[1563]: time="2025-07-11T07:42:28.577189386Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160\" id:\"f0111e863d2ec8469c8d18c573be85f336f29e3b60c59b19b60fbbf961adf84a\" pid:3993 exit_status:1 exited_at:{seconds:1752219748 nanos:576227793}" Jul 11 07:42:28.595055 containerd[1563]: time="2025-07-11T07:42:28.594819557Z" level=info msg="connecting to shim 606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2" address="unix:///run/containerd/s/226195f79f9367007d54237241d1ac09520d9874fd77a6e8c2a910968757a3b0" namespace=k8s.io protocol=ttrpc version=3 Jul 11 07:42:28.669308 systemd[1]: Started cri-containerd-606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2.scope - libcontainer container 606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2. 
Jul 11 07:42:28.674383 containerd[1563]: time="2025-07-11T07:42:28.672072111Z" level=info msg="Container daa90dbcd291ae83a95004f5d737a8692466331904effcc61fea44e632e6ff4f: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:42:28.680568 containerd[1563]: time="2025-07-11T07:42:28.680459419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b74959b8d-5c8n7,Uid:773f7a5a-1981-4bf1-999d-7e5476b2651d,Namespace:calico-system,Attempt:0,}" Jul 11 07:42:28.708921 containerd[1563]: time="2025-07-11T07:42:28.707898859Z" level=info msg="CreateContainer within sandbox \"d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"daa90dbcd291ae83a95004f5d737a8692466331904effcc61fea44e632e6ff4f\"" Jul 11 07:42:28.715709 containerd[1563]: time="2025-07-11T07:42:28.715537655Z" level=info msg="StartContainer for \"daa90dbcd291ae83a95004f5d737a8692466331904effcc61fea44e632e6ff4f\"" Jul 11 07:42:28.724878 containerd[1563]: time="2025-07-11T07:42:28.723644354Z" level=info msg="connecting to shim daa90dbcd291ae83a95004f5d737a8692466331904effcc61fea44e632e6ff4f" address="unix:///run/containerd/s/a79215fc268dd990d64f334797fed87e62be89826efde644c2842895498bae88" protocol=ttrpc version=3 Jul 11 07:42:28.752607 systemd-networkd[1457]: cali83bdd96ef02: Link UP Jul 11 07:42:28.752855 systemd-networkd[1457]: cali83bdd96ef02: Gained carrier Jul 11 07:42:28.791658 containerd[1563]: 2025-07-11 07:42:28.412 [INFO][4030] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 07:42:28.791658 containerd[1563]: 2025-07-11 07:42:28.465 [INFO][4030] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--kube--controllers--8644849955--pzffc-eth0 calico-kube-controllers-8644849955- calico-system 40383d9a-5fd3-45e2-be69-48ac62030be0 826 0 2025-07-11 07:41:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8644849955 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4392-0-0-n-cdb6f4f5a9.novalocal calico-kube-controllers-8644849955-pzffc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali83bdd96ef02 [] [] }} ContainerID="90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f" Namespace="calico-system" Pod="calico-kube-controllers-8644849955-pzffc" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--kube--controllers--8644849955--pzffc-" Jul 11 07:42:28.791658 containerd[1563]: 2025-07-11 07:42:28.483 [INFO][4030] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f" Namespace="calico-system" Pod="calico-kube-controllers-8644849955-pzffc" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--kube--controllers--8644849955--pzffc-eth0" Jul 11 07:42:28.791658 containerd[1563]: 2025-07-11 07:42:28.587 [INFO][4093] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f" HandleID="k8s-pod-network.90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f" Workload="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--kube--controllers--8644849955--pzffc-eth0" Jul 11 07:42:28.791658 
containerd[1563]: 2025-07-11 07:42:28.590 [INFO][4093] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f" HandleID="k8s-pod-network.90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f" Workload="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--kube--controllers--8644849955--pzffc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4392-0-0-n-cdb6f4f5a9.novalocal", "pod":"calico-kube-controllers-8644849955-pzffc", "timestamp":"2025-07-11 07:42:28.587301177 +0000 UTC"}, Hostname:"ci-4392-0-0-n-cdb6f4f5a9.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 07:42:28.791658 containerd[1563]: 2025-07-11 07:42:28.590 [INFO][4093] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 07:42:28.791658 containerd[1563]: 2025-07-11 07:42:28.590 [INFO][4093] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 07:42:28.791658 containerd[1563]: 2025-07-11 07:42:28.590 [INFO][4093] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4392-0-0-n-cdb6f4f5a9.novalocal' Jul 11 07:42:28.791658 containerd[1563]: 2025-07-11 07:42:28.605 [INFO][4093] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.791658 containerd[1563]: 2025-07-11 07:42:28.634 [INFO][4093] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.791658 containerd[1563]: 2025-07-11 07:42:28.648 [INFO][4093] ipam/ipam.go 511: Trying affinity for 192.168.84.64/26 host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.791658 containerd[1563]: 2025-07-11 07:42:28.664 [INFO][4093] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.64/26 host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.791658 containerd[1563]: 2025-07-11 07:42:28.674 [INFO][4093] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.64/26 host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.791658 containerd[1563]: 2025-07-11 07:42:28.674 [INFO][4093] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.84.64/26 handle="k8s-pod-network.90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.791658 containerd[1563]: 2025-07-11 07:42:28.681 [INFO][4093] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f Jul 11 07:42:28.791658 containerd[1563]: 2025-07-11 07:42:28.711 [INFO][4093] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.84.64/26 handle="k8s-pod-network.90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.791658 containerd[1563]: 2025-07-11 07:42:28.726 [INFO][4093] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.84.67/26] block=192.168.84.64/26 handle="k8s-pod-network.90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.791658 containerd[1563]: 2025-07-11 07:42:28.727 [INFO][4093] ipam/ipam.go 878: 
Auto-assigned 1 out of 1 IPv4s: [192.168.84.67/26] handle="k8s-pod-network.90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.791658 containerd[1563]: 2025-07-11 07:42:28.727 [INFO][4093] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 07:42:28.791658 containerd[1563]: 2025-07-11 07:42:28.728 [INFO][4093] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.84.67/26] IPv6=[] ContainerID="90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f" HandleID="k8s-pod-network.90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f" Workload="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--kube--controllers--8644849955--pzffc-eth0" Jul 11 07:42:28.793781 containerd[1563]: 2025-07-11 07:42:28.743 [INFO][4030] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f" Namespace="calico-system" Pod="calico-kube-controllers-8644849955-pzffc" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--kube--controllers--8644849955--pzffc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--kube--controllers--8644849955--pzffc-eth0", GenerateName:"calico-kube-controllers-8644849955-", Namespace:"calico-system", SelfLink:"", UID:"40383d9a-5fd3-45e2-be69-48ac62030be0", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 41, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8644849955", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-cdb6f4f5a9.novalocal", ContainerID:"", Pod:"calico-kube-controllers-8644849955-pzffc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.84.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali83bdd96ef02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:42:28.793781 containerd[1563]: 2025-07-11 07:42:28.743 [INFO][4030] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.67/32] ContainerID="90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f" Namespace="calico-system" Pod="calico-kube-controllers-8644849955-pzffc" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--kube--controllers--8644849955--pzffc-eth0" Jul 11 07:42:28.793781 containerd[1563]: 2025-07-11 07:42:28.743 [INFO][4030] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83bdd96ef02 ContainerID="90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f" Namespace="calico-system" Pod="calico-kube-controllers-8644849955-pzffc" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--kube--controllers--8644849955--pzffc-eth0" Jul
11 07:42:28.793781 containerd[1563]: 2025-07-11 07:42:28.748 [INFO][4030] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f" Namespace="calico-system" Pod="calico-kube-controllers-8644849955-pzffc" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--kube--controllers--8644849955--pzffc-eth0" Jul 11 07:42:28.793781 containerd[1563]: 2025-07-11 07:42:28.754 [INFO][4030] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f" Namespace="calico-system" Pod="calico-kube-controllers-8644849955-pzffc" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--kube--controllers--8644849955--pzffc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--kube--controllers--8644849955--pzffc-eth0", GenerateName:"calico-kube-controllers-8644849955-", Namespace:"calico-system", SelfLink:"", UID:"40383d9a-5fd3-45e2-be69-48ac62030be0", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 41, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8644849955", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-cdb6f4f5a9.novalocal", ContainerID:"90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f", Pod:"calico-kube-controllers-8644849955-pzffc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.84.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali83bdd96ef02", MAC:"ba:60:ac:6a:e2:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:42:28.793781 containerd[1563]: 2025-07-11 07:42:28.783 [INFO][4030] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f" Namespace="calico-system" Pod="calico-kube-controllers-8644849955-pzffc" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--kube--controllers--8644849955--pzffc-eth0" Jul 11 07:42:28.801289 systemd[1]: Started cri-containerd-daa90dbcd291ae83a95004f5d737a8692466331904effcc61fea44e632e6ff4f.scope - libcontainer container daa90dbcd291ae83a95004f5d737a8692466331904effcc61fea44e632e6ff4f.
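On the host side, each new sandbox shows up to systemd-networkd as a cali* veth ("Link UP", "Gained carrier"). The 15-character names (cali plus 11 hex digits) fit within Linux's interface-name limit of 15 characters, with the suffix derived from a hash. The sketch below is a hypothetical reconstruction of that naming scheme and is not verified against the interfaces in this log:

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    func main() {
        // Assumption: suffix = first 11 hex chars of a SHA-1 over the workload
        // identity. The exact input Calico hashes is not shown in this log.
        h := sha1.Sum([]byte("kube-system.coredns-7c65d6cfc9-7qjzp"))
        fmt.Println("cali" + hex.EncodeToString(h[:])[:11])
    }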
Jul 11 07:42:28.886501 containerd[1563]: time="2025-07-11T07:42:28.886021550Z" level=info msg="connecting to shim 90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f" address="unix:///run/containerd/s/eaef16410ebcc86d718923e6bfe3fb64b183e837f5ccaa4718fa04f05ef9a95b" namespace=k8s.io protocol=ttrpc version=3 Jul 11 07:42:28.889945 systemd-networkd[1457]: cali359e7353f09: Link UP Jul 11 07:42:28.894172 systemd-networkd[1457]: cali359e7353f09: Gained carrier Jul 11 07:42:28.939549 containerd[1563]: 2025-07-11 07:42:28.368 [INFO][4060] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 07:42:28.939549 containerd[1563]: 2025-07-11 07:42:28.468 [INFO][4060] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--2krz4-eth0 calico-apiserver-667bcfd89f- calico-apiserver c1f33f55-a860-486e-bf1a-b91510a46c1d 833 0 2025-07-11 07:41:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:667bcfd89f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4392-0-0-n-cdb6f4f5a9.novalocal calico-apiserver-667bcfd89f-2krz4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali359e7353f09 [] [] }} ContainerID="5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc" Namespace="calico-apiserver" Pod="calico-apiserver-667bcfd89f-2krz4" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--2krz4-" Jul 11 07:42:28.939549 containerd[1563]: 2025-07-11 07:42:28.482 [INFO][4060] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc" Namespace="calico-apiserver" Pod="calico-apiserver-667bcfd89f-2krz4" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--2krz4-eth0" Jul 11 07:42:28.939549 containerd[1563]: 2025-07-11 07:42:28.654 [INFO][4092] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc" HandleID="k8s-pod-network.5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc" Workload="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--2krz4-eth0" Jul 11 07:42:28.939549 containerd[1563]: 2025-07-11 07:42:28.657 [INFO][4092] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc" HandleID="k8s-pod-network.5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc" Workload="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--2krz4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000274f20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4392-0-0-n-cdb6f4f5a9.novalocal", "pod":"calico-apiserver-667bcfd89f-2krz4", "timestamp":"2025-07-11 07:42:28.654388972 +0000 UTC"}, Hostname:"ci-4392-0-0-n-cdb6f4f5a9.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 07:42:28.939549 containerd[1563]: 2025-07-11 07:42:28.657 [INFO][4092] ipam/ipam_plugin.go 353: About to acquire host-wide 
IPAM lock. Jul 11 07:42:28.939549 containerd[1563]: 2025-07-11 07:42:28.727 [INFO][4092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 07:42:28.939549 containerd[1563]: 2025-07-11 07:42:28.729 [INFO][4092] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4392-0-0-n-cdb6f4f5a9.novalocal' Jul 11 07:42:28.939549 containerd[1563]: 2025-07-11 07:42:28.745 [INFO][4092] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.939549 containerd[1563]: 2025-07-11 07:42:28.779 [INFO][4092] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.939549 containerd[1563]: 2025-07-11 07:42:28.798 [INFO][4092] ipam/ipam.go 511: Trying affinity for 192.168.84.64/26 host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.939549 containerd[1563]: 2025-07-11 07:42:28.806 [INFO][4092] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.64/26 host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.939549 containerd[1563]: 2025-07-11 07:42:28.812 [INFO][4092] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.64/26 host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.939549 containerd[1563]: 2025-07-11 07:42:28.813 [INFO][4092] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.84.64/26 handle="k8s-pod-network.5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.939549 containerd[1563]: 2025-07-11 07:42:28.817 [INFO][4092] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc Jul 11 07:42:28.939549 containerd[1563]: 2025-07-11 07:42:28.841 [INFO][4092] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.84.64/26 handle="k8s-pod-network.5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.939549 containerd[1563]: 2025-07-11 07:42:28.862 [INFO][4092] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.84.68/26] block=192.168.84.64/26 handle="k8s-pod-network.5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.939549 containerd[1563]: 2025-07-11 07:42:28.863 [INFO][4092] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.68/26] handle="k8s-pod-network.5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:28.939549 containerd[1563]: 2025-07-11 07:42:28.863 [INFO][4092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
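The [INFO][4092] ipam lines above trace Calico's block-affinity assignment end to end: take the host-wide IPAM lock, confirm this host's affinity to the 192.168.84.64/26 block, scan it for a free address, write the block back to claim the IP, then release the lock. The sketch below is a minimal illustration of that flow under those assumptions; the types and function names are hypothetical, not Calico's actual ipam package.

```go
// Minimal sketch of the block-affinity IPAM flow traced in the log:
// lock -> load affine block -> claim first free IP -> persist -> unlock.
package main

import (
	"fmt"
	"net"
	"sync"
)

type block struct {
	cidr      *net.IPNet
	allocated map[string]string // IP -> allocation handle
}

var (
	hostLock sync.Mutex            // stands in for the "host-wide IPAM lock"
	blocks   = map[string]*block{} // blocks this host holds an affinity for
)

func assignIP(handle, blockCIDR string) (net.IP, error) {
	hostLock.Lock()         // "About to acquire host-wide IPAM lock."
	defer hostLock.Unlock() // "Released host-wide IPAM lock."

	blk, ok := blocks[blockCIDR]
	if !ok {
		return nil, fmt.Errorf("no affinity for block %s", blockCIDR)
	}
	// Scan for the first unallocated address (a real IPAM also skips the
	// network/broadcast addresses and journals the claim transactionally).
	for ip := blk.cidr.IP.Mask(blk.cidr.Mask); blk.cidr.Contains(ip); ip = next(ip) {
		if _, used := blk.allocated[ip.String()]; !used {
			blk.allocated[ip.String()] = handle // "Writing block in order to claim IPs"
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", blockCIDR)
}

func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		if out[i]++; out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.84.64/26")
	blocks["192.168.84.64/26"] = &block{cidr: cidr, allocated: map[string]string{
		"192.168.84.64": "reserved", "192.168.84.65": "pod-a",
		"192.168.84.66": "pod-b", "192.168.84.67": "calico-kube-controllers",
	}}
	ip, err := assignIP("k8s-pod-network.5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc", "192.168.84.64/26")
	fmt.Println(ip, err) // 192.168.84.68 <nil>, matching the claim logged above
}
```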
Jul 11 07:42:28.939549 containerd[1563]: 2025-07-11 07:42:28.863 [INFO][4092] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.84.68/26] IPv6=[] ContainerID="5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc" HandleID="k8s-pod-network.5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc" Workload="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--2krz4-eth0" Jul 11 07:42:28.940600 containerd[1563]: 2025-07-11 07:42:28.876 [INFO][4060] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc" Namespace="calico-apiserver" Pod="calico-apiserver-667bcfd89f-2krz4" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--2krz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--2krz4-eth0", GenerateName:"calico-apiserver-667bcfd89f-", Namespace:"calico-apiserver", SelfLink:"", UID:"c1f33f55-a860-486e-bf1a-b91510a46c1d", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 41, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"667bcfd89f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-cdb6f4f5a9.novalocal", ContainerID:"", Pod:"calico-apiserver-667bcfd89f-2krz4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali359e7353f09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:42:28.940600 containerd[1563]: 2025-07-11 07:42:28.877 [INFO][4060] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.68/32] ContainerID="5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc" Namespace="calico-apiserver" Pod="calico-apiserver-667bcfd89f-2krz4" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--2krz4-eth0" Jul 11 07:42:28.940600 containerd[1563]: 2025-07-11 07:42:28.877 [INFO][4060] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali359e7353f09 ContainerID="5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc" Namespace="calico-apiserver" Pod="calico-apiserver-667bcfd89f-2krz4" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--2krz4-eth0" Jul 11 07:42:28.940600 containerd[1563]: 2025-07-11 07:42:28.902 [INFO][4060] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc" Namespace="calico-apiserver" Pod="calico-apiserver-667bcfd89f-2krz4" 
WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--2krz4-eth0" Jul 11 07:42:28.940600 containerd[1563]: 2025-07-11 07:42:28.904 [INFO][4060] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc" Namespace="calico-apiserver" Pod="calico-apiserver-667bcfd89f-2krz4" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--2krz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--2krz4-eth0", GenerateName:"calico-apiserver-667bcfd89f-", Namespace:"calico-apiserver", SelfLink:"", UID:"c1f33f55-a860-486e-bf1a-b91510a46c1d", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 41, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"667bcfd89f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-cdb6f4f5a9.novalocal", ContainerID:"5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc", Pod:"calico-apiserver-667bcfd89f-2krz4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali359e7353f09", MAC:"4a:bc:17:8e:5d:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:42:28.940600 containerd[1563]: 2025-07-11 07:42:28.933 [INFO][4060] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc" Namespace="calico-apiserver" Pod="calico-apiserver-667bcfd89f-2krz4" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-calico--apiserver--667bcfd89f--2krz4-eth0" Jul 11 07:42:28.978076 containerd[1563]: time="2025-07-11T07:42:28.977815600Z" level=info msg="StartContainer for \"daa90dbcd291ae83a95004f5d737a8692466331904effcc61fea44e632e6ff4f\" returns successfully" Jul 11 07:42:28.995889 containerd[1563]: time="2025-07-11T07:42:28.995588459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-667bcfd89f-qbsvk,Uid:4abaf656-f2e8-4404-bfd1-0657de6a798a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2\"" Jul 11 07:42:29.000999 containerd[1563]: time="2025-07-11T07:42:29.000451921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 07:42:29.032435 systemd[1]: Started cri-containerd-90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f.scope - libcontainer container 90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f. 
Jul 11 07:42:29.065815 containerd[1563]: time="2025-07-11T07:42:29.065519709Z" level=info msg="connecting to shim 5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc" address="unix:///run/containerd/s/158e4d434add41bf5846dc122713d9c809826a17586c2142ea7bdc9bbb08c3b9" namespace=k8s.io protocol=ttrpc version=3 Jul 11 07:42:29.169910 containerd[1563]: time="2025-07-11T07:42:29.169018747Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160\" id:\"8e9d033510309a692e2fd11d3038fdb187030e8780409c5fa36061b259a286ff\" pid:4195 exit_status:1 exited_at:{seconds:1752219749 nanos:162923193}" Jul 11 07:42:29.174455 kubelet[2804]: I0711 07:42:29.173580 2804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf924fec-bc5d-4fed-9b11-0ab45a34d4a6" path="/var/lib/kubelet/pods/cf924fec-bc5d-4fed-9b11-0ab45a34d4a6/volumes" Jul 11 07:42:29.177044 containerd[1563]: time="2025-07-11T07:42:29.172950511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-hczk7,Uid:4d2ffc01-365d-42db-8763-7ec53842a98f,Namespace:calico-system,Attempt:0,}" Jul 11 07:42:29.188235 systemd[1]: Started cri-containerd-5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc.scope - libcontainer container 5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc. Jul 11 07:42:29.305411 systemd-networkd[1457]: cali7661592b523: Gained IPv6LL Jul 11 07:42:29.338692 systemd-networkd[1457]: cali07bd41447db: Link UP Jul 11 07:42:29.341138 systemd-networkd[1457]: cali07bd41447db: Gained carrier Jul 11 07:42:29.405517 containerd[1563]: 2025-07-11 07:42:28.815 [INFO][4158] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 07:42:29.405517 containerd[1563]: 2025-07-11 07:42:28.844 [INFO][4158] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-whisker--b74959b8d--5c8n7-eth0 whisker-b74959b8d- calico-system 773f7a5a-1981-4bf1-999d-7e5476b2651d 917 0 2025-07-11 07:42:27 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:b74959b8d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4392-0-0-n-cdb6f4f5a9.novalocal whisker-b74959b8d-5c8n7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali07bd41447db [] [] }} ContainerID="0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058" Namespace="calico-system" Pod="whisker-b74959b8d-5c8n7" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-whisker--b74959b8d--5c8n7-" Jul 11 07:42:29.405517 containerd[1563]: 2025-07-11 07:42:28.844 [INFO][4158] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058" Namespace="calico-system" Pod="whisker-b74959b8d-5c8n7" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-whisker--b74959b8d--5c8n7-eth0" Jul 11 07:42:29.405517 containerd[1563]: 2025-07-11 07:42:29.049 [INFO][4215] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058" HandleID="k8s-pod-network.0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058" Workload="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-whisker--b74959b8d--5c8n7-eth0" Jul 11 07:42:29.405517 containerd[1563]: 2025-07-11 07:42:29.049 
[INFO][4215] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058" HandleID="k8s-pod-network.0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058" Workload="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-whisker--b74959b8d--5c8n7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000380990), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4392-0-0-n-cdb6f4f5a9.novalocal", "pod":"whisker-b74959b8d-5c8n7", "timestamp":"2025-07-11 07:42:29.049501874 +0000 UTC"}, Hostname:"ci-4392-0-0-n-cdb6f4f5a9.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 07:42:29.405517 containerd[1563]: 2025-07-11 07:42:29.049 [INFO][4215] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 07:42:29.405517 containerd[1563]: 2025-07-11 07:42:29.049 [INFO][4215] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 07:42:29.405517 containerd[1563]: 2025-07-11 07:42:29.049 [INFO][4215] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4392-0-0-n-cdb6f4f5a9.novalocal' Jul 11 07:42:29.405517 containerd[1563]: 2025-07-11 07:42:29.074 [INFO][4215] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:29.405517 containerd[1563]: 2025-07-11 07:42:29.112 [INFO][4215] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:29.405517 containerd[1563]: 2025-07-11 07:42:29.136 [INFO][4215] ipam/ipam.go 511: Trying affinity for 192.168.84.64/26 host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:29.405517 containerd[1563]: 2025-07-11 07:42:29.141 [INFO][4215] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.64/26 host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:29.405517 containerd[1563]: 2025-07-11 07:42:29.154 [INFO][4215] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.64/26 host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:29.405517 containerd[1563]: 2025-07-11 07:42:29.154 [INFO][4215] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.84.64/26 handle="k8s-pod-network.0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:29.405517 containerd[1563]: 2025-07-11 07:42:29.158 [INFO][4215] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058 Jul 11 07:42:29.405517 containerd[1563]: 2025-07-11 07:42:29.231 [INFO][4215] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.84.64/26 handle="k8s-pod-network.0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:29.405517 containerd[1563]: 2025-07-11 07:42:29.319 [INFO][4215] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.84.69/26] block=192.168.84.64/26 handle="k8s-pod-network.0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:29.405517 containerd[1563]: 2025-07-11 07:42:29.319 [INFO][4215] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.69/26] 
handle="k8s-pod-network.0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:29.405517 containerd[1563]: 2025-07-11 07:42:29.319 [INFO][4215] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 07:42:29.405517 containerd[1563]: 2025-07-11 07:42:29.319 [INFO][4215] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.84.69/26] IPv6=[] ContainerID="0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058" HandleID="k8s-pod-network.0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058" Workload="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-whisker--b74959b8d--5c8n7-eth0" Jul 11 07:42:29.415836 containerd[1563]: 2025-07-11 07:42:29.325 [INFO][4158] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058" Namespace="calico-system" Pod="whisker-b74959b8d-5c8n7" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-whisker--b74959b8d--5c8n7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-whisker--b74959b8d--5c8n7-eth0", GenerateName:"whisker-b74959b8d-", Namespace:"calico-system", SelfLink:"", UID:"773f7a5a-1981-4bf1-999d-7e5476b2651d", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b74959b8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-cdb6f4f5a9.novalocal", ContainerID:"", Pod:"whisker-b74959b8d-5c8n7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.84.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali07bd41447db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:42:29.415836 containerd[1563]: 2025-07-11 07:42:29.325 [INFO][4158] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.69/32] ContainerID="0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058" Namespace="calico-system" Pod="whisker-b74959b8d-5c8n7" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-whisker--b74959b8d--5c8n7-eth0" Jul 11 07:42:29.415836 containerd[1563]: 2025-07-11 07:42:29.325 [INFO][4158] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07bd41447db ContainerID="0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058" Namespace="calico-system" Pod="whisker-b74959b8d-5c8n7" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-whisker--b74959b8d--5c8n7-eth0" Jul 11 07:42:29.415836 containerd[1563]: 2025-07-11 07:42:29.346 [INFO][4158] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058" Namespace="calico-system" Pod="whisker-b74959b8d-5c8n7" 
WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-whisker--b74959b8d--5c8n7-eth0" Jul 11 07:42:29.415836 containerd[1563]: 2025-07-11 07:42:29.348 [INFO][4158] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058" Namespace="calico-system" Pod="whisker-b74959b8d-5c8n7" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-whisker--b74959b8d--5c8n7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-whisker--b74959b8d--5c8n7-eth0", GenerateName:"whisker-b74959b8d-", Namespace:"calico-system", SelfLink:"", UID:"773f7a5a-1981-4bf1-999d-7e5476b2651d", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b74959b8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-cdb6f4f5a9.novalocal", ContainerID:"0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058", Pod:"whisker-b74959b8d-5c8n7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.84.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali07bd41447db", MAC:"d6:ac:06:c6:f3:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:42:29.415836 containerd[1563]: 2025-07-11 07:42:29.394 [INFO][4158] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058" Namespace="calico-system" Pod="whisker-b74959b8d-5c8n7" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-whisker--b74959b8d--5c8n7-eth0" Jul 11 07:42:29.761318 kubelet[2804]: I0711 07:42:29.759947 2804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-7qjzp" podStartSLOduration=56.759906302 podStartE2EDuration="56.759906302s" podCreationTimestamp="2025-07-11 07:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 07:42:29.759722615 +0000 UTC m=+60.992651701" watchObservedRunningTime="2025-07-11 07:42:29.759906302 +0000 UTC m=+60.992835378" Jul 11 07:42:29.945234 systemd-networkd[1457]: cali29e05b8ad3b: Gained IPv6LL Jul 11 07:42:30.009166 systemd-networkd[1457]: cali83bdd96ef02: Gained IPv6LL Jul 11 07:42:30.713267 systemd-networkd[1457]: cali359e7353f09: Gained IPv6LL Jul 11 07:42:30.777206 systemd-networkd[1457]: cali07bd41447db: Gained IPv6LL Jul 11 07:42:37.405145 containerd[1563]: time="2025-07-11T07:42:37.404600200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kvgnr,Uid:53c06b1f-c154-48d5-b67f-3acf36516035,Namespace:kube-system,Attempt:0,}" Jul 11 07:42:37.665727 containerd[1563]: 
time="2025-07-11T07:42:37.665381887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8644849955-pzffc,Uid:40383d9a-5fd3-45e2-be69-48ac62030be0,Namespace:calico-system,Attempt:0,} returns sandbox id \"90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f\"" Jul 11 07:42:38.152525 containerd[1563]: time="2025-07-11T07:42:38.152394976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vlrrv,Uid:3c88e405-5760-45b1-ac61-26a4ddd63df5,Namespace:calico-system,Attempt:0,}" Jul 11 07:42:38.787696 systemd-networkd[1457]: vxlan.calico: Link UP Jul 11 07:42:38.787725 systemd-networkd[1457]: vxlan.calico: Gained carrier Jul 11 07:42:39.299814 containerd[1563]: time="2025-07-11T07:42:39.299746299Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160\" id:\"65c496c949c2bda502ab88cc5dd456e9de88b3ca1563ec5e64b7f89ad1db85b1\" pid:4549 exit_status:1 exited_at:{seconds:1752219759 nanos:298997449}" Jul 11 07:42:39.644526 containerd[1563]: time="2025-07-11T07:42:39.644415181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-667bcfd89f-2krz4,Uid:c1f33f55-a860-486e-bf1a-b91510a46c1d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc\"" Jul 11 07:42:40.057358 systemd-networkd[1457]: vxlan.calico: Gained IPv6LL Jul 11 07:42:40.257731 systemd-networkd[1457]: cali6a0f49af98c: Link UP Jul 11 07:42:40.259945 systemd-networkd[1457]: cali6a0f49af98c: Gained carrier Jul 11 07:42:40.390431 containerd[1563]: 2025-07-11 07:42:40.075 [INFO][4581] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-goldmane--58fd7646b9--hczk7-eth0 goldmane-58fd7646b9- calico-system 4d2ffc01-365d-42db-8763-7ec53842a98f 834 0 2025-07-11 07:41:51 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4392-0-0-n-cdb6f4f5a9.novalocal goldmane-58fd7646b9-hczk7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6a0f49af98c [] [] }} ContainerID="11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09" Namespace="calico-system" Pod="goldmane-58fd7646b9-hczk7" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-goldmane--58fd7646b9--hczk7-" Jul 11 07:42:40.390431 containerd[1563]: 2025-07-11 07:42:40.075 [INFO][4581] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09" Namespace="calico-system" Pod="goldmane-58fd7646b9-hczk7" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-goldmane--58fd7646b9--hczk7-eth0" Jul 11 07:42:40.390431 containerd[1563]: 2025-07-11 07:42:40.135 [INFO][4598] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09" HandleID="k8s-pod-network.11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09" Workload="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-goldmane--58fd7646b9--hczk7-eth0" Jul 11 07:42:40.390431 containerd[1563]: 2025-07-11 07:42:40.135 [INFO][4598] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09" HandleID="k8s-pod-network.11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09" Workload="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-goldmane--58fd7646b9--hczk7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f790), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4392-0-0-n-cdb6f4f5a9.novalocal", "pod":"goldmane-58fd7646b9-hczk7", "timestamp":"2025-07-11 07:42:40.134999555 +0000 UTC"}, Hostname:"ci-4392-0-0-n-cdb6f4f5a9.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 07:42:40.390431 containerd[1563]: 2025-07-11 07:42:40.135 [INFO][4598] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 07:42:40.390431 containerd[1563]: 2025-07-11 07:42:40.135 [INFO][4598] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 07:42:40.390431 containerd[1563]: 2025-07-11 07:42:40.135 [INFO][4598] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4392-0-0-n-cdb6f4f5a9.novalocal' Jul 11 07:42:40.390431 containerd[1563]: 2025-07-11 07:42:40.146 [INFO][4598] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:40.390431 containerd[1563]: 2025-07-11 07:42:40.157 [INFO][4598] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:40.390431 containerd[1563]: 2025-07-11 07:42:40.164 [INFO][4598] ipam/ipam.go 511: Trying affinity for 192.168.84.64/26 host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:40.390431 containerd[1563]: 2025-07-11 07:42:40.167 [INFO][4598] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.64/26 host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:40.390431 containerd[1563]: 2025-07-11 07:42:40.172 [INFO][4598] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.64/26 host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:40.390431 containerd[1563]: 2025-07-11 07:42:40.172 [INFO][4598] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.84.64/26 handle="k8s-pod-network.11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:40.390431 containerd[1563]: 2025-07-11 07:42:40.176 [INFO][4598] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09 Jul 11 07:42:40.390431 containerd[1563]: 2025-07-11 07:42:40.204 [INFO][4598] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.84.64/26 handle="k8s-pod-network.11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:40.390431 containerd[1563]: 2025-07-11 07:42:40.246 [INFO][4598] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.84.70/26] block=192.168.84.64/26 handle="k8s-pod-network.11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:40.390431 containerd[1563]: 2025-07-11 07:42:40.247 [INFO][4598] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.70/26] 
handle="k8s-pod-network.11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:40.390431 containerd[1563]: 2025-07-11 07:42:40.247 [INFO][4598] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 07:42:40.390431 containerd[1563]: 2025-07-11 07:42:40.247 [INFO][4598] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.84.70/26] IPv6=[] ContainerID="11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09" HandleID="k8s-pod-network.11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09" Workload="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-goldmane--58fd7646b9--hczk7-eth0" Jul 11 07:42:40.401968 containerd[1563]: 2025-07-11 07:42:40.250 [INFO][4581] cni-plugin/k8s.go 418: Populated endpoint ContainerID="11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09" Namespace="calico-system" Pod="goldmane-58fd7646b9-hczk7" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-goldmane--58fd7646b9--hczk7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-goldmane--58fd7646b9--hczk7-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"4d2ffc01-365d-42db-8763-7ec53842a98f", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 41, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-cdb6f4f5a9.novalocal", ContainerID:"", Pod:"goldmane-58fd7646b9-hczk7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.84.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6a0f49af98c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:42:40.401968 containerd[1563]: 2025-07-11 07:42:40.250 [INFO][4581] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.70/32] ContainerID="11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09" Namespace="calico-system" Pod="goldmane-58fd7646b9-hczk7" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-goldmane--58fd7646b9--hczk7-eth0" Jul 11 07:42:40.401968 containerd[1563]: 2025-07-11 07:42:40.250 [INFO][4581] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a0f49af98c ContainerID="11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09" Namespace="calico-system" Pod="goldmane-58fd7646b9-hczk7" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-goldmane--58fd7646b9--hczk7-eth0" Jul 11 07:42:40.401968 containerd[1563]: 2025-07-11 07:42:40.262 [INFO][4581] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09" Namespace="calico-system" Pod="goldmane-58fd7646b9-hczk7" 
WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-goldmane--58fd7646b9--hczk7-eth0" Jul 11 07:42:40.401968 containerd[1563]: 2025-07-11 07:42:40.267 [INFO][4581] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09" Namespace="calico-system" Pod="goldmane-58fd7646b9-hczk7" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-goldmane--58fd7646b9--hczk7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-goldmane--58fd7646b9--hczk7-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"4d2ffc01-365d-42db-8763-7ec53842a98f", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 41, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-cdb6f4f5a9.novalocal", ContainerID:"11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09", Pod:"goldmane-58fd7646b9-hczk7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.84.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6a0f49af98c", MAC:"92:ee:13:0b:2b:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:42:40.401968 containerd[1563]: 2025-07-11 07:42:40.383 [INFO][4581] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09" Namespace="calico-system" Pod="goldmane-58fd7646b9-hczk7" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-goldmane--58fd7646b9--hczk7-eth0" Jul 11 07:42:40.560851 containerd[1563]: time="2025-07-11T07:42:40.560411841Z" level=info msg="connecting to shim 0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058" address="unix:///run/containerd/s/baf61fd270750b47150684526242e57b11a93ac48c4ce748255853074fd9b55c" namespace=k8s.io protocol=ttrpc version=3 Jul 11 07:42:40.632160 systemd[1]: Started cri-containerd-0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058.scope - libcontainer container 0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058. Jul 11 07:42:40.710845 containerd[1563]: time="2025-07-11T07:42:40.710779696Z" level=info msg="connecting to shim 11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09" address="unix:///run/containerd/s/0dc25196f6dd2385373e3c70d342cc649be638249bbe809c0574bd8325cdf101" namespace=k8s.io protocol=ttrpc version=3 Jul 11 07:42:40.779446 systemd[1]: Started cri-containerd-11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09.scope - libcontainer container 11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09. 
Jul 11 07:42:40.786078 systemd-networkd[1457]: calif6d1ea7ed5e: Link UP Jul 11 07:42:40.789269 systemd-networkd[1457]: calif6d1ea7ed5e: Gained carrier Jul 11 07:42:40.790954 containerd[1563]: time="2025-07-11T07:42:40.790277939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b74959b8d-5c8n7,Uid:773f7a5a-1981-4bf1-999d-7e5476b2651d,Namespace:calico-system,Attempt:0,} returns sandbox id \"0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058\"" Jul 11 07:42:40.885170 containerd[1563]: time="2025-07-11T07:42:40.885015211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-hczk7,Uid:4d2ffc01-365d-42db-8763-7ec53842a98f,Namespace:calico-system,Attempt:0,} returns sandbox id \"11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09\"" Jul 11 07:42:40.914203 containerd[1563]: 2025-07-11 07:42:40.518 [INFO][4615] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-csi--node--driver--vlrrv-eth0 csi-node-driver- calico-system 3c88e405-5760-45b1-ac61-26a4ddd63df5 689 0 2025-07-11 07:41:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4392-0-0-n-cdb6f4f5a9.novalocal csi-node-driver-vlrrv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif6d1ea7ed5e [] [] }} ContainerID="150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5" Namespace="calico-system" Pod="csi-node-driver-vlrrv" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-csi--node--driver--vlrrv-" Jul 11 07:42:40.914203 containerd[1563]: 2025-07-11 07:42:40.519 [INFO][4615] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5" Namespace="calico-system" Pod="csi-node-driver-vlrrv" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-csi--node--driver--vlrrv-eth0" Jul 11 07:42:40.914203 containerd[1563]: 2025-07-11 07:42:40.606 [INFO][4639] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5" HandleID="k8s-pod-network.150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5" Workload="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-csi--node--driver--vlrrv-eth0" Jul 11 07:42:40.914203 containerd[1563]: 2025-07-11 07:42:40.607 [INFO][4639] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5" HandleID="k8s-pod-network.150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5" Workload="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-csi--node--driver--vlrrv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003de530), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4392-0-0-n-cdb6f4f5a9.novalocal", "pod":"csi-node-driver-vlrrv", "timestamp":"2025-07-11 07:42:40.606425842 +0000 UTC"}, Hostname:"ci-4392-0-0-n-cdb6f4f5a9.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 07:42:40.914203 containerd[1563]: 
2025-07-11 07:42:40.607 [INFO][4639] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 07:42:40.914203 containerd[1563]: 2025-07-11 07:42:40.607 [INFO][4639] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 07:42:40.914203 containerd[1563]: 2025-07-11 07:42:40.607 [INFO][4639] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4392-0-0-n-cdb6f4f5a9.novalocal' Jul 11 07:42:40.914203 containerd[1563]: 2025-07-11 07:42:40.638 [INFO][4639] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:40.914203 containerd[1563]: 2025-07-11 07:42:40.654 [INFO][4639] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:40.914203 containerd[1563]: 2025-07-11 07:42:40.674 [INFO][4639] ipam/ipam.go 511: Trying affinity for 192.168.84.64/26 host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:40.914203 containerd[1563]: 2025-07-11 07:42:40.679 [INFO][4639] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.64/26 host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:40.914203 containerd[1563]: 2025-07-11 07:42:40.688 [INFO][4639] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.64/26 host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:40.914203 containerd[1563]: 2025-07-11 07:42:40.688 [INFO][4639] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.84.64/26 handle="k8s-pod-network.150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:40.914203 containerd[1563]: 2025-07-11 07:42:40.691 [INFO][4639] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5 Jul 11 07:42:40.914203 containerd[1563]: 2025-07-11 07:42:40.703 [INFO][4639] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.84.64/26 handle="k8s-pod-network.150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:40.914203 containerd[1563]: 2025-07-11 07:42:40.757 [INFO][4639] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.84.71/26] block=192.168.84.64/26 handle="k8s-pod-network.150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:40.914203 containerd[1563]: 2025-07-11 07:42:40.758 [INFO][4639] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.71/26] handle="k8s-pod-network.150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:40.914203 containerd[1563]: 2025-07-11 07:42:40.758 [INFO][4639] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 07:42:40.914203 containerd[1563]: 2025-07-11 07:42:40.758 [INFO][4639] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.84.71/26] IPv6=[] ContainerID="150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5" HandleID="k8s-pod-network.150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5" Workload="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-csi--node--driver--vlrrv-eth0" Jul 11 07:42:40.916926 containerd[1563]: 2025-07-11 07:42:40.770 [INFO][4615] cni-plugin/k8s.go 418: Populated endpoint ContainerID="150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5" Namespace="calico-system" Pod="csi-node-driver-vlrrv" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-csi--node--driver--vlrrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-csi--node--driver--vlrrv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3c88e405-5760-45b1-ac61-26a4ddd63df5", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 41, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-cdb6f4f5a9.novalocal", ContainerID:"", Pod:"csi-node-driver-vlrrv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.84.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif6d1ea7ed5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:42:40.916926 containerd[1563]: 2025-07-11 07:42:40.771 [INFO][4615] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.71/32] ContainerID="150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5" Namespace="calico-system" Pod="csi-node-driver-vlrrv" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-csi--node--driver--vlrrv-eth0" Jul 11 07:42:40.916926 containerd[1563]: 2025-07-11 07:42:40.771 [INFO][4615] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6d1ea7ed5e ContainerID="150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5" Namespace="calico-system" Pod="csi-node-driver-vlrrv" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-csi--node--driver--vlrrv-eth0" Jul 11 07:42:40.916926 containerd[1563]: 2025-07-11 07:42:40.791 [INFO][4615] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5" Namespace="calico-system" Pod="csi-node-driver-vlrrv" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-csi--node--driver--vlrrv-eth0" Jul 11 07:42:40.916926 containerd[1563]: 2025-07-11 07:42:40.795 [INFO][4615] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5" Namespace="calico-system" Pod="csi-node-driver-vlrrv" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-csi--node--driver--vlrrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-csi--node--driver--vlrrv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3c88e405-5760-45b1-ac61-26a4ddd63df5", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 41, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-cdb6f4f5a9.novalocal", ContainerID:"150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5", Pod:"csi-node-driver-vlrrv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.84.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif6d1ea7ed5e", MAC:"ca:87:e9:8c:9f:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:42:40.916926 containerd[1563]: 2025-07-11 07:42:40.911 [INFO][4615] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5" Namespace="calico-system" Pod="csi-node-driver-vlrrv" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-csi--node--driver--vlrrv-eth0" Jul 11 07:42:41.009712 systemd-networkd[1457]: cali46c8baff0fc: Link UP Jul 11 07:42:41.012119 systemd-networkd[1457]: cali46c8baff0fc: Gained carrier Jul 11 07:42:41.096679 containerd[1563]: 2025-07-11 07:42:40.565 [INFO][4625] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--kvgnr-eth0 coredns-7c65d6cfc9- kube-system 53c06b1f-c154-48d5-b67f-3acf36516035 831 0 2025-07-11 07:41:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4392-0-0-n-cdb6f4f5a9.novalocal coredns-7c65d6cfc9-kvgnr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali46c8baff0fc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kvgnr" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--kvgnr-" Jul 11 07:42:41.096679 containerd[1563]: 2025-07-11 07:42:40.566 [INFO][4625] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kvgnr" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--kvgnr-eth0" Jul 11 07:42:41.096679 containerd[1563]: 2025-07-11 07:42:40.705 [INFO][4665] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e" HandleID="k8s-pod-network.214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e" Workload="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--kvgnr-eth0" Jul 11 07:42:41.096679 containerd[1563]: 2025-07-11 07:42:40.707 [INFO][4665] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e" HandleID="k8s-pod-network.214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e" Workload="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--kvgnr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003079c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4392-0-0-n-cdb6f4f5a9.novalocal", "pod":"coredns-7c65d6cfc9-kvgnr", "timestamp":"2025-07-11 07:42:40.70584929 +0000 UTC"}, Hostname:"ci-4392-0-0-n-cdb6f4f5a9.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 07:42:41.096679 containerd[1563]: 2025-07-11 07:42:40.707 [INFO][4665] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 07:42:41.096679 containerd[1563]: 2025-07-11 07:42:40.759 [INFO][4665] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 07:42:41.096679 containerd[1563]: 2025-07-11 07:42:40.759 [INFO][4665] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4392-0-0-n-cdb6f4f5a9.novalocal' Jul 11 07:42:41.096679 containerd[1563]: 2025-07-11 07:42:40.778 [INFO][4665] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:41.096679 containerd[1563]: 2025-07-11 07:42:40.800 [INFO][4665] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:41.096679 containerd[1563]: 2025-07-11 07:42:40.849 [INFO][4665] ipam/ipam.go 511: Trying affinity for 192.168.84.64/26 host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:41.096679 containerd[1563]: 2025-07-11 07:42:40.908 [INFO][4665] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.64/26 host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:41.096679 containerd[1563]: 2025-07-11 07:42:40.925 [INFO][4665] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.64/26 host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:41.096679 containerd[1563]: 2025-07-11 07:42:40.925 [INFO][4665] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.84.64/26 handle="k8s-pod-network.214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:41.096679 containerd[1563]: 2025-07-11 07:42:40.930 [INFO][4665] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e Jul 11 07:42:41.096679 containerd[1563]: 2025-07-11 07:42:40.949 [INFO][4665] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.84.64/26 handle="k8s-pod-network.214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:41.096679 containerd[1563]: 2025-07-11 07:42:40.998 [INFO][4665] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.84.72/26] block=192.168.84.64/26 handle="k8s-pod-network.214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:41.096679 containerd[1563]: 2025-07-11 07:42:40.998 [INFO][4665] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.72/26] handle="k8s-pod-network.214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e" host="ci-4392-0-0-n-cdb6f4f5a9.novalocal" Jul 11 07:42:41.096679 containerd[1563]: 2025-07-11 07:42:40.998 [INFO][4665] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 07:42:41.096679 containerd[1563]: 2025-07-11 07:42:40.998 [INFO][4665] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.84.72/26] IPv6=[] ContainerID="214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e" HandleID="k8s-pod-network.214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e" Workload="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--kvgnr-eth0" Jul 11 07:42:41.098462 containerd[1563]: 2025-07-11 07:42:41.001 [INFO][4625] cni-plugin/k8s.go 418: Populated endpoint ContainerID="214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kvgnr" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--kvgnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--kvgnr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"53c06b1f-c154-48d5-b67f-3acf36516035", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 41, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-cdb6f4f5a9.novalocal", ContainerID:"", Pod:"coredns-7c65d6cfc9-kvgnr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46c8baff0fc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:42:41.098462 containerd[1563]: 2025-07-11 07:42:41.001 [INFO][4625] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.72/32] ContainerID="214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kvgnr" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--kvgnr-eth0" Jul 11 07:42:41.098462 containerd[1563]: 2025-07-11 07:42:41.001 [INFO][4625] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46c8baff0fc ContainerID="214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kvgnr" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--kvgnr-eth0" Jul 11 07:42:41.098462 containerd[1563]: 2025-07-11 07:42:41.012 [INFO][4625] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kvgnr" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--kvgnr-eth0" Jul 11 07:42:41.098462 containerd[1563]: 2025-07-11 07:42:41.013 [INFO][4625] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kvgnr" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--kvgnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--kvgnr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"53c06b1f-c154-48d5-b67f-3acf36516035", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 7, 41, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4392-0-0-n-cdb6f4f5a9.novalocal", ContainerID:"214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e", Pod:"coredns-7c65d6cfc9-kvgnr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46c8baff0fc", MAC:"9a:4b:e6:e8:bf:af", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 07:42:41.098462 containerd[1563]: 2025-07-11 07:42:41.089 [INFO][4625] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kvgnr" WorkloadEndpoint="ci--4392--0--0--n--cdb6f4f5a9.novalocal-k8s-coredns--7c65d6cfc9--kvgnr-eth0" Jul 11 07:42:41.483701 containerd[1563]: time="2025-07-11T07:42:41.483410238Z" level=info msg="connecting to shim 150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5" address="unix:///run/containerd/s/1c7e8f39a26c2aee8519ff6185f6c165f435a0ff20ad8deec31c688f4f98ed45" namespace=k8s.io protocol=ttrpc version=3 Jul 11 07:42:41.550302 systemd[1]: Started cri-containerd-150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5.scope - libcontainer container 150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5. 
Jul 11 07:42:41.570926 containerd[1563]: time="2025-07-11T07:42:41.570253900Z" level=info msg="connecting to shim 214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e" address="unix:///run/containerd/s/9bd41df4e5e69b1bbfdc8eb0eba4570077a5cc90145f68e55450ab43670afa82" namespace=k8s.io protocol=ttrpc version=3 Jul 11 07:42:41.616366 systemd[1]: Started cri-containerd-214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e.scope - libcontainer container 214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e. Jul 11 07:42:41.624446 containerd[1563]: time="2025-07-11T07:42:41.624259471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vlrrv,Uid:3c88e405-5760-45b1-ac61-26a4ddd63df5,Namespace:calico-system,Attempt:0,} returns sandbox id \"150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5\"" Jul 11 07:42:41.694915 containerd[1563]: time="2025-07-11T07:42:41.694786004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kvgnr,Uid:53c06b1f-c154-48d5-b67f-3acf36516035,Namespace:kube-system,Attempt:0,} returns sandbox id \"214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e\"" Jul 11 07:42:41.700871 containerd[1563]: time="2025-07-11T07:42:41.700826851Z" level=info msg="CreateContainer within sandbox \"214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 07:42:41.729007 containerd[1563]: time="2025-07-11T07:42:41.728880460Z" level=info msg="Container b612a2acaa3fc3a8b6c4cdf9d43e85b54a94b2c68d15813aabc961c1789f52fa: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:42:41.738899 containerd[1563]: time="2025-07-11T07:42:41.738662621Z" level=info msg="CreateContainer within sandbox \"214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b612a2acaa3fc3a8b6c4cdf9d43e85b54a94b2c68d15813aabc961c1789f52fa\"" Jul 11 07:42:41.742882 containerd[1563]: time="2025-07-11T07:42:41.742074496Z" level=info msg="StartContainer for \"b612a2acaa3fc3a8b6c4cdf9d43e85b54a94b2c68d15813aabc961c1789f52fa\"" Jul 11 07:42:41.743271 containerd[1563]: time="2025-07-11T07:42:41.743245221Z" level=info msg="connecting to shim b612a2acaa3fc3a8b6c4cdf9d43e85b54a94b2c68d15813aabc961c1789f52fa" address="unix:///run/containerd/s/9bd41df4e5e69b1bbfdc8eb0eba4570077a5cc90145f68e55450ab43670afa82" protocol=ttrpc version=3 Jul 11 07:42:41.765181 systemd[1]: Started cri-containerd-b612a2acaa3fc3a8b6c4cdf9d43e85b54a94b2c68d15813aabc961c1789f52fa.scope - libcontainer container b612a2acaa3fc3a8b6c4cdf9d43e85b54a94b2c68d15813aabc961c1789f52fa. 
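Each "connecting to shim" line above names a unix:// address under /run/containerd/s/ that containerd dials to reach the per-sandbox shim; the transport on top is ttrpc. A small sketch of just the address handling, using a hypothetical socket path:

```go
// shim_dial.go — strip the unix:// scheme and dial, as containerd does
// before speaking ttrpc to the shim. The socket path here is hypothetical.
package main

import (
	"fmt"
	"net"
	"strings"
)

func dialShim(address string) (net.Conn, error) {
	return net.Dial("unix", strings.TrimPrefix(address, "unix://"))
}

func main() {
	conn, err := dialShim("unix:///run/containerd/s/example-shim-socket")
	if err != nil {
		// Expected anywhere but on the containerd host itself.
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}
```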
Jul 11 07:42:41.807987 containerd[1563]: time="2025-07-11T07:42:41.807942286Z" level=info msg="StartContainer for \"b612a2acaa3fc3a8b6c4cdf9d43e85b54a94b2c68d15813aabc961c1789f52fa\" returns successfully" Jul 11 07:42:41.978731 systemd-networkd[1457]: calif6d1ea7ed5e: Gained IPv6LL Jul 11 07:42:42.233201 systemd-networkd[1457]: cali6a0f49af98c: Gained IPv6LL Jul 11 07:42:42.575142 kubelet[2804]: I0711 07:42:42.573011 2804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kvgnr" podStartSLOduration=69.572907947 podStartE2EDuration="1m9.572907947s" podCreationTimestamp="2025-07-11 07:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 07:42:42.568927612 +0000 UTC m=+73.801856718" watchObservedRunningTime="2025-07-11 07:42:42.572907947 +0000 UTC m=+73.805837033" Jul 11 07:42:42.681336 systemd-networkd[1457]: cali46c8baff0fc: Gained IPv6LL Jul 11 07:43:04.913372 systemd[1]: cri-containerd-37d035f14b17150c4fe4356e65de8b7d83bea8dda00e3df8e3b4b6fe4d0d7420.scope: Deactivated successfully. Jul 11 07:43:04.917525 systemd[1]: cri-containerd-37d035f14b17150c4fe4356e65de8b7d83bea8dda00e3df8e3b4b6fe4d0d7420.scope: Consumed 4.953s CPU time, 55.7M memory peak. Jul 11 07:43:04.947410 containerd[1563]: time="2025-07-11T07:43:04.938916633Z" level=info msg="received exit event container_id:\"37d035f14b17150c4fe4356e65de8b7d83bea8dda00e3df8e3b4b6fe4d0d7420\" id:\"37d035f14b17150c4fe4356e65de8b7d83bea8dda00e3df8e3b4b6fe4d0d7420\" pid:2638 exit_status:1 exited_at:{seconds:1752219784 nanos:935194244}" Jul 11 07:43:04.939469 systemd[1]: cri-containerd-2f402fa2d9a8931e1bc9923a84e95e8a47a2d94e916768bcfe1bdbb0e9f89584.scope: Deactivated successfully. Jul 11 07:43:04.941604 systemd[1]: cri-containerd-2f402fa2d9a8931e1bc9923a84e95e8a47a2d94e916768bcfe1bdbb0e9f89584.scope: Consumed 8.085s CPU time, 83.5M memory peak. Jul 11 07:43:04.959784 containerd[1563]: time="2025-07-11T07:43:04.959542888Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37d035f14b17150c4fe4356e65de8b7d83bea8dda00e3df8e3b4b6fe4d0d7420\" id:\"37d035f14b17150c4fe4356e65de8b7d83bea8dda00e3df8e3b4b6fe4d0d7420\" pid:2638 exit_status:1 exited_at:{seconds:1752219784 nanos:935194244}" Jul 11 07:43:04.960273 containerd[1563]: time="2025-07-11T07:43:04.960227015Z" level=info msg="received exit event container_id:\"2f402fa2d9a8931e1bc9923a84e95e8a47a2d94e916768bcfe1bdbb0e9f89584\" id:\"2f402fa2d9a8931e1bc9923a84e95e8a47a2d94e916768bcfe1bdbb0e9f89584\" pid:3123 exit_status:1 exited_at:{seconds:1752219784 nanos:943950409}" Jul 11 07:43:04.961706 containerd[1563]: time="2025-07-11T07:43:04.961552777Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2f402fa2d9a8931e1bc9923a84e95e8a47a2d94e916768bcfe1bdbb0e9f89584\" id:\"2f402fa2d9a8931e1bc9923a84e95e8a47a2d94e916768bcfe1bdbb0e9f89584\" pid:3123 exit_status:1 exited_at:{seconds:1752219784 nanos:943950409}" Jul 11 07:43:06.551043 kubelet[2804]: E0711 07:43:06.550052 2804 controller.go:195] "Failed to update lease" err="etcdserver: request timed out" Jul 11 07:43:07.780774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f402fa2d9a8931e1bc9923a84e95e8a47a2d94e916768bcfe1bdbb0e9f89584-rootfs.mount: Deactivated successfully. 
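The podStartSLOduration reported above is simply watchObservedRunningTime minus the pod's creationTimestamp; it reproduces directly from the two values in that kubelet line:

```go
// startup_slo.go — creationTimestamp and watchObservedRunningTime are
// both taken verbatim from the kubelet line above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-07-11 07:41:33 +0000 UTC")
	running, _ := time.Parse(layout, "2025-07-11 07:42:42.572907947 +0000 UTC")
	fmt.Println(running.Sub(created)) // 1m9.572907947s, i.e. the 69.572907947s SLO duration
}
```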
Jul 11 07:43:07.798108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37d035f14b17150c4fe4356e65de8b7d83bea8dda00e3df8e3b4b6fe4d0d7420-rootfs.mount: Deactivated successfully. Jul 11 07:43:07.840845 systemd[1]: cri-containerd-60b6e9b17893f06d8c86eaee7503f9c1409178cffa66c32798cefca75a219d61.scope: Deactivated successfully. Jul 11 07:43:07.842106 systemd[1]: cri-containerd-60b6e9b17893f06d8c86eaee7503f9c1409178cffa66c32798cefca75a219d61.scope: Consumed 2.202s CPU time, 18.8M memory peak. Jul 11 07:43:07.847790 containerd[1563]: time="2025-07-11T07:43:07.847173523Z" level=info msg="received exit event container_id:\"60b6e9b17893f06d8c86eaee7503f9c1409178cffa66c32798cefca75a219d61\" id:\"60b6e9b17893f06d8c86eaee7503f9c1409178cffa66c32798cefca75a219d61\" pid:2644 exit_status:1 exited_at:{seconds:1752219787 nanos:844845075}" Jul 11 07:43:07.849424 containerd[1563]: time="2025-07-11T07:43:07.849280725Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60b6e9b17893f06d8c86eaee7503f9c1409178cffa66c32798cefca75a219d61\" id:\"60b6e9b17893f06d8c86eaee7503f9c1409178cffa66c32798cefca75a219d61\" pid:2644 exit_status:1 exited_at:{seconds:1752219787 nanos:844845075}" Jul 11 07:43:07.854594 kubelet[2804]: E0711 07:43:07.854467 2804 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-4392-0-0-n-cdb6f4f5a9.novalocal\": the object has been modified; please apply your changes to the latest version and try again" Jul 11 07:43:07.912011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60b6e9b17893f06d8c86eaee7503f9c1409178cffa66c32798cefca75a219d61-rootfs.mount: Deactivated successfully. Jul 11 07:43:09.308235 containerd[1563]: time="2025-07-11T07:43:09.308173157Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160\" id:\"feb7cc41aff3e3223a63380a20bebc0934eea2bab980b71031c8b47332e4c840\" pid:4978 exited_at:{seconds:1752219789 nanos:307178307}" Jul 11 07:43:10.454674 containerd[1563]: time="2025-07-11T07:43:10.454567913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:43:10.468786 containerd[1563]: time="2025-07-11T07:43:10.468706843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 11 07:43:10.565903 containerd[1563]: time="2025-07-11T07:43:10.565818872Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:43:10.621954 containerd[1563]: time="2025-07-11T07:43:10.621894687Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:43:10.624859 kubelet[2804]: I0711 07:43:10.624781 2804 scope.go:117] "RemoveContainer" containerID="37d035f14b17150c4fe4356e65de8b7d83bea8dda00e3df8e3b4b6fe4d0d7420" Jul 11 07:43:10.626403 containerd[1563]: time="2025-07-11T07:43:10.626300621Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest 
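The exit events above carry exited_at as protobuf-style {seconds, nanos} pairs. Converting the pair from the last exit event to a readable UTC timestamp:

```go
// exit_time.go — exited_at:{seconds:1752219787 nanos:844845075} from the
// exit event above, converted to a readable UTC timestamp.
package main

import (
	"fmt"
	"time"
)

func main() {
	fmt.Println(time.Unix(1752219787, 844845075).UTC()) // 2025-07-11 07:43:07.844845075 +0000 UTC
}
```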
\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 41.625054202s" Jul 11 07:43:10.626772 containerd[1563]: time="2025-07-11T07:43:10.626649806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 11 07:43:10.632506 kubelet[2804]: I0711 07:43:10.632294 2804 scope.go:117] "RemoveContainer" containerID="2f402fa2d9a8931e1bc9923a84e95e8a47a2d94e916768bcfe1bdbb0e9f89584" Jul 11 07:43:10.634818 containerd[1563]: time="2025-07-11T07:43:10.634761907Z" level=info msg="CreateContainer within sandbox \"79c49b98ecf271a131bb6bf12e4c7a943626dffb67cb25bbedca7b6b5a740ca5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 11 07:43:10.636180 containerd[1563]: time="2025-07-11T07:43:10.636129548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 11 07:43:10.644171 containerd[1563]: time="2025-07-11T07:43:10.644097386Z" level=info msg="CreateContainer within sandbox \"606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 07:43:10.649108 containerd[1563]: time="2025-07-11T07:43:10.648191964Z" level=info msg="CreateContainer within sandbox \"d4e63947bca21e2d084c4995faabd79384a595a67fcdfbf24c08f94329d27fb6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jul 11 07:43:10.655435 kubelet[2804]: I0711 07:43:10.655334 2804 scope.go:117] "RemoveContainer" containerID="60b6e9b17893f06d8c86eaee7503f9c1409178cffa66c32798cefca75a219d61" Jul 11 07:43:10.672509 containerd[1563]: time="2025-07-11T07:43:10.672426882Z" level=info msg="CreateContainer within sandbox \"a4c473d87f8442f19bdae94bef811290169de76c1ad853d95101f86534125f68\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 11 07:43:10.759476 containerd[1563]: time="2025-07-11T07:43:10.759343766Z" level=info msg="Container f8edca220fbd312551d45f1fdfd82a4cb810b1e84c0ed32b920d0392c0035b5f: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:43:10.795952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3591382598.mount: Deactivated successfully. 
Jul 11 07:43:10.802384 containerd[1563]: time="2025-07-11T07:43:10.802327029Z" level=info msg="Container e61666cacb493492b7066cdf963c503e4ed9bf26262b70b4c151bd0700d6502c: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:43:10.826302 containerd[1563]: time="2025-07-11T07:43:10.826173808Z" level=info msg="Container 5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:43:10.967823 containerd[1563]: time="2025-07-11T07:43:10.966435343Z" level=info msg="Container 8e5e03f88c91ab39a9c8de372d6c2a154b1808764be77b0a3ca5992497f30cc2: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:43:10.983167 containerd[1563]: time="2025-07-11T07:43:10.983100291Z" level=info msg="CreateContainer within sandbox \"606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f8edca220fbd312551d45f1fdfd82a4cb810b1e84c0ed32b920d0392c0035b5f\"" Jul 11 07:43:10.984694 containerd[1563]: time="2025-07-11T07:43:10.984622764Z" level=info msg="StartContainer for \"f8edca220fbd312551d45f1fdfd82a4cb810b1e84c0ed32b920d0392c0035b5f\"" Jul 11 07:43:10.989637 containerd[1563]: time="2025-07-11T07:43:10.989564363Z" level=info msg="connecting to shim f8edca220fbd312551d45f1fdfd82a4cb810b1e84c0ed32b920d0392c0035b5f" address="unix:///run/containerd/s/226195f79f9367007d54237241d1ac09520d9874fd77a6e8c2a910968757a3b0" protocol=ttrpc version=3 Jul 11 07:43:11.024034 containerd[1563]: time="2025-07-11T07:43:11.023790372Z" level=info msg="CreateContainer within sandbox \"79c49b98ecf271a131bb6bf12e4c7a943626dffb67cb25bbedca7b6b5a740ca5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e61666cacb493492b7066cdf963c503e4ed9bf26262b70b4c151bd0700d6502c\"" Jul 11 07:43:11.026381 containerd[1563]: time="2025-07-11T07:43:11.026238064Z" level=info msg="StartContainer for \"e61666cacb493492b7066cdf963c503e4ed9bf26262b70b4c151bd0700d6502c\"" Jul 11 07:43:11.045177 containerd[1563]: time="2025-07-11T07:43:11.045093889Z" level=info msg="connecting to shim e61666cacb493492b7066cdf963c503e4ed9bf26262b70b4c151bd0700d6502c" address="unix:///run/containerd/s/ae96405862b11ec30ecf5d51a3df2c8e24b4f00f4b2ee133e08083c92e7d68c0" protocol=ttrpc version=3 Jul 11 07:43:11.060272 containerd[1563]: time="2025-07-11T07:43:11.060139021Z" level=info msg="CreateContainer within sandbox \"d4e63947bca21e2d084c4995faabd79384a595a67fcdfbf24c08f94329d27fb6\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05\"" Jul 11 07:43:11.066818 containerd[1563]: time="2025-07-11T07:43:11.063255540Z" level=info msg="StartContainer for \"5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05\"" Jul 11 07:43:11.061465 systemd[1]: Started cri-containerd-f8edca220fbd312551d45f1fdfd82a4cb810b1e84c0ed32b920d0392c0035b5f.scope - libcontainer container f8edca220fbd312551d45f1fdfd82a4cb810b1e84c0ed32b920d0392c0035b5f. 
Jul 11 07:43:11.067571 containerd[1563]: time="2025-07-11T07:43:11.067527289Z" level=info msg="connecting to shim 5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05" address="unix:///run/containerd/s/1d2a678b6bec198581cc6411f0a23f0c64cd0b683f63b8789592857e68a53eb2" protocol=ttrpc version=3 Jul 11 07:43:11.122886 containerd[1563]: time="2025-07-11T07:43:11.122822875Z" level=info msg="CreateContainer within sandbox \"a4c473d87f8442f19bdae94bef811290169de76c1ad853d95101f86534125f68\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8e5e03f88c91ab39a9c8de372d6c2a154b1808764be77b0a3ca5992497f30cc2\"" Jul 11 07:43:11.135688 containerd[1563]: time="2025-07-11T07:43:11.126666481Z" level=info msg="StartContainer for \"8e5e03f88c91ab39a9c8de372d6c2a154b1808764be77b0a3ca5992497f30cc2\"" Jul 11 07:43:11.141889 containerd[1563]: time="2025-07-11T07:43:11.141790110Z" level=info msg="connecting to shim 8e5e03f88c91ab39a9c8de372d6c2a154b1808764be77b0a3ca5992497f30cc2" address="unix:///run/containerd/s/7fe16dd4310d91485b4c30a99a68643dca480a6dc08544d1724182e5168bb324" protocol=ttrpc version=3 Jul 11 07:43:11.149525 systemd[1]: Started cri-containerd-e61666cacb493492b7066cdf963c503e4ed9bf26262b70b4c151bd0700d6502c.scope - libcontainer container e61666cacb493492b7066cdf963c503e4ed9bf26262b70b4c151bd0700d6502c. Jul 11 07:43:11.159593 systemd[1]: Started cri-containerd-5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05.scope - libcontainer container 5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05. Jul 11 07:43:11.219946 systemd[1]: Started cri-containerd-8e5e03f88c91ab39a9c8de372d6c2a154b1808764be77b0a3ca5992497f30cc2.scope - libcontainer container 8e5e03f88c91ab39a9c8de372d6c2a154b1808764be77b0a3ca5992497f30cc2. Jul 11 07:43:11.399653 containerd[1563]: time="2025-07-11T07:43:11.399483045Z" level=info msg="StartContainer for \"5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05\" returns successfully" Jul 11 07:43:11.406457 containerd[1563]: time="2025-07-11T07:43:11.406212705Z" level=info msg="StartContainer for \"e61666cacb493492b7066cdf963c503e4ed9bf26262b70b4c151bd0700d6502c\" returns successfully" Jul 11 07:43:11.483709 containerd[1563]: time="2025-07-11T07:43:11.483617723Z" level=info msg="StartContainer for \"8e5e03f88c91ab39a9c8de372d6c2a154b1808764be77b0a3ca5992497f30cc2\" returns successfully" Jul 11 07:43:11.485666 containerd[1563]: time="2025-07-11T07:43:11.485623983Z" level=info msg="StartContainer for \"f8edca220fbd312551d45f1fdfd82a4cb810b1e84c0ed32b920d0392c0035b5f\" returns successfully" Jul 11 07:43:11.742849 kubelet[2804]: I0711 07:43:11.742523 2804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-667bcfd89f-qbsvk" podStartSLOduration=45.110877384 podStartE2EDuration="1m26.742473694s" podCreationTimestamp="2025-07-11 07:41:45 +0000 UTC" firstStartedPulling="2025-07-11 07:42:28.999434161 +0000 UTC m=+60.232363237" lastFinishedPulling="2025-07-11 07:43:10.631030471 +0000 UTC m=+101.863959547" observedRunningTime="2025-07-11 07:43:11.742236648 +0000 UTC m=+102.975165754" watchObservedRunningTime="2025-07-11 07:43:11.742473694 +0000 UTC m=+102.975402780" Jul 11 07:43:11.770943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2520017329.mount: Deactivated successfully. Jul 11 07:43:21.307319 systemd[1]: cri-containerd-5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05.scope: Deactivated successfully. 
Jul 11 07:43:21.316505 containerd[1563]: time="2025-07-11T07:43:21.316323205Z" level=info msg="received exit event container_id:\"5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05\" id:\"5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05\" pid:5047 exit_status:1 exited_at:{seconds:1752219801 nanos:314905381}" Jul 11 07:43:21.317945 containerd[1563]: time="2025-07-11T07:43:21.317603040Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05\" id:\"5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05\" pid:5047 exit_status:1 exited_at:{seconds:1752219801 nanos:314905381}" Jul 11 07:43:21.380426 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05-rootfs.mount: Deactivated successfully. Jul 11 07:43:21.493956 systemd[1]: cri-containerd-f8edca220fbd312551d45f1fdfd82a4cb810b1e84c0ed32b920d0392c0035b5f.scope: Deactivated successfully. Jul 11 07:43:21.500912 containerd[1563]: time="2025-07-11T07:43:21.500832312Z" level=info msg="received exit event container_id:\"f8edca220fbd312551d45f1fdfd82a4cb810b1e84c0ed32b920d0392c0035b5f\" id:\"f8edca220fbd312551d45f1fdfd82a4cb810b1e84c0ed32b920d0392c0035b5f\" pid:5009 exit_status:1 exited_at:{seconds:1752219801 nanos:499161712}" Jul 11 07:43:21.501129 containerd[1563]: time="2025-07-11T07:43:21.500924295Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8edca220fbd312551d45f1fdfd82a4cb810b1e84c0ed32b920d0392c0035b5f\" id:\"f8edca220fbd312551d45f1fdfd82a4cb810b1e84c0ed32b920d0392c0035b5f\" pid:5009 exit_status:1 exited_at:{seconds:1752219801 nanos:499161712}" Jul 11 07:43:21.530906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8edca220fbd312551d45f1fdfd82a4cb810b1e84c0ed32b920d0392c0035b5f-rootfs.mount: Deactivated successfully. 
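In the lines that follow, containerd's handling of these TaskExit events fails with "failed to delete task: context deadline exceeded" and is retried once etcd recovers. A minimal sketch of that context-bounded retry pattern, with a hypothetical deleteTask standing in for the real runtime call:

```go
// task_retry.go — a sketch of context-bounded work plus retry; deleteTask
// is a hypothetical stand-in, not containerd's API.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

func deleteTask(ctx context.Context) error {
	select {
	case <-time.After(50 * time.Millisecond): // simulate a runtime stalled under I/O pressure
		return nil
	case <-ctx.Done():
		return ctx.Err() // context.DeadlineExceeded, as in the log
	}
}

func main() {
	for attempt := 1; attempt <= 3; attempt++ {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
		err := deleteTask(ctx)
		cancel()
		fmt.Printf("attempt %d: %v\n", attempt, err)
		if err == nil || !errors.Is(err, context.DeadlineExceeded) {
			return
		}
	}
}
```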
Jul 11 07:43:25.494085 kubelet[2804]: E0711 07:43:25.492562 2804 controller.go:195] "Failed to update lease" err="etcdserver: request timed out" Jul 11 07:43:38.523226 kubelet[2804]: E0711 07:43:38.507463 2804 controller.go:195] "Failed to update lease" err="Put \"https://172.24.4.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4392-0-0-n-cdb6f4f5a9.novalocal?timeout=10s\": context deadline exceeded" Jul 11 07:43:38.523226 kubelet[2804]: E0711 07:43:24.706537 2804 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event="&Event{ObjectMeta:{calico-apiserver-667bcfd89f-qbsvk.1851229f51e0d1be calico-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-apiserver,Name:calico-apiserver-667bcfd89f-qbsvk,UID:4abaf656-f2e8-4404-bfd1-0657de6a798a,APIVersion:v1,ResourceVersion:816,FieldPath:spec.containers{calico-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.84.65:5443/readyz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:ci-4392-0-0-n-cdb6f4f5a9.novalocal,},FirstTimestamp:2025-07-11 07:43:17.692666302 +0000 UTC m=+108.925595468,LastTimestamp:2025-07-11 07:43:17.692666302 +0000 UTC m=+108.925595468,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4392-0-0-n-cdb6f4f5a9.novalocal,}" Jul 11 07:43:38.596125 containerd[1563]: time="2025-07-11T07:43:31.317514801Z" level=error msg="failed to handle container TaskExit event container_id:\"5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05\" id:\"5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05\" pid:5047 exit_status:1 exited_at:{seconds:1752219801 nanos:314905381}" error="failed to stop container: failed to delete task: context deadline exceeded" Jul 11 07:43:38.596125 containerd[1563]: time="2025-07-11T07:43:31.501569457Z" level=error msg="failed to handle container TaskExit event container_id:\"f8edca220fbd312551d45f1fdfd82a4cb810b1e84c0ed32b920d0392c0035b5f\" id:\"f8edca220fbd312551d45f1fdfd82a4cb810b1e84c0ed32b920d0392c0035b5f\" pid:5009 exit_status:1 exited_at:{seconds:1752219801 nanos:499161712}" error="failed to stop container: failed to delete task: context deadline exceeded" Jul 11 07:43:38.596125 containerd[1563]: time="2025-07-11T07:43:32.489514275Z" level=info msg="TaskExit event container_id:\"5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05\" id:\"5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05\" pid:5047 exit_status:1 exited_at:{seconds:1752219801 nanos:314905381}" Jul 11 07:43:38.596125 containerd[1563]: time="2025-07-11T07:43:34.490503065Z" level=error msg="get state for 5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05" error="context deadline exceeded" Jul 11 07:43:38.596125 containerd[1563]: time="2025-07-11T07:43:34.490632207Z" level=warning msg="unknown status" status=0 Jul 11 07:43:38.596125 containerd[1563]: time="2025-07-11T07:43:36.492215656Z" level=error msg="get state for 5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05" error="context deadline exceeded" Jul 11 07:43:38.596125 containerd[1563]: time="2025-07-11T07:43:36.492344678Z" level=warning msg="unknown status" status=0 Jul 11 07:43:38.625725 kubelet[2804]: I0711 07:43:38.529918 2804 status_manager.go:851] "Failed to get status for pod" podUID="1b42591ea292e73e5775e231f0503337" 
pod="kube-system/kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal" err="etcdserver: request timed out" Jul 11 07:43:38.625725 kubelet[2804]: E0711 07:43:38.583633 2804 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-4392-0-0-n-cdb6f4f5a9.novalocal\": the object has been modified; please apply your changes to the latest version and try again" Jul 11 07:43:39.111849 containerd[1563]: time="2025-07-11T07:43:39.111717303Z" level=error msg="ttrpc: received message on inactive stream" stream=33 Jul 11 07:43:39.113076 containerd[1563]: time="2025-07-11T07:43:39.113023647Z" level=error msg="ttrpc: received message on inactive stream" stream=37 Jul 11 07:43:39.113346 containerd[1563]: time="2025-07-11T07:43:39.113324692Z" level=error msg="ttrpc: received message on inactive stream" stream=35 Jul 11 07:43:39.113739 containerd[1563]: time="2025-07-11T07:43:39.112622644Z" level=error msg="ttrpc: received message on inactive stream" stream=33 Jul 11 07:43:39.116672 containerd[1563]: time="2025-07-11T07:43:39.116643757Z" level=info msg="Ensure that container 5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05 in task-service has been cleanup successfully" Jul 11 07:43:39.192487 containerd[1563]: time="2025-07-11T07:43:39.192057599Z" level=info msg="TaskExit event container_id:\"f8edca220fbd312551d45f1fdfd82a4cb810b1e84c0ed32b920d0392c0035b5f\" id:\"f8edca220fbd312551d45f1fdfd82a4cb810b1e84c0ed32b920d0392c0035b5f\" pid:5009 exit_status:1 exited_at:{seconds:1752219801 nanos:499161712}" Jul 11 07:43:39.208096 containerd[1563]: time="2025-07-11T07:43:39.208019264Z" level=info msg="Ensure that container f8edca220fbd312551d45f1fdfd82a4cb810b1e84c0ed32b920d0392c0035b5f in task-service has been cleanup successfully" Jul 11 07:43:39.418040 containerd[1563]: time="2025-07-11T07:43:39.417731903Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160\" id:\"4253af6ae8a704d8cf6ea31ef13e9dc8474d19654fe1c995afc3e79091acf489\" pid:5175 exited_at:{seconds:1752219819 nanos:415264328}" Jul 11 07:43:39.518999 kubelet[2804]: I0711 07:43:39.518892 2804 scope.go:117] "RemoveContainer" containerID="2f402fa2d9a8931e1bc9923a84e95e8a47a2d94e916768bcfe1bdbb0e9f89584" Jul 11 07:43:39.521059 kubelet[2804]: I0711 07:43:39.519952 2804 scope.go:117] "RemoveContainer" containerID="5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05" Jul 11 07:43:39.529428 kubelet[2804]: I0711 07:43:39.529385 2804 scope.go:117] "RemoveContainer" containerID="f8edca220fbd312551d45f1fdfd82a4cb810b1e84c0ed32b920d0392c0035b5f" Jul 11 07:43:39.531008 containerd[1563]: time="2025-07-11T07:43:39.530772760Z" level=info msg="RemoveContainer for \"2f402fa2d9a8931e1bc9923a84e95e8a47a2d94e916768bcfe1bdbb0e9f89584\"" Jul 11 07:43:39.534956 containerd[1563]: time="2025-07-11T07:43:39.534902317Z" level=info msg="CreateContainer within sandbox \"d4e63947bca21e2d084c4995faabd79384a595a67fcdfbf24c08f94329d27fb6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:2,}" Jul 11 07:43:39.552045 containerd[1563]: time="2025-07-11T07:43:39.551988435Z" level=info msg="CreateContainer within sandbox \"606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:1,}" Jul 11 07:43:44.128259 containerd[1563]: time="2025-07-11T07:43:44.128128162Z" level=info msg="RemoveContainer for 
\"2f402fa2d9a8931e1bc9923a84e95e8a47a2d94e916768bcfe1bdbb0e9f89584\" returns successfully" Jul 11 07:43:44.189515 containerd[1563]: time="2025-07-11T07:43:44.188425427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:43:44.227430 containerd[1563]: time="2025-07-11T07:43:44.227341208Z" level=info msg="Container 13b504d26b6f0270296a9f2fcabd207ed619038c7625423a1f210459d1470730: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:43:44.259495 containerd[1563]: time="2025-07-11T07:43:44.259380322Z" level=info msg="Container 1e9ebb770f62c4441198b6414b397c7b5dc58627845a2a1097b219068f377b0e: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:43:44.303572 containerd[1563]: time="2025-07-11T07:43:44.303483457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 11 07:43:44.375017 containerd[1563]: time="2025-07-11T07:43:44.374833955Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:43:44.417709 containerd[1563]: time="2025-07-11T07:43:44.417018928Z" level=info msg="CreateContainer within sandbox \"d4e63947bca21e2d084c4995faabd79384a595a67fcdfbf24c08f94329d27fb6\" for &ContainerMetadata{Name:tigera-operator,Attempt:2,} returns container id \"13b504d26b6f0270296a9f2fcabd207ed619038c7625423a1f210459d1470730\"" Jul 11 07:43:44.420401 containerd[1563]: time="2025-07-11T07:43:44.420309379Z" level=info msg="StartContainer for \"13b504d26b6f0270296a9f2fcabd207ed619038c7625423a1f210459d1470730\"" Jul 11 07:43:44.423965 containerd[1563]: time="2025-07-11T07:43:44.423759489Z" level=info msg="connecting to shim 13b504d26b6f0270296a9f2fcabd207ed619038c7625423a1f210459d1470730" address="unix:///run/containerd/s/1d2a678b6bec198581cc6411f0a23f0c64cd0b683f63b8789592857e68a53eb2" protocol=ttrpc version=3 Jul 11 07:43:44.426292 containerd[1563]: time="2025-07-11T07:43:44.426187751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:43:44.431998 containerd[1563]: time="2025-07-11T07:43:44.431406194Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 33.794734887s" Jul 11 07:43:44.432183 containerd[1563]: time="2025-07-11T07:43:44.432037198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 11 07:43:44.433343 containerd[1563]: time="2025-07-11T07:43:44.433279492Z" level=info msg="CreateContainer within sandbox \"606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:1,} returns container id \"1e9ebb770f62c4441198b6414b397c7b5dc58627845a2a1097b219068f377b0e\"" Jul 11 07:43:44.439424 containerd[1563]: time="2025-07-11T07:43:44.439363220Z" level=info msg="StartContainer for 
\"1e9ebb770f62c4441198b6414b397c7b5dc58627845a2a1097b219068f377b0e\"" Jul 11 07:43:44.444295 containerd[1563]: time="2025-07-11T07:43:44.443884693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 07:43:44.449961 containerd[1563]: time="2025-07-11T07:43:44.449872451Z" level=info msg="connecting to shim 1e9ebb770f62c4441198b6414b397c7b5dc58627845a2a1097b219068f377b0e" address="unix:///run/containerd/s/226195f79f9367007d54237241d1ac09520d9874fd77a6e8c2a910968757a3b0" protocol=ttrpc version=3 Jul 11 07:43:44.503833 containerd[1563]: time="2025-07-11T07:43:44.503756249Z" level=info msg="CreateContainer within sandbox \"90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 11 07:43:44.536823 systemd[1]: Started cri-containerd-1e9ebb770f62c4441198b6414b397c7b5dc58627845a2a1097b219068f377b0e.scope - libcontainer container 1e9ebb770f62c4441198b6414b397c7b5dc58627845a2a1097b219068f377b0e. Jul 11 07:43:44.588001 containerd[1563]: time="2025-07-11T07:43:44.587282309Z" level=info msg="Container b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:43:44.594480 systemd[1]: Started cri-containerd-13b504d26b6f0270296a9f2fcabd207ed619038c7625423a1f210459d1470730.scope - libcontainer container 13b504d26b6f0270296a9f2fcabd207ed619038c7625423a1f210459d1470730. Jul 11 07:43:44.667200 containerd[1563]: time="2025-07-11T07:43:44.667130880Z" level=info msg="CreateContainer within sandbox \"90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\"" Jul 11 07:43:44.668752 containerd[1563]: time="2025-07-11T07:43:44.668347705Z" level=info msg="StartContainer for \"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\"" Jul 11 07:43:44.674188 containerd[1563]: time="2025-07-11T07:43:44.673580385Z" level=info msg="connecting to shim b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba" address="unix:///run/containerd/s/eaef16410ebcc86d718923e6bfe3fb64b183e837f5ccaa4718fa04f05ef9a95b" protocol=ttrpc version=3 Jul 11 07:43:44.742411 systemd[1]: Started cri-containerd-b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba.scope - libcontainer container b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba. 
Jul 11 07:43:44.826955 containerd[1563]: time="2025-07-11T07:43:44.826713137Z" level=info msg="StartContainer for \"13b504d26b6f0270296a9f2fcabd207ed619038c7625423a1f210459d1470730\" returns successfully" Jul 11 07:43:44.874066 containerd[1563]: time="2025-07-11T07:43:44.873581446Z" level=info msg="StartContainer for \"1e9ebb770f62c4441198b6414b397c7b5dc58627845a2a1097b219068f377b0e\" returns successfully" Jul 11 07:43:45.043762 containerd[1563]: time="2025-07-11T07:43:45.043132298Z" level=info msg="StartContainer for \"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" returns successfully" Jul 11 07:43:45.221938 containerd[1563]: time="2025-07-11T07:43:45.221837031Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:43:45.240026 containerd[1563]: time="2025-07-11T07:43:45.239184306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 11 07:43:45.243832 containerd[1563]: time="2025-07-11T07:43:45.243201743Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 799.240376ms" Jul 11 07:43:45.243832 containerd[1563]: time="2025-07-11T07:43:45.243307341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 11 07:43:45.247605 containerd[1563]: time="2025-07-11T07:43:45.247302375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 11 07:43:45.258306 containerd[1563]: time="2025-07-11T07:43:45.258260189Z" level=info msg="CreateContainer within sandbox \"5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 07:43:45.348016 containerd[1563]: time="2025-07-11T07:43:45.347173546Z" level=info msg="Container 7102e6a40318afca94def5b43390705d1bcf4296f790604bfce6348ffbf49714: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:43:45.358751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount155051792.mount: Deactivated successfully. 
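The second apiserver "pull" above returns in under a second because the image is already present: only 77 bytes are read (an ImageUpdate/manifest check) versus roughly 47 MB the first time. Comparing the two durations from the log:

```go
// cached_pull.go — both durations are taken from the log lines above.
package main

import (
	"fmt"
	"time"
)

func main() {
	first, _ := time.ParseDuration("41.625054202s") // initial pull, ~47 MB read
	second, _ := time.ParseDuration("799.240376ms") // repeat pull, 77 bytes read
	fmt.Printf("cached pull is ≈%.0fx faster\n", first.Seconds()/second.Seconds()) // ≈52x
}
```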
Jul 11 07:43:45.746441 containerd[1563]: time="2025-07-11T07:43:45.746264017Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"25b18269d34a08fbd05256d44d5d8197bfbf7bcfa099ad8d90d764c6f4f98a84\" pid:5308 exit_status:1 exited_at:{seconds:1752219825 nanos:745383423}" Jul 11 07:43:46.757611 containerd[1563]: time="2025-07-11T07:43:46.757502585Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"e6f2b76f1e193c9094062ce19907482525159dfe55e7ce393d92fdcd5bb19d42\" pid:5331 exit_status:1 exited_at:{seconds:1752219826 nanos:755949027}" Jul 11 07:43:52.758757 kubelet[2804]: E0711 07:43:52.758433 2804 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event=< Jul 11 07:43:52.758757 kubelet[2804]: &Event{ObjectMeta:{calico-kube-controllers-8644849955-pzffc.185122a5da361257 calico-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:calico-kube-controllers-8644849955-pzffc,UID:40383d9a-5fd3-45e2-be69-48ac62030be0,APIVersion:v1,ResourceVersion:815,FieldPath:spec.containers{calico-kube-controllers},},Reason:Unhealthy,Message:Readiness probe failed: initialized to false; initialized to false Jul 11 07:43:52.758757 kubelet[2804]: ,Source:EventSource{Component:kubelet,Host:ci-4392-0-0-n-cdb6f4f5a9.novalocal,},FirstTimestamp:2025-07-11 07:43:45.749758551 +0000 UTC m=+136.982687637,LastTimestamp:2025-07-11 07:43:45.749758551 +0000 UTC m=+136.982687637,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4392-0-0-n-cdb6f4f5a9.novalocal,} Jul 11 07:43:52.758757 kubelet[2804]: > Jul 11 07:43:53.380202 systemd[1]: cri-containerd-13b504d26b6f0270296a9f2fcabd207ed619038c7625423a1f210459d1470730.scope: Deactivated successfully. Jul 11 07:43:56.273884 containerd[1563]: time="2025-07-11T07:43:56.242583402Z" level=info msg="received exit event container_id:\"13b504d26b6f0270296a9f2fcabd207ed619038c7625423a1f210459d1470730\" id:\"13b504d26b6f0270296a9f2fcabd207ed619038c7625423a1f210459d1470730\" pid:5215 exit_status:1 exited_at:{seconds:1752219833 nanos:389137813}" Jul 11 07:43:56.275791 kubelet[2804]: E0711 07:43:56.267703 2804 controller.go:195] "Failed to update lease" err="etcdserver: request timed out" Jul 11 07:43:56.283357 containerd[1563]: time="2025-07-11T07:43:56.281295932Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13b504d26b6f0270296a9f2fcabd207ed619038c7625423a1f210459d1470730\" id:\"13b504d26b6f0270296a9f2fcabd207ed619038c7625423a1f210459d1470730\" pid:5215 exit_status:1 exited_at:{seconds:1752219833 nanos:389137813}" Jul 11 07:43:56.324386 kubelet[2804]: E0711 07:43:56.324306 2804 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.172s" Jul 11 07:43:56.339592 systemd[1]: cri-containerd-e61666cacb493492b7066cdf963c503e4ed9bf26262b70b4c151bd0700d6502c.scope: Deactivated successfully. Jul 11 07:43:56.342363 systemd[1]: cri-containerd-e61666cacb493492b7066cdf963c503e4ed9bf26262b70b4c151bd0700d6502c.scope: Consumed 1.965s CPU time, 49.8M memory peak, 1.3M read from disk. 
Jul 11 07:43:56.743560 containerd[1563]: time="2025-07-11T07:43:56.743434476Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e61666cacb493492b7066cdf963c503e4ed9bf26262b70b4c151bd0700d6502c\" id:\"e61666cacb493492b7066cdf963c503e4ed9bf26262b70b4c151bd0700d6502c\" pid:5036 exit_status:1 exited_at:{seconds:1752219836 nanos:350646768}" Jul 11 07:43:56.744753 containerd[1563]: time="2025-07-11T07:43:56.744234238Z" level=info msg="received exit event container_id:\"e61666cacb493492b7066cdf963c503e4ed9bf26262b70b4c151bd0700d6502c\" id:\"e61666cacb493492b7066cdf963c503e4ed9bf26262b70b4c151bd0700d6502c\" pid:5036 exit_status:1 exited_at:{seconds:1752219836 nanos:350646768}" Jul 11 07:43:56.788705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13b504d26b6f0270296a9f2fcabd207ed619038c7625423a1f210459d1470730-rootfs.mount: Deactivated successfully. Jul 11 07:43:57.144595 kubelet[2804]: E0711 07:43:57.144481 2804 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-4392-0-0-n-cdb6f4f5a9.novalocal\": the object has been modified; please apply your changes to the latest version and try again" Jul 11 07:43:57.197095 systemd[1]: cri-containerd-8e5e03f88c91ab39a9c8de372d6c2a154b1808764be77b0a3ca5992497f30cc2.scope: Deactivated successfully. Jul 11 07:43:57.199177 systemd[1]: cri-containerd-8e5e03f88c91ab39a9c8de372d6c2a154b1808764be77b0a3ca5992497f30cc2.scope: Consumed 2.268s CPU time, 17.3M memory peak, 1.4M read from disk. Jul 11 07:43:57.206664 containerd[1563]: time="2025-07-11T07:43:57.206515338Z" level=info msg="received exit event container_id:\"8e5e03f88c91ab39a9c8de372d6c2a154b1808764be77b0a3ca5992497f30cc2\" id:\"8e5e03f88c91ab39a9c8de372d6c2a154b1808764be77b0a3ca5992497f30cc2\" pid:5061 exit_status:1 exited_at:{seconds:1752219837 nanos:206102954}" Jul 11 07:43:57.210696 containerd[1563]: time="2025-07-11T07:43:57.209759051Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e5e03f88c91ab39a9c8de372d6c2a154b1808764be77b0a3ca5992497f30cc2\" id:\"8e5e03f88c91ab39a9c8de372d6c2a154b1808764be77b0a3ca5992497f30cc2\" pid:5061 exit_status:1 exited_at:{seconds:1752219837 nanos:206102954}" Jul 11 07:43:57.248261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e61666cacb493492b7066cdf963c503e4ed9bf26262b70b4c151bd0700d6502c-rootfs.mount: Deactivated successfully. Jul 11 07:43:57.295725 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e5e03f88c91ab39a9c8de372d6c2a154b1808764be77b0a3ca5992497f30cc2-rootfs.mount: Deactivated successfully. 
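The recurring "the object has been modified" lease errors are optimistic-concurrency conflicts: each update carries the resourceVersion the kubelet last read, and the apiserver rejects it if the lease has since moved on (here because delayed retries from the etcd timeouts raced one another). A toy model of that version check, not the apiserver's actual code:

```go
// lease_conflict.go — a toy model of the resourceVersion check behind
// "the object has been modified"; not the apiserver's actual code.
package main

import "fmt"

type lease struct{ resourceVersion int }

// update succeeds only if the caller still holds the latest version.
func update(l *lease, seenVersion int) error {
	if seenVersion != l.resourceVersion {
		return fmt.Errorf("the object has been modified; please apply your changes to the latest version and try again")
	}
	l.resourceVersion++
	return nil
}

func main() {
	l := &lease{resourceVersion: 815}
	seen := l.resourceVersion
	l.resourceVersion++          // a delayed retry lands first
	fmt.Println(update(l, seen)) // conflict, as in the kubelet errors above
}
```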
Jul 11 07:43:57.361432 containerd[1563]: time="2025-07-11T07:43:57.361165311Z" level=info msg="CreateContainer within sandbox \"5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7102e6a40318afca94def5b43390705d1bcf4296f790604bfce6348ffbf49714\"" Jul 11 07:43:57.364014 containerd[1563]: time="2025-07-11T07:43:57.363874449Z" level=info msg="StartContainer for \"7102e6a40318afca94def5b43390705d1bcf4296f790604bfce6348ffbf49714\"" Jul 11 07:43:57.366169 containerd[1563]: time="2025-07-11T07:43:57.366043934Z" level=info msg="connecting to shim 7102e6a40318afca94def5b43390705d1bcf4296f790604bfce6348ffbf49714" address="unix:///run/containerd/s/158e4d434add41bf5846dc122713d9c809826a17586c2142ea7bdc9bbb08c3b9" protocol=ttrpc version=3 Jul 11 07:43:57.401421 systemd[1]: Started cri-containerd-7102e6a40318afca94def5b43390705d1bcf4296f790604bfce6348ffbf49714.scope - libcontainer container 7102e6a40318afca94def5b43390705d1bcf4296f790604bfce6348ffbf49714. Jul 11 07:43:57.919780 containerd[1563]: time="2025-07-11T07:43:57.919667719Z" level=info msg="StartContainer for \"7102e6a40318afca94def5b43390705d1bcf4296f790604bfce6348ffbf49714\" returns successfully" Jul 11 07:43:57.959178 kubelet[2804]: I0711 07:43:57.959036 2804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8644849955-pzffc" podStartSLOduration=59.18881241 podStartE2EDuration="2m5.958962963s" podCreationTimestamp="2025-07-11 07:41:52 +0000 UTC" firstStartedPulling="2025-07-11 07:42:37.67328128 +0000 UTC m=+68.906210406" lastFinishedPulling="2025-07-11 07:43:44.443431853 +0000 UTC m=+135.676360959" observedRunningTime="2025-07-11 07:43:57.866953161 +0000 UTC m=+149.099882247" watchObservedRunningTime="2025-07-11 07:43:57.958962963 +0000 UTC m=+149.191892049" Jul 11 07:43:58.403272 kubelet[2804]: I0711 07:43:58.402835 2804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-667bcfd89f-2krz4" podStartSLOduration=67.813757033 podStartE2EDuration="2m13.402813827s" podCreationTimestamp="2025-07-11 07:41:45 +0000 UTC" firstStartedPulling="2025-07-11 07:42:39.65654631 +0000 UTC m=+70.889475436" lastFinishedPulling="2025-07-11 07:43:45.245603154 +0000 UTC m=+136.478532230" observedRunningTime="2025-07-11 07:43:58.402012892 +0000 UTC m=+149.634941989" watchObservedRunningTime="2025-07-11 07:43:58.402813827 +0000 UTC m=+149.635742903" Jul 11 07:43:59.326553 kubelet[2804]: I0711 07:43:59.325914 2804 scope.go:117] "RemoveContainer" containerID="60b6e9b17893f06d8c86eaee7503f9c1409178cffa66c32798cefca75a219d61" Jul 11 07:43:59.328816 kubelet[2804]: I0711 07:43:59.328765 2804 scope.go:117] "RemoveContainer" containerID="8e5e03f88c91ab39a9c8de372d6c2a154b1808764be77b0a3ca5992497f30cc2" Jul 11 07:43:59.333023 kubelet[2804]: E0711 07:43:59.331132 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(1b42591ea292e73e5775e231f0503337)\"" pod="kube-system/kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="1b42591ea292e73e5775e231f0503337" Jul 11 07:43:59.334892 containerd[1563]: time="2025-07-11T07:43:59.334443039Z" level=info msg="RemoveContainer for \"60b6e9b17893f06d8c86eaee7503f9c1409178cffa66c32798cefca75a219d61\"" Jul 11 07:43:59.353728 
kubelet[2804]: I0711 07:43:59.353200 2804 scope.go:117] "RemoveContainer" containerID="e61666cacb493492b7066cdf963c503e4ed9bf26262b70b4c151bd0700d6502c" Jul 11 07:43:59.353728 kubelet[2804]: E0711 07:43:59.353688 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(9d801a80cb49e408d2efc270d30c5fd8)\"" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="9d801a80cb49e408d2efc270d30c5fd8" Jul 11 07:43:59.364304 kubelet[2804]: I0711 07:43:59.362349 2804 scope.go:117] "RemoveContainer" containerID="13b504d26b6f0270296a9f2fcabd207ed619038c7625423a1f210459d1470730" Jul 11 07:43:59.364304 kubelet[2804]: E0711 07:43:59.364221 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=tigera-operator pod=tigera-operator-5bf8dfcb4-mplsp_tigera-operator(b888df97-3c70-41ba-a3f5-7ac75508eb3b)\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-mplsp" podUID="b888df97-3c70-41ba-a3f5-7ac75508eb3b" Jul 11 07:43:59.425776 containerd[1563]: time="2025-07-11T07:43:59.425707661Z" level=info msg="RemoveContainer for \"60b6e9b17893f06d8c86eaee7503f9c1409178cffa66c32798cefca75a219d61\" returns successfully" Jul 11 07:43:59.426635 kubelet[2804]: I0711 07:43:59.426589 2804 scope.go:117] "RemoveContainer" containerID="37d035f14b17150c4fe4356e65de8b7d83bea8dda00e3df8e3b4b6fe4d0d7420" Jul 11 07:43:59.430701 containerd[1563]: time="2025-07-11T07:43:59.430638261Z" level=info msg="RemoveContainer for \"37d035f14b17150c4fe4356e65de8b7d83bea8dda00e3df8e3b4b6fe4d0d7420\"" Jul 11 07:43:59.463449 containerd[1563]: time="2025-07-11T07:43:59.463347784Z" level=info msg="RemoveContainer for \"37d035f14b17150c4fe4356e65de8b7d83bea8dda00e3df8e3b4b6fe4d0d7420\" returns successfully" Jul 11 07:43:59.464048 kubelet[2804]: I0711 07:43:59.463830 2804 scope.go:117] "RemoveContainer" containerID="5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05" Jul 11 07:43:59.467938 containerd[1563]: time="2025-07-11T07:43:59.467868345Z" level=info msg="RemoveContainer for \"5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05\"" Jul 11 07:43:59.511313 containerd[1563]: time="2025-07-11T07:43:59.510802573Z" level=info msg="RemoveContainer for \"5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05\" returns successfully" Jul 11 07:44:00.240311 containerd[1563]: time="2025-07-11T07:44:00.240199068Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:44:00.247670 containerd[1563]: time="2025-07-11T07:44:00.247618442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 11 07:44:00.255476 containerd[1563]: time="2025-07-11T07:44:00.255422119Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:44:00.267019 containerd[1563]: time="2025-07-11T07:44:00.265870122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 11 07:44:00.269395 containerd[1563]: time="2025-07-11T07:44:00.269161734Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 15.021805879s" Jul 11 07:44:00.269395 containerd[1563]: time="2025-07-11T07:44:00.269250411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 11 07:44:00.274483 containerd[1563]: time="2025-07-11T07:44:00.273421645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 11 07:44:00.280053 containerd[1563]: time="2025-07-11T07:44:00.279386528Z" level=info msg="CreateContainer within sandbox \"0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 11 07:44:00.314158 containerd[1563]: time="2025-07-11T07:44:00.314082071Z" level=info msg="Container 8ed7bdaf9732e5b0111269c6537efa3f26a9d4809548db88156178aca64cbcb6: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:44:00.360048 containerd[1563]: time="2025-07-11T07:44:00.359993540Z" level=info msg="CreateContainer within sandbox \"0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8ed7bdaf9732e5b0111269c6537efa3f26a9d4809548db88156178aca64cbcb6\"" Jul 11 07:44:00.364116 containerd[1563]: time="2025-07-11T07:44:00.364025082Z" level=info msg="StartContainer for \"8ed7bdaf9732e5b0111269c6537efa3f26a9d4809548db88156178aca64cbcb6\"" Jul 11 07:44:00.368770 containerd[1563]: time="2025-07-11T07:44:00.368714619Z" level=info msg="connecting to shim 8ed7bdaf9732e5b0111269c6537efa3f26a9d4809548db88156178aca64cbcb6" address="unix:///run/containerd/s/baf61fd270750b47150684526242e57b11a93ac48c4ce748255853074fd9b55c" protocol=ttrpc version=3 Jul 11 07:44:00.384345 kubelet[2804]: I0711 07:44:00.383528 2804 scope.go:117] "RemoveContainer" containerID="8e5e03f88c91ab39a9c8de372d6c2a154b1808764be77b0a3ca5992497f30cc2" Jul 11 07:44:00.388401 kubelet[2804]: E0711 07:44:00.383749 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(1b42591ea292e73e5775e231f0503337)\"" pod="kube-system/kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="1b42591ea292e73e5775e231f0503337" Jul 11 07:44:00.407006 kubelet[2804]: I0711 07:44:00.406595 2804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 07:44:00.449310 systemd[1]: Started cri-containerd-8ed7bdaf9732e5b0111269c6537efa3f26a9d4809548db88156178aca64cbcb6.scope - libcontainer container 8ed7bdaf9732e5b0111269c6537efa3f26a9d4809548db88156178aca64cbcb6. 
Jul 11 07:44:00.588282 containerd[1563]: time="2025-07-11T07:44:00.588128944Z" level=info msg="StartContainer for \"8ed7bdaf9732e5b0111269c6537efa3f26a9d4809548db88156178aca64cbcb6\" returns successfully" Jul 11 07:44:01.420255 kubelet[2804]: I0711 07:44:01.420069 2804 scope.go:117] "RemoveContainer" containerID="8e5e03f88c91ab39a9c8de372d6c2a154b1808764be77b0a3ca5992497f30cc2" Jul 11 07:44:01.421704 kubelet[2804]: E0711 07:44:01.420617 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(1b42591ea292e73e5775e231f0503337)\"" pod="kube-system/kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="1b42591ea292e73e5775e231f0503337" Jul 11 07:44:02.188029 kubelet[2804]: I0711 07:44:02.187178 2804 scope.go:117] "RemoveContainer" containerID="e61666cacb493492b7066cdf963c503e4ed9bf26262b70b4c151bd0700d6502c" Jul 11 07:44:02.188029 kubelet[2804]: E0711 07:44:02.187646 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(9d801a80cb49e408d2efc270d30c5fd8)\"" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="9d801a80cb49e408d2efc270d30c5fd8" Jul 11 07:44:02.613076 kubelet[2804]: I0711 07:44:02.612430 2804 scope.go:117] "RemoveContainer" containerID="e61666cacb493492b7066cdf963c503e4ed9bf26262b70b4c151bd0700d6502c" Jul 11 07:44:02.613076 kubelet[2804]: E0711 07:44:02.612617 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(9d801a80cb49e408d2efc270d30c5fd8)\"" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="9d801a80cb49e408d2efc270d30c5fd8" Jul 11 07:44:04.424327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1153361229.mount: Deactivated successfully. 
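The CrashLoopBackOff messages above show the restart delay doubling (back-off 10s for kube-scheduler and kube-controller-manager, back-off 20s for tigera-operator). Kubelet's usual schedule starts at 10s and doubles to a 5-minute cap; the constants in this sketch are those commonly cited defaults, assumed rather than read from this host's configuration:

```go
// backoff.go — doubling restart delay with a cap; 10s and 5m are kubelet's
// commonly cited defaults, assumed here rather than read from this host.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for i := 0; i < 7; i++ {
		fmt.Println(delay) // 10s 20s 40s 1m20s 2m40s 5m0s 5m0s
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
}
```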
Jul 11 07:44:05.271461 containerd[1563]: time="2025-07-11T07:44:05.271369523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:44:05.276622 containerd[1563]: time="2025-07-11T07:44:05.276380153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 11 07:44:05.298482 containerd[1563]: time="2025-07-11T07:44:05.297528001Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:44:05.310244 containerd[1563]: time="2025-07-11T07:44:05.310164364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:44:05.312873 containerd[1563]: time="2025-07-11T07:44:05.312771780Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 5.039236872s" Jul 11 07:44:05.313074 containerd[1563]: time="2025-07-11T07:44:05.312894190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 11 07:44:05.317156 containerd[1563]: time="2025-07-11T07:44:05.317065114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 11 07:44:05.325011 containerd[1563]: time="2025-07-11T07:44:05.324826290Z" level=info msg="CreateContainer within sandbox \"11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 11 07:44:05.368019 containerd[1563]: time="2025-07-11T07:44:05.366314989Z" level=info msg="Container d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:44:05.423323 containerd[1563]: time="2025-07-11T07:44:05.423166051Z" level=info msg="CreateContainer within sandbox \"11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\"" Jul 11 07:44:05.427759 containerd[1563]: time="2025-07-11T07:44:05.427677823Z" level=info msg="StartContainer for \"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\"" Jul 11 07:44:05.436088 containerd[1563]: time="2025-07-11T07:44:05.435994714Z" level=info msg="connecting to shim d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77" address="unix:///run/containerd/s/0dc25196f6dd2385373e3c70d342cc649be638249bbe809c0574bd8325cdf101" protocol=ttrpc version=3 Jul 11 07:44:05.508345 systemd[1]: Started cri-containerd-d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77.scope - libcontainer container d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77. 
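Each "Pulled image" line pairs a byte size with the wall-clock pull time, so effective throughput falls out directly; the goldmane pull above moved 66352154 bytes in 5.039236872s. A quick sketch of that arithmetic, with both values copied from the log line:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values from the goldmane "Pulled image ... in 5.039236872s" line above.
	const sizeBytes = 66352154
	dur, err := time.ParseDuration("5.039236872s")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%.1f MiB/s\n", float64(sizeBytes)/dur.Seconds()/(1<<20)) // ~12.6 MiB/s
}
```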
Jul 11 07:44:05.601267 containerd[1563]: time="2025-07-11T07:44:05.601190065Z" level=info msg="StartContainer for \"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" returns successfully" Jul 11 07:44:06.586611 kubelet[2804]: I0711 07:44:06.586285 2804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-hczk7" podStartSLOduration=51.156782164 podStartE2EDuration="2m15.58624585s" podCreationTimestamp="2025-07-11 07:41:51 +0000 UTC" firstStartedPulling="2025-07-11 07:42:40.886906072 +0000 UTC m=+72.119835148" lastFinishedPulling="2025-07-11 07:44:05.316369728 +0000 UTC m=+156.549298834" observedRunningTime="2025-07-11 07:44:06.581071441 +0000 UTC m=+157.814000567" watchObservedRunningTime="2025-07-11 07:44:06.58624585 +0000 UTC m=+157.819174926" Jul 11 07:44:06.678468 containerd[1563]: time="2025-07-11T07:44:06.678372370Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"2d94dda8ede626b9068c9967f2c3149069f9e4edf71b3328bc883f784b84f2e8\" pid:5518 exit_status:1 exited_at:{seconds:1752219846 nanos:677335293}" Jul 11 07:44:07.781366 containerd[1563]: time="2025-07-11T07:44:07.781107851Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"f68556ae26e3aeab497b5baaeb92b02622080f36f5b386a78240025f2c0ceca7\" pid:5545 exit_status:1 exited_at:{seconds:1752219847 nanos:778894285}" Jul 11 07:44:07.833800 containerd[1563]: time="2025-07-11T07:44:07.833598112Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:44:07.835212 containerd[1563]: time="2025-07-11T07:44:07.835180824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 11 07:44:07.836954 containerd[1563]: time="2025-07-11T07:44:07.836903891Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:44:07.841121 containerd[1563]: time="2025-07-11T07:44:07.841038074Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:44:07.842920 containerd[1563]: time="2025-07-11T07:44:07.842748758Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.525611559s" Jul 11 07:44:07.842920 containerd[1563]: time="2025-07-11T07:44:07.842806135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 11 07:44:07.844374 containerd[1563]: time="2025-07-11T07:44:07.844192248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 11 07:44:07.849205 containerd[1563]: time="2025-07-11T07:44:07.848504757Z" level=info msg="CreateContainer within sandbox \"150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 11 07:44:07.871177 containerd[1563]: time="2025-07-11T07:44:07.869127989Z" level=info msg="Container cf6d695b65097d6028a33d59fe6799cf034050152ecc1b86fa301eba5dab308f: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:44:07.888601 containerd[1563]: time="2025-07-11T07:44:07.888463703Z" level=info msg="CreateContainer within sandbox \"150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cf6d695b65097d6028a33d59fe6799cf034050152ecc1b86fa301eba5dab308f\"" Jul 11 07:44:07.889572 containerd[1563]: time="2025-07-11T07:44:07.889426141Z" level=info msg="StartContainer for \"cf6d695b65097d6028a33d59fe6799cf034050152ecc1b86fa301eba5dab308f\"" Jul 11 07:44:07.894509 containerd[1563]: time="2025-07-11T07:44:07.893778725Z" level=info msg="connecting to shim cf6d695b65097d6028a33d59fe6799cf034050152ecc1b86fa301eba5dab308f" address="unix:///run/containerd/s/1c7e8f39a26c2aee8519ff6185f6c165f435a0ff20ad8deec31c688f4f98ed45" protocol=ttrpc version=3 Jul 11 07:44:07.970794 systemd[1]: Started cri-containerd-cf6d695b65097d6028a33d59fe6799cf034050152ecc1b86fa301eba5dab308f.scope - libcontainer container cf6d695b65097d6028a33d59fe6799cf034050152ecc1b86fa301eba5dab308f. Jul 11 07:44:08.228626 containerd[1563]: time="2025-07-11T07:44:08.228570893Z" level=info msg="StartContainer for \"cf6d695b65097d6028a33d59fe6799cf034050152ecc1b86fa301eba5dab308f\" returns successfully" Jul 11 07:44:08.705735 containerd[1563]: time="2025-07-11T07:44:08.705646517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"2c6d1e59386f657f1d8bfe69769941fe831be4af644c1eb52c0004a5e098f048\" pid:5596 exit_status:1 exited_at:{seconds:1752219848 nanos:704359190}" Jul 11 07:44:09.337160 containerd[1563]: time="2025-07-11T07:44:09.337063939Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160\" id:\"d167f03fef1ec6e2234c0671788a7adffc143ed0b088e92908797e2ecbbaf8e9\" pid:5622 exited_at:{seconds:1752219849 nanos:336132650}" Jul 11 07:44:11.822850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1314836161.mount: Deactivated successfully. 
Jul 11 07:44:19.529075 kubelet[2804]: I0711 07:44:15.152118 2804 scope.go:117] "RemoveContainer" containerID="8e5e03f88c91ab39a9c8de372d6c2a154b1808764be77b0a3ca5992497f30cc2" Jul 11 07:44:19.529075 kubelet[2804]: I0711 07:44:15.154232 2804 scope.go:117] "RemoveContainer" containerID="13b504d26b6f0270296a9f2fcabd207ed619038c7625423a1f210459d1470730" Jul 11 07:44:19.529075 kubelet[2804]: I0711 07:44:16.151411 2804 scope.go:117] "RemoveContainer" containerID="e61666cacb493492b7066cdf963c503e4ed9bf26262b70b4c151bd0700d6502c" Jul 11 07:44:19.530452 containerd[1563]: time="2025-07-11T07:44:19.467179415Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"a4b80d81ed13a30542b903481e86ab0d38e273f547c09430262f170fb6f5d91b\" pid:5659 exited_at:{seconds:1752219859 nanos:433468610}" Jul 11 07:44:19.530452 containerd[1563]: time="2025-07-11T07:44:19.467317100Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"e6bb0272b5c753bb6724a64934478edb8d55c345ad4368561e54c3c749205afd\" pid:5664 exit_status:137 exited_at:{seconds:1752219859 nanos:445366683}" Jul 11 07:44:19.596503 containerd[1563]: time="2025-07-11T07:44:19.596338053Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"11ff9ffeee45f78166440a96e2dfc07f04138cd0bef986d22228fb5af20ef6a9\" pid:5702 exited_at:{seconds:1752219859 nanos:517352973}" Jul 11 07:44:20.030852 containerd[1563]: time="2025-07-11T07:44:20.030694702Z" level=error msg="ExecSync for \"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" Jul 11 07:44:20.032580 kubelet[2804]: E0711 07:44:20.031798 2804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" containerID="d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77" cmd=["/health","-ready"] Jul 11 07:44:20.064387 containerd[1563]: time="2025-07-11T07:44:20.064250919Z" level=info msg="CreateContainer within sandbox \"79c49b98ecf271a131bb6bf12e4c7a943626dffb67cb25bbedca7b6b5a740ca5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}" Jul 11 07:44:20.133086 containerd[1563]: time="2025-07-11T07:44:20.132448229Z" level=info msg="CreateContainer within sandbox \"d4e63947bca21e2d084c4995faabd79384a595a67fcdfbf24c08f94329d27fb6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:3,}" Jul 11 07:44:20.248813 containerd[1563]: time="2025-07-11T07:44:20.248665915Z" level=info msg="CreateContainer within sandbox \"a4c473d87f8442f19bdae94bef811290169de76c1ad853d95101f86534125f68\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}" Jul 11 07:44:21.339062 containerd[1563]: time="2025-07-11T07:44:21.337842754Z" level=info msg="Container af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:44:21.364477 containerd[1563]: time="2025-07-11T07:44:21.364396501Z" level=info msg="Container 111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:44:21.374465 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2882037712.mount: Deactivated successfully. Jul 11 07:44:21.465335 containerd[1563]: time="2025-07-11T07:44:21.464268358Z" level=info msg="Container 50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:44:21.530543 containerd[1563]: time="2025-07-11T07:44:21.530477905Z" level=info msg="CreateContainer within sandbox \"79c49b98ecf271a131bb6bf12e4c7a943626dffb67cb25bbedca7b6b5a740ca5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd\"" Jul 11 07:44:21.532092 containerd[1563]: time="2025-07-11T07:44:21.532061739Z" level=info msg="StartContainer for \"af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd\"" Jul 11 07:44:21.543365 containerd[1563]: time="2025-07-11T07:44:21.543163948Z" level=info msg="connecting to shim af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd" address="unix:///run/containerd/s/ae96405862b11ec30ecf5d51a3df2c8e24b4f00f4b2ee133e08083c92e7d68c0" protocol=ttrpc version=3 Jul 11 07:44:21.554071 containerd[1563]: time="2025-07-11T07:44:21.553608983Z" level=info msg="CreateContainer within sandbox \"a4c473d87f8442f19bdae94bef811290169de76c1ad853d95101f86534125f68\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471\"" Jul 11 07:44:21.563356 containerd[1563]: time="2025-07-11T07:44:21.563299599Z" level=info msg="StartContainer for \"111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471\"" Jul 11 07:44:21.571008 containerd[1563]: time="2025-07-11T07:44:21.570059021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:44:21.580060 containerd[1563]: time="2025-07-11T07:44:21.579963107Z" level=info msg="connecting to shim 111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471" address="unix:///run/containerd/s/7fe16dd4310d91485b4c30a99a68643dca480a6dc08544d1724182e5168bb324" protocol=ttrpc version=3 Jul 11 07:44:21.586655 containerd[1563]: time="2025-07-11T07:44:21.586543716Z" level=info msg="CreateContainer within sandbox \"d4e63947bca21e2d084c4995faabd79384a595a67fcdfbf24c08f94329d27fb6\" for &ContainerMetadata{Name:tigera-operator,Attempt:3,} returns container id \"50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6\"" Jul 11 07:44:21.595777 containerd[1563]: time="2025-07-11T07:44:21.595597106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 11 07:44:21.596772 containerd[1563]: time="2025-07-11T07:44:21.596407433Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:44:21.597541 containerd[1563]: time="2025-07-11T07:44:21.597436060Z" level=info msg="StartContainer for \"50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6\"" Jul 11 07:44:21.607617 containerd[1563]: time="2025-07-11T07:44:21.607539671Z" level=info msg="connecting to shim 50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6" address="unix:///run/containerd/s/1d2a678b6bec198581cc6411f0a23f0c64cd0b683f63b8789592857e68a53eb2" protocol=ttrpc version=3 Jul 11 07:44:21.625297 systemd[1]: Started 
cri-containerd-af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd.scope - libcontainer container af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd. Jul 11 07:44:21.636915 containerd[1563]: time="2025-07-11T07:44:21.635354952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:44:21.648499 containerd[1563]: time="2025-07-11T07:44:21.648440312Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 13.804205273s" Jul 11 07:44:21.648653 containerd[1563]: time="2025-07-11T07:44:21.648501810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 11 07:44:21.655078 containerd[1563]: time="2025-07-11T07:44:21.654753287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 11 07:44:21.657362 systemd[1]: Started cri-containerd-111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471.scope - libcontainer container 111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471. Jul 11 07:44:21.661199 containerd[1563]: time="2025-07-11T07:44:21.661119632Z" level=info msg="CreateContainer within sandbox \"0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 11 07:44:21.693160 containerd[1563]: time="2025-07-11T07:44:21.692941265Z" level=info msg="Container c3a3fff024102c1a16f172a7e330ce73c6b87f88bf0a8c6b4efa5937b79b0c1b: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:44:21.706258 systemd[1]: Started cri-containerd-50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6.scope - libcontainer container 50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6. Jul 11 07:44:21.719890 containerd[1563]: time="2025-07-11T07:44:21.719789108Z" level=info msg="CreateContainer within sandbox \"0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c3a3fff024102c1a16f172a7e330ce73c6b87f88bf0a8c6b4efa5937b79b0c1b\"" Jul 11 07:44:21.720987 containerd[1563]: time="2025-07-11T07:44:21.720795231Z" level=info msg="StartContainer for \"c3a3fff024102c1a16f172a7e330ce73c6b87f88bf0a8c6b4efa5937b79b0c1b\"" Jul 11 07:44:21.724273 containerd[1563]: time="2025-07-11T07:44:21.724242176Z" level=info msg="connecting to shim c3a3fff024102c1a16f172a7e330ce73c6b87f88bf0a8c6b4efa5937b79b0c1b" address="unix:///run/containerd/s/baf61fd270750b47150684526242e57b11a93ac48c4ce748255853074fd9b55c" protocol=ttrpc version=3 Jul 11 07:44:21.786652 systemd[1]: Started cri-containerd-c3a3fff024102c1a16f172a7e330ce73c6b87f88bf0a8c6b4efa5937b79b0c1b.scope - libcontainer container c3a3fff024102c1a16f172a7e330ce73c6b87f88bf0a8c6b4efa5937b79b0c1b. 
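The "ExecSync ... timeout 5s exceeded: context deadline exceeded" failure further up (the goldmane /health -ready readiness probe) is a context deadline on an exec call. The same shape with os/exec, where "sleep 10" is a hypothetical stand-in for the slow in-container probe binary; the real call runs inside the container via the CRI, not on the host:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// 5s mirrors the probe timeout in the ExecSync error above.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	err := exec.CommandContext(ctx, "sleep", "10").Run() // stand-in for "/health -ready"
	if errors.Is(ctx.Err(), context.DeadlineExceeded) {
		fmt.Println("probe timed out:", err)
	}
}
```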
Jul 11 07:44:21.806863 containerd[1563]: time="2025-07-11T07:44:21.806726544Z" level=info msg="StartContainer for \"af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd\" returns successfully" Jul 11 07:44:21.839385 containerd[1563]: time="2025-07-11T07:44:21.839315501Z" level=info msg="StartContainer for \"111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471\" returns successfully" Jul 11 07:44:22.020154 containerd[1563]: time="2025-07-11T07:44:22.019475633Z" level=info msg="StartContainer for \"50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6\" returns successfully" Jul 11 07:44:22.020154 containerd[1563]: time="2025-07-11T07:44:22.019624650Z" level=info msg="StartContainer for \"c3a3fff024102c1a16f172a7e330ce73c6b87f88bf0a8c6b4efa5937b79b0c1b\" returns successfully" Jul 11 07:44:22.767019 kubelet[2804]: I0711 07:44:22.764465 2804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-b74959b8d-5c8n7" podStartSLOduration=14.913060735 podStartE2EDuration="1m55.764379101s" podCreationTimestamp="2025-07-11 07:42:27 +0000 UTC" firstStartedPulling="2025-07-11 07:42:40.802296503 +0000 UTC m=+72.035225579" lastFinishedPulling="2025-07-11 07:44:21.653614859 +0000 UTC m=+172.886543945" observedRunningTime="2025-07-11 07:44:22.739523123 +0000 UTC m=+173.972452199" watchObservedRunningTime="2025-07-11 07:44:22.764379101 +0000 UTC m=+173.997308197" Jul 11 07:44:24.457577 containerd[1563]: time="2025-07-11T07:44:24.457491336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:44:24.460616 containerd[1563]: time="2025-07-11T07:44:24.460565840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 11 07:44:24.461829 containerd[1563]: time="2025-07-11T07:44:24.461796463Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:44:24.468988 containerd[1563]: time="2025-07-11T07:44:24.468927116Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 07:44:24.469842 containerd[1563]: time="2025-07-11T07:44:24.469797968Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.814333735s" Jul 11 07:44:24.469842 containerd[1563]: time="2025-07-11T07:44:24.469839096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 11 07:44:24.475057 containerd[1563]: time="2025-07-11T07:44:24.474059752Z" level=info msg="CreateContainer within sandbox \"150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 11 07:44:24.493360 containerd[1563]: time="2025-07-11T07:44:24.493301300Z" level=info 
msg="Container 7b5578a88a2cc266e1868a33f863c9344c6627d244129729b2cd42bc659751f9: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:44:24.501525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1093168902.mount: Deactivated successfully. Jul 11 07:44:24.519910 containerd[1563]: time="2025-07-11T07:44:24.519833649Z" level=info msg="CreateContainer within sandbox \"150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7b5578a88a2cc266e1868a33f863c9344c6627d244129729b2cd42bc659751f9\"" Jul 11 07:44:24.522769 containerd[1563]: time="2025-07-11T07:44:24.521171377Z" level=info msg="StartContainer for \"7b5578a88a2cc266e1868a33f863c9344c6627d244129729b2cd42bc659751f9\"" Jul 11 07:44:24.526002 containerd[1563]: time="2025-07-11T07:44:24.525315404Z" level=info msg="connecting to shim 7b5578a88a2cc266e1868a33f863c9344c6627d244129729b2cd42bc659751f9" address="unix:///run/containerd/s/1c7e8f39a26c2aee8519ff6185f6c165f435a0ff20ad8deec31c688f4f98ed45" protocol=ttrpc version=3 Jul 11 07:44:24.573436 systemd[1]: Started cri-containerd-7b5578a88a2cc266e1868a33f863c9344c6627d244129729b2cd42bc659751f9.scope - libcontainer container 7b5578a88a2cc266e1868a33f863c9344c6627d244129729b2cd42bc659751f9. Jul 11 07:44:24.682015 containerd[1563]: time="2025-07-11T07:44:24.681844378Z" level=info msg="StartContainer for \"7b5578a88a2cc266e1868a33f863c9344c6627d244129729b2cd42bc659751f9\" returns successfully" Jul 11 07:44:24.787468 kubelet[2804]: I0711 07:44:24.785478 2804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vlrrv" podStartSLOduration=49.940159783 podStartE2EDuration="2m32.785454926s" podCreationTimestamp="2025-07-11 07:41:52 +0000 UTC" firstStartedPulling="2025-07-11 07:42:41.626785398 +0000 UTC m=+72.859714474" lastFinishedPulling="2025-07-11 07:44:24.472080521 +0000 UTC m=+175.705009617" observedRunningTime="2025-07-11 07:44:24.784314256 +0000 UTC m=+176.017243352" watchObservedRunningTime="2025-07-11 07:44:24.785454926 +0000 UTC m=+176.018384002" Jul 11 07:44:25.524330 kubelet[2804]: I0711 07:44:25.524232 2804 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 11 07:44:25.524330 kubelet[2804]: I0711 07:44:25.524331 2804 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 11 07:44:26.374376 containerd[1563]: time="2025-07-11T07:44:26.374290588Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"dc33d62d54dc7b67176056510b1b7eeaf151f3c73ebbafc2efb17699f3053cfc\" pid:5905 exited_at:{seconds:1752219866 nanos:373497797}" Jul 11 07:44:39.388556 containerd[1563]: time="2025-07-11T07:44:39.388484033Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160\" id:\"9fbb41b41161777d1ce6a3245bb53b7834948d9cfc9f7c2cfcb7e2e25879fa1c\" pid:5932 exited_at:{seconds:1752219879 nanos:387325838}" Jul 11 07:44:43.273548 containerd[1563]: time="2025-07-11T07:44:43.273178765Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" 
id:\"9aa377307c8e20b74cc7c2c7ad6fd2d1f8ddd33686993ef8aa907d3ac953b6a9\" pid:5973 exited_at:{seconds:1752219883 nanos:271912435}" Jul 11 07:44:43.362759 containerd[1563]: time="2025-07-11T07:44:43.362701373Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"fde822a17344b104291b049f36719f3c88a9e0d7108b685df305dc26b05a8330\" pid:5986 exited_at:{seconds:1752219883 nanos:361599598}" Jul 11 07:44:48.046369 systemd[1]: Started sshd@7-172.24.4.223:22-172.24.4.1:43596.service - OpenSSH per-connection server daemon (172.24.4.1:43596). Jul 11 07:44:49.744405 sshd[6008]: Accepted publickey for core from 172.24.4.1 port 43596 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY Jul 11 07:44:49.766597 sshd-session[6008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:44:49.809907 systemd-logind[1532]: New session 10 of user core. Jul 11 07:44:49.815168 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 11 07:44:50.575056 sshd[6011]: Connection closed by 172.24.4.1 port 43596 Jul 11 07:44:50.578138 sshd-session[6008]: pam_unix(sshd:session): session closed for user core Jul 11 07:44:50.583168 systemd[1]: sshd@7-172.24.4.223:22-172.24.4.1:43596.service: Deactivated successfully. Jul 11 07:44:50.588511 systemd[1]: session-10.scope: Deactivated successfully. Jul 11 07:44:50.596337 systemd-logind[1532]: Session 10 logged out. Waiting for processes to exit. Jul 11 07:44:50.598397 systemd-logind[1532]: Removed session 10. Jul 11 07:45:03.723243 systemd[1]: Started sshd@8-172.24.4.223:22-172.24.4.1:52232.service - OpenSSH per-connection server daemon (172.24.4.1:52232). Jul 11 07:45:05.714177 systemd[1]: cri-containerd-af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd.scope: Deactivated successfully. Jul 11 07:45:05.714778 systemd[1]: cri-containerd-af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd.scope: Consumed 2.221s CPU time, 56.9M memory peak, 4M read from disk. 
Jul 11 07:45:05.733039 systemd[1]: cri-containerd-50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6.scope: Deactivated successfully.
Jul 11 07:45:05.733485 systemd[1]: cri-containerd-50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6.scope: Consumed 2.748s CPU time, 91.8M memory peak, 8M read from disk.
Jul 11 07:45:10.705454 systemd[1]: cri-containerd-111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471.scope: Deactivated successfully.
Jul 11 07:45:10.706469 systemd[1]: cri-containerd-111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471.scope: Consumed 1.294s CPU time, 20M memory peak, 2M read from disk.
Jul 11 07:45:24.735923 containerd[1563]: time="2025-07-11T07:45:05.726310223Z" level=info msg="received exit event container_id:\"af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd\" id:\"af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd\" pid:5761 exit_status:1 exited_at:{seconds:1752219905 nanos:723777634}"
Jul 11 07:45:24.735923 containerd[1563]: time="2025-07-11T07:45:05.729932444Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd\" id:\"af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd\" pid:5761 exit_status:1 exited_at:{seconds:1752219905 nanos:723777634}"
Jul 11 07:45:24.735923 containerd[1563]: time="2025-07-11T07:45:05.746421016Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6\" id:\"50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6\" pid:5789 exit_status:1 exited_at:{seconds:1752219905 nanos:740330568}"
Jul 11 07:45:24.735923 containerd[1563]: time="2025-07-11T07:45:05.746816499Z" level=info msg="received exit event container_id:\"50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6\" id:\"50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6\" pid:5789 exit_status:1 exited_at:{seconds:1752219905 nanos:740330568}"
Jul 11 07:45:24.735923 containerd[1563]: time="2025-07-11T07:45:10.709723908Z" level=info msg="received exit event container_id:\"111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471\" id:\"111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471\" pid:5773 exit_status:1 exited_at:{seconds:1752219910 nanos:709060155}"
Jul 11 07:45:24.735923 containerd[1563]: time="2025-07-11T07:45:10.710400634Z" level=info msg="TaskExit event in podsandbox handler container_id:\"111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471\" id:\"111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471\" pid:5773 exit_status:1 exited_at:{seconds:1752219910 nanos:709060155}"
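The exited_at:{seconds:... nanos:...} payloads in the flushed events above are plain Unix epoch pairs; converting them shows the exits happened at 07:45:05 and 07:45:10, matching the systemd scope teardown stamps even though containerd only flushed the events at 07:45:24. A one-liner sketch of the conversion:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at values from the TaskExit events above.
	for _, e := range []struct{ sec, nsec int64 }{
		{1752219905, 723777634},
		{1752219910, 709060155},
	} {
		fmt.Println(time.Unix(e.sec, e.nsec).UTC())
	}
	// 2025-07-11 07:45:05.723777634 +0000 UTC
	// 2025-07-11 07:45:10.709060155 +0000 UTC
}
```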
Jul 11 07:45:24.787916 containerd[1563]: time="2025-07-11T07:45:24.787754841Z" level=error msg="failed to handle container TaskExit event container_id:\"af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd\" id:\"af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd\" pid:5761 exit_status:1 exited_at:{seconds:1752219905 nanos:723777634}" error="failed to stop container: failed to delete task: context deadline exceeded" Jul 11 07:45:24.788839 containerd[1563]: time="2025-07-11T07:45:24.788794203Z" level=error msg="failed to handle container TaskExit event container_id:\"50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6\" id:\"50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6\" pid:5789 exit_status:1 exited_at:{seconds:1752219905 nanos:740330568}" error="failed to stop container: failed to delete task: context deadline exceeded" Jul 11 07:45:24.789804 containerd[1563]: time="2025-07-11T07:45:24.789567702Z" level=error msg="failed to handle container TaskExit event container_id:\"111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471\" id:\"111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471\" pid:5773 exit_status:1 exited_at:{seconds:1752219910 nanos:709060155}" error="failed to stop container: failed to delete task: context deadline exceeded" Jul 11 07:45:24.812113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471-rootfs.mount: Deactivated successfully. Jul 11 07:45:24.812746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd-rootfs.mount: Deactivated successfully. Jul 11 07:45:24.823731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6-rootfs.mount: Deactivated successfully. 
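"failed to stop container: failed to delete task: context deadline exceeded" means the TaskExit handler's context expired before the task delete returned; the same exit events are re-emitted and handled successfully about a minute later, below. A minimal sketch of that failure mode, assuming only that the handler runs under a context.WithTimeout:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// handleTaskExit stands in for the event handler: it must finish the
// task deletion before ctx expires, or it surfaces ctx.Err().
func handleTaskExit(ctx context.Context, deleteTask func() error) error {
	done := make(chan error, 1)
	go func() { done <- deleteTask() }()
	select {
	case err := <-done:
		return err
	case <-ctx.Done():
		return fmt.Errorf("failed to delete task: %w", ctx.Err())
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()
	// A delete slower than the deadline reproduces the error text's shape.
	err := handleTaskExit(ctx, func() error { time.Sleep(time.Second); return nil })
	fmt.Println(err) // failed to delete task: context deadline exceeded
}
```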
Jul 11 07:45:25.079170 kubelet[2804]: E0711 07:45:25.079069 2804 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="25.673s"
Jul 11 07:45:25.111886 containerd[1563]: time="2025-07-11T07:45:25.111785098Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"8507f97d56a40cbc495e8a99392e649da9d609d0812317e7f2b8cc5a15286bf4\" pid:6124 exit_status:1 exited_at:{seconds:1752219925 nanos:111069992}"
Jul 11 07:45:25.194374 containerd[1563]: time="2025-07-11T07:45:25.194308197Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160\" id:\"9b650fead36978926c329d40039eaf8edc15b557909a0f2465be1df99029fdb5\" pid:6070 exited_at:{seconds:1752219925 nanos:192365200}"
Jul 11 07:45:25.224653 containerd[1563]: time="2025-07-11T07:45:25.224538484Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"8c313a8cbbff7cc152fdd8978061aa6bf3283363ebdf4610f60a4824ff123e4b\" pid:6103 exited_at:{seconds:1752219925 nanos:221613455}"
Jul 11 07:45:25.236212 containerd[1563]: time="2025-07-11T07:45:25.236136351Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"8e815c477f2f981ab27ad4136ecc9f9e608f6fa88f309890b7139ee46d29ecab\" pid:6126 exited_at:{seconds:1752219925 nanos:235506034}"
Jul 11 07:45:25.685421 containerd[1563]: time="2025-07-11T07:45:25.685188304Z" level=error msg="ttrpc: received message on inactive stream" stream=37
Jul 11 07:45:25.696134 sshd-session[6024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:45:25.712743 systemd-logind[1532]: New session 11 of user core.
Jul 11 07:45:25.721268 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 11 07:45:25.885366 containerd[1563]: time="2025-07-11T07:45:25.685416337Z" level=error msg="ttrpc: received message on inactive stream" stream=37
Jul 11 07:45:25.885366 containerd[1563]: time="2025-07-11T07:45:25.689169788Z" level=error msg="ttrpc: received message on inactive stream" stream=37
Jul 11 07:45:25.887593 sshd[6024]: Accepted publickey for core from 172.24.4.1 port 52232 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY
Jul 11 07:45:26.294785 containerd[1563]: time="2025-07-11T07:45:26.294690838Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"774c3dc8bbc95f68236f216a7d0fb2a2cefb1487fb0f12a8c30ecd8cea1e4b0f\" pid:6172 exit_status:1 exited_at:{seconds:1752219926 nanos:294205467}"
Jul 11 07:45:26.490230 containerd[1563]: time="2025-07-11T07:45:26.490133413Z" level=info msg="TaskExit event container_id:\"af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd\" id:\"af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd\" pid:5761 exit_status:1 exited_at:{seconds:1752219905 nanos:723777634}"
Jul 11 07:45:26.813124 sshd-session[6024]: pam_unix(sshd:session): session closed for user core
Jul 11 07:45:26.814031 sshd[6150]: Connection closed by 172.24.4.1 port 52232
Jul 11 07:45:26.821016 systemd[1]: sshd@8-172.24.4.223:22-172.24.4.1:52232.service: Deactivated successfully.
Jul 11 07:45:26.824315 systemd[1]: session-11.scope: Deactivated successfully.
Jul 11 07:45:26.829927 systemd-logind[1532]: Session 11 logged out. Waiting for processes to exit. Jul 11 07:45:26.832489 systemd-logind[1532]: Removed session 11. Jul 11 07:45:26.859338 containerd[1563]: time="2025-07-11T07:45:26.859098220Z" level=info msg="TaskExit event container_id:\"50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6\" id:\"50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6\" pid:5789 exit_status:1 exited_at:{seconds:1752219905 nanos:740330568}" Jul 11 07:45:27.347602 containerd[1563]: time="2025-07-11T07:45:27.345068759Z" level=info msg="TaskExit event container_id:\"111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471\" id:\"111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471\" pid:5773 exit_status:1 exited_at:{seconds:1752219910 nanos:709060155}" Jul 11 07:45:27.349150 kubelet[2804]: I0711 07:45:27.347557 2804 scope.go:117] "RemoveContainer" containerID="e61666cacb493492b7066cdf963c503e4ed9bf26262b70b4c151bd0700d6502c" Jul 11 07:45:27.358711 kubelet[2804]: I0711 07:45:27.356794 2804 scope.go:117] "RemoveContainer" containerID="af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd" Jul 11 07:45:27.377939 containerd[1563]: time="2025-07-11T07:45:27.377725036Z" level=info msg="RemoveContainer for \"e61666cacb493492b7066cdf963c503e4ed9bf26262b70b4c151bd0700d6502c\"" Jul 11 07:45:27.604343 containerd[1563]: time="2025-07-11T07:45:27.602291075Z" level=info msg="CreateContainer within sandbox \"79c49b98ecf271a131bb6bf12e4c7a943626dffb67cb25bbedca7b6b5a740ca5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:3,}" Jul 11 07:45:27.606591 containerd[1563]: time="2025-07-11T07:45:27.606487806Z" level=info msg="RemoveContainer for \"e61666cacb493492b7066cdf963c503e4ed9bf26262b70b4c151bd0700d6502c\" returns successfully" Jul 11 07:45:27.715084 containerd[1563]: time="2025-07-11T07:45:27.713012987Z" level=info msg="Container 8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:45:27.840224 containerd[1563]: time="2025-07-11T07:45:27.840124541Z" level=info msg="CreateContainer within sandbox \"79c49b98ecf271a131bb6bf12e4c7a943626dffb67cb25bbedca7b6b5a740ca5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:3,} returns container id \"8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073\"" Jul 11 07:45:27.842595 containerd[1563]: time="2025-07-11T07:45:27.842545844Z" level=info msg="StartContainer for \"8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073\"" Jul 11 07:45:27.846291 containerd[1563]: time="2025-07-11T07:45:27.846238219Z" level=info msg="connecting to shim 8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073" address="unix:///run/containerd/s/ae96405862b11ec30ecf5d51a3df2c8e24b4f00f4b2ee133e08083c92e7d68c0" protocol=ttrpc version=3 Jul 11 07:45:27.880192 systemd[1]: Started cri-containerd-8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073.scope - libcontainer container 8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073. 
Jul 11 07:45:28.049766 containerd[1563]: time="2025-07-11T07:45:28.049690225Z" level=info msg="StartContainer for \"8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073\" returns successfully" Jul 11 07:45:28.364290 kubelet[2804]: I0711 07:45:28.363184 2804 scope.go:117] "RemoveContainer" containerID="13b504d26b6f0270296a9f2fcabd207ed619038c7625423a1f210459d1470730" Jul 11 07:45:28.366581 kubelet[2804]: I0711 07:45:28.366529 2804 scope.go:117] "RemoveContainer" containerID="50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6" Jul 11 07:45:28.372113 kubelet[2804]: E0711 07:45:28.371131 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=tigera-operator pod=tigera-operator-5bf8dfcb4-mplsp_tigera-operator(b888df97-3c70-41ba-a3f5-7ac75508eb3b)\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-mplsp" podUID="b888df97-3c70-41ba-a3f5-7ac75508eb3b" Jul 11 07:45:28.406296 containerd[1563]: time="2025-07-11T07:45:28.406187142Z" level=info msg="RemoveContainer for \"13b504d26b6f0270296a9f2fcabd207ed619038c7625423a1f210459d1470730\"" Jul 11 07:45:28.418170 kubelet[2804]: I0711 07:45:28.418126 2804 scope.go:117] "RemoveContainer" containerID="111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471" Jul 11 07:45:28.419053 kubelet[2804]: E0711 07:45:28.418334 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(1b42591ea292e73e5775e231f0503337)\"" pod="kube-system/kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="1b42591ea292e73e5775e231f0503337" Jul 11 07:45:28.454317 containerd[1563]: time="2025-07-11T07:45:28.454131170Z" level=info msg="RemoveContainer for \"13b504d26b6f0270296a9f2fcabd207ed619038c7625423a1f210459d1470730\" returns successfully" Jul 11 07:45:28.455098 kubelet[2804]: I0711 07:45:28.454914 2804 scope.go:117] "RemoveContainer" containerID="8e5e03f88c91ab39a9c8de372d6c2a154b1808764be77b0a3ca5992497f30cc2" Jul 11 07:45:28.460333 containerd[1563]: time="2025-07-11T07:45:28.460283008Z" level=info msg="RemoveContainer for \"8e5e03f88c91ab39a9c8de372d6c2a154b1808764be77b0a3ca5992497f30cc2\"" Jul 11 07:45:28.602822 containerd[1563]: time="2025-07-11T07:45:28.602716125Z" level=info msg="RemoveContainer for \"8e5e03f88c91ab39a9c8de372d6c2a154b1808764be77b0a3ca5992497f30cc2\" returns successfully" Jul 11 07:45:29.670087 kubelet[2804]: I0711 07:45:29.669011 2804 scope.go:117] "RemoveContainer" containerID="111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471" Jul 11 07:45:29.670087 kubelet[2804]: E0711 07:45:29.669554 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(1b42591ea292e73e5775e231f0503337)\"" pod="kube-system/kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="1b42591ea292e73e5775e231f0503337" Jul 11 07:45:31.228128 kubelet[2804]: I0711 07:45:31.227721 2804 scope.go:117] "RemoveContainer" containerID="111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471" Jul 11 07:45:31.234223 containerd[1563]: time="2025-07-11T07:45:31.234006665Z" level=info msg="CreateContainer within sandbox 
\"a4c473d87f8442f19bdae94bef811290169de76c1ad853d95101f86534125f68\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:3,}" Jul 11 07:45:31.919123 systemd[1]: Started sshd@9-172.24.4.223:22-172.24.4.1:53908.service - OpenSSH per-connection server daemon (172.24.4.1:53908). Jul 11 07:45:32.860885 containerd[1563]: time="2025-07-11T07:45:32.859553550Z" level=info msg="Container 64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:45:33.049040 containerd[1563]: time="2025-07-11T07:45:33.048891706Z" level=info msg="CreateContainer within sandbox \"a4c473d87f8442f19bdae94bef811290169de76c1ad853d95101f86534125f68\" for &ContainerMetadata{Name:kube-scheduler,Attempt:3,} returns container id \"64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2\"" Jul 11 07:45:33.057181 containerd[1563]: time="2025-07-11T07:45:33.054949701Z" level=info msg="StartContainer for \"64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2\"" Jul 11 07:45:33.062351 containerd[1563]: time="2025-07-11T07:45:33.062232528Z" level=info msg="connecting to shim 64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2" address="unix:///run/containerd/s/7fe16dd4310d91485b4c30a99a68643dca480a6dc08544d1724182e5168bb324" protocol=ttrpc version=3 Jul 11 07:45:33.155299 systemd[1]: Started cri-containerd-64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2.scope - libcontainer container 64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2. Jul 11 07:45:33.338697 containerd[1563]: time="2025-07-11T07:45:33.338609065Z" level=info msg="StartContainer for \"64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2\" returns successfully" Jul 11 07:45:33.739196 sshd[6258]: Accepted publickey for core from 172.24.4.1 port 53908 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY Jul 11 07:45:33.742955 sshd-session[6258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:45:33.761086 systemd-logind[1532]: New session 12 of user core. Jul 11 07:45:33.764170 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 11 07:45:34.521503 sshd[6296]: Connection closed by 172.24.4.1 port 53908 Jul 11 07:45:34.523545 sshd-session[6258]: pam_unix(sshd:session): session closed for user core Jul 11 07:45:34.537017 systemd[1]: sshd@9-172.24.4.223:22-172.24.4.1:53908.service: Deactivated successfully. Jul 11 07:45:34.552465 systemd[1]: session-12.scope: Deactivated successfully. Jul 11 07:45:34.557084 systemd-logind[1532]: Session 12 logged out. Waiting for processes to exit. Jul 11 07:45:34.559532 systemd-logind[1532]: Removed session 12. Jul 11 07:45:39.345866 containerd[1563]: time="2025-07-11T07:45:39.344897102Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160\" id:\"01efe5ee1809f7696f9ff5f12b15eadfba9221b8ce59598283697c826820c0b5\" pid:6324 exited_at:{seconds:1752219939 nanos:341721636}" Jul 11 07:45:39.544917 systemd[1]: Started sshd@10-172.24.4.223:22-172.24.4.1:46332.service - OpenSSH per-connection server daemon (172.24.4.1:46332). Jul 11 07:45:40.822125 sshd[6335]: Accepted publickey for core from 172.24.4.1 port 46332 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY Jul 11 07:45:40.824588 sshd-session[6335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:45:40.838093 systemd-logind[1532]: New session 13 of user core. 
Jul 11 07:45:40.849404 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 11 07:45:43.207410 kubelet[2804]: I0711 07:45:43.160128 2804 scope.go:117] "RemoveContainer" containerID="50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6"
Jul 11 07:45:43.207410 kubelet[2804]: E0711 07:45:43.160516 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=tigera-operator pod=tigera-operator-5bf8dfcb4-mplsp_tigera-operator(b888df97-3c70-41ba-a3f5-7ac75508eb3b)\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-mplsp" podUID="b888df97-3c70-41ba-a3f5-7ac75508eb3b"
Jul 11 07:45:44.259151 containerd[1563]: time="2025-07-11T07:45:44.259094520Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"b4953cc76f10d4602f3228ba3f8cfc2fee41d1e48d43de4749c89b91e27546cd\" pid:6360 exited_at:{seconds:1752219944 nanos:214747710}"
Jul 11 07:45:50.691437 containerd[1563]: time="2025-07-11T07:45:44.729368517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"e92b662ea366beef5ce84e74aee3b48cb40d86ebd0d5109468e13718b153603c\" pid:6369 exited_at:{seconds:1752219944 nanos:727413855}"
Jul 11 07:45:55.154411 kubelet[2804]: I0711 07:45:55.154121 2804 scope.go:117] "RemoveContainer" containerID="50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6"
Jul 11 07:45:55.160322 containerd[1563]: time="2025-07-11T07:45:55.160148330Z" level=info msg="CreateContainer within sandbox \"d4e63947bca21e2d084c4995faabd79384a595a67fcdfbf24c08f94329d27fb6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:4,}"
Jul 11 07:45:57.987146 kubelet[2804]: E0711 07:45:57.986606 2804 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal.18512298bf05de37 kube-system 1360 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal,UID:94dd2fdae141e91cb071209277979747,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4392-0-0-n-cdb6f4f5a9.novalocal,},FirstTimestamp:2025-07-11 07:42:49 +0000 UTC,LastTimestamp:2025-07-11 07:45:50.864315297 +0000 UTC m=+262.097244373,Count:18,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4392-0-0-n-cdb6f4f5a9.novalocal,}"
Jul 11 07:45:58.227543 systemd[1]: Started sshd@11-172.24.4.223:22-172.24.4.1:39448.service - OpenSSH per-connection server daemon (172.24.4.1:39448).
Jul 11 07:45:58.346143 sshd-session[6335]: pam_unix(sshd:session): session closed for user core
Jul 11 07:45:58.364263 systemd[1]: sshd@10-172.24.4.223:22-172.24.4.1:46332.service: Deactivated successfully.
Jul 11 07:45:58.372215 systemd[1]: session-13.scope: Deactivated successfully.
Jul 11 07:45:58.375647 systemd[1]: session-13.scope: Consumed 3.636s CPU time, 15.1M memory peak.
Jul 11 07:45:58.380710 systemd-logind[1532]: Session 13 logged out. Waiting for processes to exit.
Jul 11 07:45:58.386836 systemd-logind[1532]: Removed session 13.
Jul 11 07:45:58.486255 sshd[6338]: Connection closed by 172.24.4.1 port 46332
Jul 11 07:45:58.486560 kubelet[2804]: I0711 07:45:57.989137 2804 status_manager.go:875] "Failed to update status for pod" pod="kube-system/kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfd148fd-d14a-4b38-a365-012b2396d789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-07-11T07:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-07-11T07:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"containerd://14963706cc7c5d7ec8ffc5cc6a78be725808167c9818a86b00e45691b93faf95\\\",\\\"image\\\":\\\"registry.k8s.io/kube-apiserver:v1.31.10\\\",\\\"imageID\\\":\\\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-07-11T07:41:21Z\\\"}}}]}}\" for pod \"kube-system\"/\"kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal\": etcdserver: request timed out"
Jul 11 07:45:58.870867 containerd[1563]: time="2025-07-11T07:45:58.870600824Z" level=info msg="Container 042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec: CDI devices from CRI Config.CDIDevices: []"
Jul 11 07:45:59.079323 containerd[1563]: time="2025-07-11T07:45:59.078517957Z" level=info msg="CreateContainer within sandbox \"d4e63947bca21e2d084c4995faabd79384a595a67fcdfbf24c08f94329d27fb6\" for &ContainerMetadata{Name:tigera-operator,Attempt:4,} returns container id \"042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec\""
Jul 11 07:45:59.084046 containerd[1563]: time="2025-07-11T07:45:59.083632611Z" level=info msg="StartContainer for \"042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec\""
Jul 11 07:45:59.097386 containerd[1563]: time="2025-07-11T07:45:59.097300334Z" level=info msg="connecting to shim 042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec" address="unix:///run/containerd/s/1d2a678b6bec198581cc6411f0a23f0c64cd0b683f63b8789592857e68a53eb2" protocol=ttrpc version=3
Jul 11 07:45:59.183432 systemd[1]: Started cri-containerd-042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec.scope - libcontainer container 042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec.
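The rejected Unhealthy event above carries Count:18 with separate First/LastTimestamp fields: kubelet collapses repeats of the same probe failure into one Event object and bumps the counter instead of posting eighteen objects. A toy sketch of that bookkeeping (field names mirror the blob above; the 10s spacing between repeats is invented, and no client-go is involved):

```go
package main

import (
	"fmt"
	"time"
)

// event keeps only the fields visible in the rejected Event above.
type event struct {
	Reason, Message string
	Count           int
	First, Last     time.Time
}

// record either creates an event or bumps Count and the last-seen
// timestamp, which is how one object ends up with Count:18.
func record(seen map[string]*event, reason, msg string, at time.Time) {
	key := reason + "/" + msg
	if e, ok := seen[key]; ok {
		e.Count++
		e.Last = at
		return
	}
	seen[key] = &event{Reason: reason, Message: msg, Count: 1, First: at, Last: at}
}

func main() {
	seen := map[string]*event{}
	first := time.Date(2025, 7, 11, 7, 42, 49, 0, time.UTC) // FirstTimestamp above
	for i := 0; i < 18; i++ {
		record(seen, "Unhealthy",
			"Readiness probe failed: HTTP probe failed with statuscode: 500",
			first.Add(time.Duration(i)*10*time.Second))
	}
	for _, e := range seen {
		fmt.Println(e.Reason, e.Count, e.First.Format(time.RFC3339), e.Last.Format(time.RFC3339))
	}
}
```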
Jul 11 07:45:59.263876 containerd[1563]: time="2025-07-11T07:45:59.263807604Z" level=info msg="StartContainer for \"042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec\" returns successfully"
Jul 11 07:45:59.936050 sshd[6414]: Accepted publickey for core from 172.24.4.1 port 39448 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY
Jul 11 07:45:59.940327 sshd-session[6414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:45:59.951056 systemd-logind[1532]: New session 14 of user core.
Jul 11 07:45:59.957292 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 11 07:46:02.773576 sshd[6451]: Connection closed by 172.24.4.1 port 39448
Jul 11 07:46:02.775403 sshd-session[6414]: pam_unix(sshd:session): session closed for user core
Jul 11 07:46:02.786556 systemd[1]: sshd@11-172.24.4.223:22-172.24.4.1:39448.service: Deactivated successfully.
Jul 11 07:46:02.798485 systemd[1]: session-14.scope: Deactivated successfully.
Jul 11 07:46:02.805036 systemd-logind[1532]: Session 14 logged out. Waiting for processes to exit.
Jul 11 07:46:02.810190 systemd-logind[1532]: Removed session 14.
Jul 11 07:46:08.589956 systemd[1]: Started sshd@12-172.24.4.223:22-172.24.4.1:38854.service - OpenSSH per-connection server daemon (172.24.4.1:38854).
Jul 11 07:46:09.442112 containerd[1563]: time="2025-07-11T07:46:09.442006834Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160\" id:\"a8a5c9b329dbef8944854bbb9c1440ea10391550a497466d6509f132c6cd1234\" pid:6486 exited_at:{seconds:1752219969 nanos:440658455}"
Jul 11 07:46:13.220338 containerd[1563]: time="2025-07-11T07:46:13.220239101Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"eb10c05ce005941a65d41e4483e46b8ec2cb03f9a9c477279941d57de649d901\" pid:6514 exited_at:{seconds:1752219973 nanos:219803619}"
Jul 11 07:46:13.292662 containerd[1563]: time="2025-07-11T07:46:13.292576390Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"9ce95fa9e6bab34afc8dcdc8aa9d1470214463ad15bea6e04bd46d112ce85154\" pid:6530 exited_at:{seconds:1752219973 nanos:291521325}"
Jul 11 07:46:14.877466 containerd[1563]: time="2025-07-11T07:46:14.877399421Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"56b351c85f1021f99c692fe1ab4bf34b69041a7cb69fbb99824c4cb3da9b676d\" pid:6556 exited_at:{seconds:1752219974 nanos:876752760}"
Jul 11 07:46:32.550684 containerd[1563]: time="2025-07-11T07:46:20.823935613Z" level=warning msg="container event discarded" container=a4c473d87f8442f19bdae94bef811290169de76c1ad853d95101f86534125f68 type=CONTAINER_CREATED_EVENT
Jul 11 07:46:32.550684 containerd[1563]: time="2025-07-11T07:46:20.835396555Z" level=warning msg="container event discarded" container=a4c473d87f8442f19bdae94bef811290169de76c1ad853d95101f86534125f68 type=CONTAINER_STARTED_EVENT
Jul 11 07:46:32.550684 containerd[1563]: time="2025-07-11T07:46:20.835461818Z" level=warning msg="container event discarded" container=79c49b98ecf271a131bb6bf12e4c7a943626dffb67cb25bbedca7b6b5a740ca5 type=CONTAINER_CREATED_EVENT
Jul 11 07:46:32.550684 containerd[1563]: time="2025-07-11T07:46:20.835486946Z" level=warning msg="container event discarded" container=79c49b98ecf271a131bb6bf12e4c7a943626dffb67cb25bbedca7b6b5a740ca5 type=CONTAINER_STARTED_EVENT
Jul 11 07:46:32.550684 containerd[1563]: time="2025-07-11T07:46:20.835513536Z" level=warning msg="container event discarded" container=47ef3cbdf150744ea59841c2e7711cebede6c21430cbff2b6e0881f5c989f4e7 type=CONTAINER_CREATED_EVENT
Jul 11 07:46:32.550684 containerd[1563]: time="2025-07-11T07:46:20.835536108Z" level=warning msg="container event discarded" container=47ef3cbdf150744ea59841c2e7711cebede6c21430cbff2b6e0881f5c989f4e7 type=CONTAINER_STARTED_EVENT
Jul 11 07:46:32.550684 containerd[1563]: time="2025-07-11T07:46:20.918287030Z" level=warning msg="container event discarded" container=14963706cc7c5d7ec8ffc5cc6a78be725808167c9818a86b00e45691b93faf95 type=CONTAINER_CREATED_EVENT
Jul 11 07:46:32.550684 containerd[1563]: time="2025-07-11T07:46:20.918375046Z" level=warning msg="container event discarded" container=37d035f14b17150c4fe4356e65de8b7d83bea8dda00e3df8e3b4b6fe4d0d7420 type=CONTAINER_CREATED_EVENT
Jul 11 07:46:32.550684 containerd[1563]: time="2025-07-11T07:46:20.935615676Z" level=warning msg="container event discarded" container=60b6e9b17893f06d8c86eaee7503f9c1409178cffa66c32798cefca75a219d61 type=CONTAINER_CREATED_EVENT
Jul 11 07:46:32.550684 containerd[1563]: time="2025-07-11T07:46:21.117653045Z" level=warning msg="container event discarded" container=37d035f14b17150c4fe4356e65de8b7d83bea8dda00e3df8e3b4b6fe4d0d7420 type=CONTAINER_STARTED_EVENT
Jul 11 07:46:32.550684 containerd[1563]: time="2025-07-11T07:46:21.117722567Z" level=warning msg="container event discarded" container=14963706cc7c5d7ec8ffc5cc6a78be725808167c9818a86b00e45691b93faf95 type=CONTAINER_STARTED_EVENT
Jul 11 07:46:32.550684 containerd[1563]: time="2025-07-11T07:46:21.151171364Z" level=warning msg="container event discarded" container=60b6e9b17893f06d8c86eaee7503f9c1409178cffa66c32798cefca75a219d61 type=CONTAINER_STARTED_EVENT
Jul 11 07:46:32.550684 containerd[1563]: time="2025-07-11T07:46:24.466626772Z" level=info msg="received exit event container_id:\"8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073\" id:\"8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073\" pid:6236 exit_status:1 exited_at:{seconds:1752219984 nanos:465639307}"
Jul 11 07:46:32.550684 containerd[1563]: time="2025-07-11T07:46:24.467888105Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073\" id:\"8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073\" pid:6236 exit_status:1 exited_at:{seconds:1752219984 nanos:465639307}"
Jul 11 07:46:32.550684 containerd[1563]: time="2025-07-11T07:46:26.353254392Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"02dd81d6da443a8556199caec6e3dbc03e6bd75d06a20a7f6adc638b21adf01a\" pid:6591 exited_at:{seconds:1752219986 nanos:349587355}"
Jul 11 07:46:32.550684 containerd[1563]: time="2025-07-11T07:46:29.471697491Z" level=info msg="received exit event container_id:\"64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2\" id:\"64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2\" pid:6273 exit_status:1 exited_at:{seconds:1752219989 nanos:470211134}"
Jul 11 07:46:32.550684 containerd[1563]: time="2025-07-11T07:46:29.472402332Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2\" id:\"64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2\" pid:6273 exit_status:1 exited_at:{seconds:1752219989 nanos:470211134}"
Jul 11 07:46:24.461680 systemd[1]: cri-containerd-8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073.scope: Deactivated successfully.
Jul 11 07:46:32.553354 kubelet[2804]: E0711 07:46:19.036155 2804 controller.go:195] "Failed to update lease" err="etcdserver: request timed out"
Jul 11 07:46:32.553354 kubelet[2804]: E0711 07:46:28.782267 2804 controller.go:195] "Failed to update lease" err="etcdserver: request timed out"
Jul 11 07:46:24.468012 systemd[1]: cri-containerd-8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073.scope: Consumed 2.811s CPU time, 49.6M memory peak, 492K read from disk.
Jul 11 07:46:24.560099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073-rootfs.mount: Deactivated successfully.
Jul 11 07:46:29.459560 systemd[1]: cri-containerd-64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2.scope: Deactivated successfully.
Jul 11 07:46:29.460247 systemd[1]: cri-containerd-64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2.scope: Consumed 1.855s CPU time, 17.5M memory peak, 476K read from disk.
Jul 11 07:46:32.611612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2-rootfs.mount: Deactivated successfully.
Jul 11 07:46:33.438885 kubelet[2804]: I0711 07:46:33.438637 2804 status_manager.go:851] "Failed to get status for pod" podUID="94dd2fdae141e91cb071209277979747" pod="kube-system/kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal" err="etcdserver: request timed out"
Jul 11 07:46:34.364159 containerd[1563]: time="2025-07-11T07:46:34.363787230Z" level=warning msg="container event discarded" container=c1c0a9fe6a87578bbc06bc2e3830e4fda4a79449930b32dd5f4be2b7e5e6909f type=CONTAINER_CREATED_EVENT
Jul 11 07:46:34.364768 containerd[1563]: time="2025-07-11T07:46:34.364143903Z" level=warning msg="container event discarded" container=c1c0a9fe6a87578bbc06bc2e3830e4fda4a79449930b32dd5f4be2b7e5e6909f type=CONTAINER_STARTED_EVENT
Jul 11 07:46:34.423900 containerd[1563]: time="2025-07-11T07:46:34.423663385Z" level=warning msg="container event discarded" container=49ec2db2242e77ca47263cef205322186d77146e43bc24321ff23f04f1c660a3 type=CONTAINER_CREATED_EVENT
Jul 11 07:46:34.468507 containerd[1563]: time="2025-07-11T07:46:34.468380091Z" level=error msg="failed to handle container TaskExit event container_id:\"8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073\" id:\"8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073\" pid:6236 exit_status:1 exited_at:{seconds:1752219984 nanos:465639307}" error="failed to stop container: failed to delete task: context deadline exceeded"
Jul 11 07:46:34.487092 containerd[1563]: time="2025-07-11T07:46:34.486865541Z" level=warning msg="container event discarded" container=d4e63947bca21e2d084c4995faabd79384a595a67fcdfbf24c08f94329d27fb6 type=CONTAINER_CREATED_EVENT
Jul 11 07:46:34.487092 containerd[1563]: time="2025-07-11T07:46:34.487070288Z" level=warning msg="container event discarded" container=d4e63947bca21e2d084c4995faabd79384a595a67fcdfbf24c08f94329d27fb6 type=CONTAINER_STARTED_EVENT
Jul 11 07:46:34.533579 containerd[1563]: time="2025-07-11T07:46:34.533412161Z" level=warning msg="container event discarded" container=49ec2db2242e77ca47263cef205322186d77146e43bc24321ff23f04f1c660a3 type=CONTAINER_STARTED_EVENT
Jul 11 07:46:35.489578 containerd[1563]: time="2025-07-11T07:46:35.489390625Z" level=info msg="TaskExit event container_id:\"8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073\" id:\"8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073\" pid:6236 exit_status:1 exited_at:{seconds:1752219984 nanos:465639307}"
Jul 11 07:46:35.795617 kubelet[2804]: E0711 07:46:35.795396 2804 controller.go:195] "Failed to update lease" err="etcdserver: request timed out"
Jul 11 07:46:37.008802 containerd[1563]: time="2025-07-11T07:46:37.008574705Z" level=error msg="ttrpc: received message on inactive stream" stream=41
Jul 11 07:46:37.089178 containerd[1563]: time="2025-07-11T07:46:37.088956497Z" level=info msg="Ensure that container 8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073 in task-service has been cleanup successfully"
Jul 11 07:46:37.189161 kubelet[2804]: E0711 07:46:37.188559 2804 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-4392-0-0-n-cdb6f4f5a9.novalocal\": the object has been modified; please apply your changes to the latest version and try again"
Jul 11 07:46:38.027316 containerd[1563]: time="2025-07-11T07:46:38.027162564Z" level=warning msg="container event discarded" container=2f402fa2d9a8931e1bc9923a84e95e8a47a2d94e916768bcfe1bdbb0e9f89584 type=CONTAINER_CREATED_EVENT
Jul 11 07:46:38.148037 containerd[1563]: time="2025-07-11T07:46:38.147807977Z" level=warning msg="container event discarded" container=2f402fa2d9a8931e1bc9923a84e95e8a47a2d94e916768bcfe1bdbb0e9f89584 type=CONTAINER_STARTED_EVENT
Jul 11 07:46:39.184054 kubelet[2804]: E0711 07:46:39.183803 2804 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.033s"
Jul 11 07:46:39.190767 kubelet[2804]: I0711 07:46:39.190648 2804 scope.go:117] "RemoveContainer" containerID="af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd"
Jul 11 07:46:39.211777 containerd[1563]: time="2025-07-11T07:46:39.211581119Z" level=info msg="RemoveContainer for \"af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd\""
Jul 11 07:46:39.214871 kubelet[2804]: I0711 07:46:39.214748 2804 scope.go:117] "RemoveContainer" containerID="111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471"
Jul 11 07:46:39.216300 kubelet[2804]: I0711 07:46:39.216111 2804 scope.go:117] "RemoveContainer" containerID="64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2"
Jul 11 07:46:39.217527 kubelet[2804]: E0711 07:46:39.217412 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(1b42591ea292e73e5775e231f0503337)\"" pod="kube-system/kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="1b42591ea292e73e5775e231f0503337"
Jul 11 07:46:39.234468 containerd[1563]: time="2025-07-11T07:46:39.233953169Z" level=info msg="RemoveContainer for \"111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471\""
Jul 11 07:46:39.252872 kubelet[2804]: I0711 07:46:39.252777 2804 scope.go:117] "RemoveContainer" containerID="8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073"
Jul 11 07:46:39.253349 kubelet[2804]: E0711 07:46:39.253284 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(9d801a80cb49e408d2efc270d30c5fd8)\"" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="9d801a80cb49e408d2efc270d30c5fd8"
Jul 11 07:46:39.408635 containerd[1563]: time="2025-07-11T07:46:39.408277730Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160\" id:\"dcc660fdf055afa03b1375924f1c201528e748dd9164a30933148197d6253beb\" pid:6631 exited_at:{seconds:1752219999 nanos:406689251}"
Jul 11 07:46:40.274034 kubelet[2804]: I0711 07:46:40.273815 2804 scope.go:117] "RemoveContainer" containerID="64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2"
Jul 11 07:46:40.274913 kubelet[2804]: E0711 07:46:40.274822 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(1b42591ea292e73e5775e231f0503337)\"" pod="kube-system/kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="1b42591ea292e73e5775e231f0503337"
Jul 11 07:46:41.290635 kubelet[2804]: I0711 07:46:41.290299 2804 scope.go:117] "RemoveContainer" containerID="64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2"
Jul 11 07:46:41.291524 kubelet[2804]: E0711 07:46:41.291433 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(1b42591ea292e73e5775e231f0503337)\"" pod="kube-system/kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="1b42591ea292e73e5775e231f0503337"
Jul 11 07:46:42.209770 kubelet[2804]: I0711 07:46:42.207416 2804 scope.go:117] "RemoveContainer" containerID="8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073"
Jul 11 07:46:42.209770 kubelet[2804]: E0711 07:46:42.207660 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(9d801a80cb49e408d2efc270d30c5fd8)\"" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="9d801a80cb49e408d2efc270d30c5fd8"
Jul 11 07:46:59.823951 containerd[1563]: time="2025-07-11T07:46:52.490475828Z" level=warning msg="container event discarded" container=1a3435e26b90f4efd9d16bbb5c5e3d9c1aa866b41a1e5b22affa258f3dead7f9 type=CONTAINER_CREATED_EVENT
Jul 11 07:46:59.823951 containerd[1563]: time="2025-07-11T07:46:52.490605973Z" level=warning msg="container event discarded" container=1a3435e26b90f4efd9d16bbb5c5e3d9c1aa866b41a1e5b22affa258f3dead7f9 type=CONTAINER_STARTED_EVENT
Jul 11 07:46:59.823951 containerd[1563]: time="2025-07-11T07:46:52.490619619Z" level=warning msg="container event discarded" container=42732e38ea0463427df7cc0bcf0d6bcffdbec384aee7b52463fe2fe8d1b29396 type=CONTAINER_CREATED_EVENT
Jul 11 07:46:59.823951 containerd[1563]: time="2025-07-11T07:46:52.490632263Z" level=warning msg="container event discarded" container=42732e38ea0463427df7cc0bcf0d6bcffdbec384aee7b52463fe2fe8d1b29396 type=CONTAINER_STARTED_EVENT
Jul 11 07:46:59.823951 containerd[1563]: time="2025-07-11T07:46:52.804265108Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"e2899a7925cd10a409aa2fe58c069d2ea7260d5f7d4b7855567b832cfda58fad\" pid:6672 exit_status:1 exited_at:{seconds:1752220012 nanos:613936023}"
Jul 11 07:46:59.823951 containerd[1563]: time="2025-07-11T07:46:53.554760333Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"e1cc6e659257b10904351bda26e13ea8b4ebd82d74d9ca6b720108ea9daac769\" pid:6661 exited_at:{seconds:1752220013 nanos:502574178}"
Jul 11 07:46:59.823951 containerd[1563]: time="2025-07-11T07:46:56.991733821Z" level=warning msg="container event discarded" container=e7b9f086c8a7068f87135d829e88d3e76c3a032a2350c024d1a6e0416064a505 type=CONTAINER_CREATED_EVENT
Jul 11 07:46:59.823951 containerd[1563]: time="2025-07-11T07:46:57.185652592Z" level=warning msg="container event discarded" container=e7b9f086c8a7068f87135d829e88d3e76c3a032a2350c024d1a6e0416064a505 type=CONTAINER_STARTED_EVENT
Jul 11 07:46:59.823951 containerd[1563]: time="2025-07-11T07:46:59.797014331Z" level=warning msg="container event discarded" container=fcd58410a8d1f113839997a3c42f722e1fe531cac8626b9663d03d40d8e88ff9 type=CONTAINER_CREATED_EVENT
Jul 11 07:46:59.812708 sshd-session[6472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:46:59.827345 sshd[6472]: Accepted publickey for core from 172.24.4.1 port 38854 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY
Jul 11 07:46:59.827567 kubelet[2804]: I0711 07:46:42.756890 2804 scope.go:117] "RemoveContainer" containerID="8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073"
Jul 11 07:46:59.827567 kubelet[2804]: E0711 07:46:42.757155 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(9d801a80cb49e408d2efc270d30c5fd8)\"" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="9d801a80cb49e408d2efc270d30c5fd8"
Jul 11 07:46:59.827397 systemd-logind[1532]: New session 15 of user core.
Jul 11 07:46:59.828385 kubelet[2804]: I0711 07:46:44.433536 2804 status_manager.go:875] "Failed to update status for pod" pod="kube-system/kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfd148fd-d14a-4b38-a365-012b2396d789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-07-11T07:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-07-11T07:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"containerd://14963706cc7c5d7ec8ffc5cc6a78be725808167c9818a86b00e45691b93faf95\\\",\\\"image\\\":\\\"registry.k8s.io/kube-apiserver:v1.31.10\\\",\\\"imageID\\\":\\\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-07-11T07:41:21Z\\\"}}}]}}\" for pod \"kube-system\"/\"kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal\": etcdserver: request timed out"
Jul 11 07:46:59.828666 kubelet[2804]: E0711 07:46:44.435298 2804 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event="&Event{ObjectMeta:{kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal.1851229d736ed3c1 kube-system 1297 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal,UID:1b42591ea292e73e5775e231f0503337,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/healthz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:ci-4392-0-0-n-cdb6f4f5a9.novalocal,},FirstTimestamp:2025-07-11 07:43:09 +0000 UTC,LastTimestamp:2025-07-11 07:46:29.667374685 +0000 UTC m=+300.900303841,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4392-0-0-n-cdb6f4f5a9.novalocal,}"
Jul 11 07:46:59.828666 kubelet[2804]: E0711 07:46:47.684258 2804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4392-0-0-n-cdb6f4f5a9.novalocal?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
Jul 11 07:46:59.828666 kubelet[2804]: E0711 07:46:47.789999 2804 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.436s"
Jul 11 07:46:59.828666 kubelet[2804]: E0711 07:46:56.615728 2804 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.381s"
Jul 11 07:46:59.828666 kubelet[2804]: I0711 07:46:56.616747 2804 scope.go:117] "RemoveContainer" containerID="64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2"
Jul 11 07:46:59.828897 kubelet[2804]: E0711 07:46:56.617169 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(1b42591ea292e73e5775e231f0503337)\"" pod="kube-system/kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="1b42591ea292e73e5775e231f0503337"
Jul 11 07:46:59.828897 kubelet[2804]: E0711 07:46:57.919807 2804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4392-0-0-n-cdb6f4f5a9.novalocal?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
Jul 11 07:46:59.828897 kubelet[2804]: I0711 07:46:59.807960 2804 scope.go:117] "RemoveContainer" containerID="8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073"
Jul 11 07:46:59.828897 kubelet[2804]: E0711 07:46:59.808562 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(9d801a80cb49e408d2efc270d30c5fd8)\"" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="9d801a80cb49e408d2efc270d30c5fd8"
Jul 11 07:46:59.832229 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 11 07:47:00.180715 containerd[1563]: time="2025-07-11T07:47:00.180451000Z" level=warning msg="container event discarded" container=fcd58410a8d1f113839997a3c42f722e1fe531cac8626b9663d03d40d8e88ff9 type=CONTAINER_STARTED_EVENT
Jul 11 07:47:00.268546 containerd[1563]: time="2025-07-11T07:47:00.268344802Z" level=info msg="RemoveContainer for \"111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471\" returns successfully"
Jul 11 07:47:00.269392 kubelet[2804]: I0711 07:47:00.269199 2804 scope.go:117] "RemoveContainer" containerID="af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd"
Jul 11 07:47:00.275352 containerd[1563]: time="2025-07-11T07:47:00.275297177Z" level=info msg="RemoveContainer for \"af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd\""
Jul 11 07:47:00.275931 containerd[1563]: time="2025-07-11T07:47:00.275501933Z" level=warning msg="get container info failed" error="container \"af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd\" in namespace \"k8s.io\": not found"
Jul 11 07:47:00.275931 containerd[1563]: time="2025-07-11T07:47:00.275850842Z" level=info msg="RemoveContainer for \"af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd\" returns successfully"
Jul 11 07:47:00.298145 containerd[1563]: time="2025-07-11T07:47:00.297820054Z" level=info msg="RemoveContainer for \"af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd\" returns successfully"
Jul 11 07:47:00.300571 kubelet[2804]: I0711 07:47:00.300346 2804 scope.go:117] "RemoveContainer" containerID="111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471"
Jul 11 07:47:00.301339 containerd[1563]: time="2025-07-11T07:47:00.301161517Z" level=error msg="ContainerStatus for \"111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471\": not found"
Jul 11 07:47:00.301665 kubelet[2804]: E0711 07:47:00.301616 2804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471\": not found" containerID="111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471"
Jul 11 07:47:00.301754 kubelet[2804]: E0711 07:47:00.301690 2804 kuberuntime_gc.go:150] "Failed to remove container" err="failed to get container status \"111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471\": rpc error: code = NotFound desc = an error occurred when try to find container \"111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471\": not found" containerID="111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471"
Jul 11 07:47:00.819509 sshd[6684]: Connection closed by 172.24.4.1 port 38854
Jul 11 07:47:00.823965 sshd-session[6472]: pam_unix(sshd:session): session closed for user core
Jul 11 07:47:00.858270 systemd[1]: sshd@12-172.24.4.223:22-172.24.4.1:38854.service: Deactivated successfully.
Jul 11 07:47:00.867210 systemd[1]: session-15.scope: Deactivated successfully.
Jul 11 07:47:00.871148 systemd-logind[1532]: Session 15 logged out. Waiting for processes to exit.
Jul 11 07:47:00.876261 systemd[1]: Started sshd@13-172.24.4.223:22-172.24.4.1:50266.service - OpenSSH per-connection server daemon (172.24.4.1:50266).
Jul 11 07:47:00.879768 systemd-logind[1532]: Removed session 15.
Jul 11 07:47:01.281120 containerd[1563]: time="2025-07-11T07:47:01.280689483Z" level=warning msg="container event discarded" container=fcd58410a8d1f113839997a3c42f722e1fe531cac8626b9663d03d40d8e88ff9 type=CONTAINER_STOPPED_EVENT
Jul 11 07:47:08.809583 kubelet[2804]: I0711 07:47:08.595932 2804 scope.go:117] "RemoveContainer" containerID="64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2"
Jul 11 07:47:08.809583 kubelet[2804]: E0711 07:47:08.596249 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(1b42591ea292e73e5775e231f0503337)\"" pod="kube-system/kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="1b42591ea292e73e5775e231f0503337"
Jul 11 07:47:20.607225 kubelet[2804]: I0711 07:47:12.522142 2804 scope.go:117] "RemoveContainer" containerID="8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073"
Jul 11 07:47:20.607225 kubelet[2804]: E0711 07:47:16.094940 2804 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event="&Event{ObjectMeta:{kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal.185122a903a59c9b kube-system 1442 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal,UID:1b42591ea292e73e5775e231f0503337,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:BackOff,Message:Back-off restarting failed container kube-scheduler in pod kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(1b42591ea292e73e5775e231f0503337),Source:EventSource{Component:kubelet,Host:ci-4392-0-0-n-cdb6f4f5a9.novalocal,},FirstTimestamp:2025-07-11 07:43:59 +0000 UTC,LastTimestamp:2025-07-11 07:47:08.596198989 +0000 UTC m=+339.829128095,Count:10,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4392-0-0-n-cdb6f4f5a9.novalocal,}"
Jul 11 07:47:20.607225 kubelet[2804]: E0711 07:47:18.738444 2804 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.587s"
Jul 11 07:47:20.607225 kubelet[2804]: E0711 07:47:18.757489 2804 controller.go:195] "Failed to update lease" err="etcdserver: request timed out"
Jul 11 07:47:20.608390 containerd[1563]: time="2025-07-11T07:47:09.063522928Z" level=warning msg="container event discarded" container=86f8ab1ac8e2225e3b67db054c6cca3044aa714bd441c09d74031f9fc1d71b07 type=CONTAINER_CREATED_EVENT
Jul 11 07:47:20.608390 containerd[1563]: time="2025-07-11T07:47:09.238058102Z" level=warning msg="container event discarded" container=86f8ab1ac8e2225e3b67db054c6cca3044aa714bd441c09d74031f9fc1d71b07 type=CONTAINER_STARTED_EVENT
Jul 11 07:47:20.608390 containerd[1563]: time="2025-07-11T07:47:12.945481074Z" level=warning msg="container event discarded" container=86f8ab1ac8e2225e3b67db054c6cca3044aa714bd441c09d74031f9fc1d71b07 type=CONTAINER_STOPPED_EVENT
Jul 11 07:47:20.608390 containerd[1563]: time="2025-07-11T07:47:13.009282003Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160\" id:\"5185fe8574a208bdb9a97dfb22e35b24b0239350e2c734b914d8fb6c91a11678\" pid:6713 exited_at:{seconds:1752220033 nanos:8042285}"
Jul 11 07:47:20.608390 containerd[1563]: time="2025-07-11T07:47:13.426397950Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"5a05e3a479c82d687cc7e538d3abeb4d0d94030504c0c5e5fe66094b616871cc\" pid:6758 exited_at:{seconds:1752220033 nanos:425473547}"
Jul 11 07:47:20.608390 containerd[1563]: time="2025-07-11T07:47:14.889249218Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"b255ffba462af993048cbd3f72c9bdec5850848f85334717c1488586033ba2bf\" pid:6784 exited_at:{seconds:1752220034 nanos:888495566}"
Jul 11 07:47:20.640520 containerd[1563]: time="2025-07-11T07:47:20.639134514Z" level=info msg="CreateContainer within sandbox \"79c49b98ecf271a131bb6bf12e4c7a943626dffb67cb25bbedca7b6b5a740ca5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:4,}"
Jul 11 07:47:20.641695 containerd[1563]: time="2025-07-11T07:47:20.640942613Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"673a8634eb8c6ebd9990ac9c7c9dd872be174776674d0af8e781fb06f39a60f0\" pid:6738 exited_at:{seconds:1752220040 nanos:626855632}"
Jul 11 07:47:21.902859 sshd[6696]: Accepted publickey for core from 172.24.4.1 port 50266 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY
Jul 11 07:47:21.909679 sshd-session[6696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:47:21.949422 systemd-logind[1532]: New session 16 of user core.
Jul 11 07:47:21.967347 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 11 07:47:23.178076 kubelet[2804]: I0711 07:47:23.177553 2804 scope.go:117] "RemoveContainer" containerID="64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2"
Jul 11 07:47:23.185041 containerd[1563]: time="2025-07-11T07:47:23.184869866Z" level=info msg="CreateContainer within sandbox \"a4c473d87f8442f19bdae94bef811290169de76c1ad853d95101f86534125f68\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:4,}"
Jul 11 07:47:25.788831 kubelet[2804]: E0711 07:47:25.788733 2804 controller.go:195] "Failed to update lease" err="etcdserver: request timed out"
Jul 11 07:47:26.419233 containerd[1563]: time="2025-07-11T07:47:26.419147437Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"ecf881c21f5c5f635fef34985ff854b5e158b6b4a2734e3c043a78a45cfdb77a\" pid:6820 exit_status:1 exited_at:{seconds:1752220046 nanos:418262760}"
Jul 11 07:47:26.518233 containerd[1563]: time="2025-07-11T07:47:26.517894740Z" level=warning msg="container event discarded" container=1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160 type=CONTAINER_CREATED_EVENT
Jul 11 07:47:26.696617 containerd[1563]: time="2025-07-11T07:47:26.683770086Z" level=warning msg="container event discarded" container=1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160 type=CONTAINER_STARTED_EVENT
Jul 11 07:47:28.546248 containerd[1563]: time="2025-07-11T07:47:28.546076493Z" level=warning msg="container event discarded" container=d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101 type=CONTAINER_CREATED_EVENT
Jul 11 07:47:28.546248 containerd[1563]: time="2025-07-11T07:47:28.546240473Z" level=warning msg="container event discarded" container=d58d534fe3cca8d3a574a09cbece4c90c1b420dc439f347458b34bacbfa57101 type=CONTAINER_STARTED_EVENT
Jul 11 07:47:28.715911 containerd[1563]: time="2025-07-11T07:47:28.715681993Z" level=warning msg="container event discarded" container=daa90dbcd291ae83a95004f5d737a8692466331904effcc61fea44e632e6ff4f type=CONTAINER_CREATED_EVENT
Jul 11 07:47:28.982239 containerd[1563]: time="2025-07-11T07:47:28.981948713Z" level=warning msg="container event discarded" container=daa90dbcd291ae83a95004f5d737a8692466331904effcc61fea44e632e6ff4f type=CONTAINER_STARTED_EVENT
Jul 11 07:47:29.006656 containerd[1563]: time="2025-07-11T07:47:29.006480412Z" level=warning msg="container event discarded" container=606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2 type=CONTAINER_CREATED_EVENT
Jul 11 07:47:29.006656 containerd[1563]: time="2025-07-11T07:47:29.006624023Z" level=warning msg="container event discarded" container=606501d27095ed97f31bd00f956bc314e302ac7dbf008d380d302b0e88c2edf2 type=CONTAINER_STARTED_EVENT
Jul 11 07:47:29.524999 kubelet[2804]: E0711 07:47:29.524802 2804 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-4392-0-0-n-cdb6f4f5a9.novalocal\": the object has been modified; please apply your changes to the latest version and try again"
Jul 11 07:47:31.762669 containerd[1563]: time="2025-07-11T07:47:31.762311771Z" level=info msg="Container 3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24: CDI devices from CRI Config.CDIDevices: []"
Jul 11 07:47:32.170695 containerd[1563]: time="2025-07-11T07:47:32.170494005Z" level=info msg="Container 046551ca6d380500fe02aa0a1ee4c78b6dc51bd4783efd4c0c3e3297a61b1e3d: CDI devices from CRI Config.CDIDevices: []"
Jul 11 07:47:32.368021 containerd[1563]: time="2025-07-11T07:47:32.367194460Z" level=info msg="CreateContainer within sandbox \"79c49b98ecf271a131bb6bf12e4c7a943626dffb67cb25bbedca7b6b5a740ca5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:4,} returns container id \"3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24\""
Jul 11 07:47:32.374179 containerd[1563]: time="2025-07-11T07:47:32.374039835Z" level=info msg="StartContainer for \"3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24\""
Jul 11 07:47:32.382361 containerd[1563]: time="2025-07-11T07:47:32.382262506Z" level=info msg="connecting to shim 3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24" address="unix:///run/containerd/s/ae96405862b11ec30ecf5d51a3df2c8e24b4f00f4b2ee133e08083c92e7d68c0" protocol=ttrpc version=3
Jul 11 07:47:32.460211 systemd[1]: Started cri-containerd-3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24.scope - libcontainer container 3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24.
Jul 11 07:47:36.644132 sshd[6794]: Connection closed by 172.24.4.1 port 50266
Jul 11 07:47:36.646934 sshd-session[6696]: pam_unix(sshd:session): session closed for user core
Jul 11 07:47:36.670945 systemd[1]: Started sshd@14-172.24.4.223:22-172.24.4.1:58896.service - OpenSSH per-connection server daemon (172.24.4.1:58896).
Jul 11 07:47:36.674367 systemd[1]: sshd@13-172.24.4.223:22-172.24.4.1:50266.service: Deactivated successfully.
Jul 11 07:47:36.705827 systemd[1]: session-16.scope: Deactivated successfully.
Jul 11 07:47:36.709911 systemd-logind[1532]: Session 16 logged out. Waiting for processes to exit.
Jul 11 07:47:36.718165 systemd-logind[1532]: Removed session 16.
Jul 11 07:47:37.676643 containerd[1563]: time="2025-07-11T07:47:37.676346911Z" level=warning msg="container event discarded" container=90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f type=CONTAINER_CREATED_EVENT
Jul 11 07:47:37.676643 containerd[1563]: time="2025-07-11T07:47:37.676578848Z" level=warning msg="container event discarded" container=90ce43a390fbf47a5ad8c29dc550703d6efa5db8bb02a882959e2619a3e3108f type=CONTAINER_STARTED_EVENT
Jul 11 07:47:39.162561 containerd[1563]: time="2025-07-11T07:47:39.162231082Z" level=info msg="StartContainer for \"3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24\" returns successfully"
Jul 11 07:47:39.165132 kubelet[2804]: E0711 07:47:39.163262 2804 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.012s"
Jul 11 07:47:39.402875 containerd[1563]: time="2025-07-11T07:47:39.402811800Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160\" id:\"c9972b420bc29880d444f44d5f7f2bb089517544bcbe9de3f907766bc3ef262e\" pid:6896 exited_at:{seconds:1752220059 nanos:402211990}"
Jul 11 07:47:39.655503 containerd[1563]: time="2025-07-11T07:47:39.655346872Z" level=warning msg="container event discarded" container=5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc type=CONTAINER_CREATED_EVENT
Jul 11 07:47:39.655503 containerd[1563]: time="2025-07-11T07:47:39.655489400Z" level=warning msg="container event discarded" container=5f3915d5e5c080ad4afe0380a0ade6cfb4ef7b55aa0616ee1c502d7609b213cc type=CONTAINER_STARTED_EVENT
Jul 11 07:47:40.624121 containerd[1563]: time="2025-07-11T07:47:40.623961787Z" level=info msg="CreateContainer within sandbox \"a4c473d87f8442f19bdae94bef811290169de76c1ad853d95101f86534125f68\" for &ContainerMetadata{Name:kube-scheduler,Attempt:4,} returns container id \"046551ca6d380500fe02aa0a1ee4c78b6dc51bd4783efd4c0c3e3297a61b1e3d\""
Jul 11 07:47:40.627017 containerd[1563]: time="2025-07-11T07:47:40.626694227Z" level=info msg="StartContainer for \"046551ca6d380500fe02aa0a1ee4c78b6dc51bd4783efd4c0c3e3297a61b1e3d\""
Jul 11 07:47:40.633735 containerd[1563]: time="2025-07-11T07:47:40.633550831Z" level=info msg="connecting to shim 046551ca6d380500fe02aa0a1ee4c78b6dc51bd4783efd4c0c3e3297a61b1e3d" address="unix:///run/containerd/s/7fe16dd4310d91485b4c30a99a68643dca480a6dc08544d1724182e5168bb324" protocol=ttrpc version=3
Jul 11 07:47:40.682482 systemd[1]: Started cri-containerd-046551ca6d380500fe02aa0a1ee4c78b6dc51bd4783efd4c0c3e3297a61b1e3d.scope - libcontainer container 046551ca6d380500fe02aa0a1ee4c78b6dc51bd4783efd4c0c3e3297a61b1e3d.
Jul 11 07:47:40.799711 containerd[1563]: time="2025-07-11T07:47:40.799618122Z" level=warning msg="container event discarded" container=0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058 type=CONTAINER_CREATED_EVENT
Jul 11 07:47:40.799999 containerd[1563]: time="2025-07-11T07:47:40.799941071Z" level=warning msg="container event discarded" container=0668e2beec6a328ca7fbe8d1f88e6a39800d031874a18832293f9fdbe8519058 type=CONTAINER_STARTED_EVENT
Jul 11 07:47:40.866842 sshd[6875]: Accepted publickey for core from 172.24.4.1 port 58896 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY
Jul 11 07:47:40.872819 sshd-session[6875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:47:40.875556 containerd[1563]: time="2025-07-11T07:47:40.875430872Z" level=info msg="StartContainer for \"046551ca6d380500fe02aa0a1ee4c78b6dc51bd4783efd4c0c3e3297a61b1e3d\" returns successfully"
Jul 11 07:47:40.889900 systemd-logind[1532]: New session 17 of user core.
Jul 11 07:47:40.894150 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 11 07:47:40.895689 containerd[1563]: time="2025-07-11T07:47:40.895556617Z" level=warning msg="container event discarded" container=11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09 type=CONTAINER_CREATED_EVENT
Jul 11 07:47:40.895773 containerd[1563]: time="2025-07-11T07:47:40.895671764Z" level=warning msg="container event discarded" container=11b446c0a2df364c5f01f7ccb9277f87624edeb4e40ccd8c2fcca54f11ae9e09 type=CONTAINER_STARTED_EVENT
Jul 11 07:47:41.636021 containerd[1563]: time="2025-07-11T07:47:41.634626716Z" level=warning msg="container event discarded" container=150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5 type=CONTAINER_CREATED_EVENT
Jul 11 07:47:41.636021 containerd[1563]: time="2025-07-11T07:47:41.634712628Z" level=warning msg="container event discarded" container=150d5beaa380ee1634d392df2496be3ba6a563275a3b8ce1010e76cbde8087c5 type=CONTAINER_STARTED_EVENT
Jul 11 07:47:41.705127 containerd[1563]: time="2025-07-11T07:47:41.704881457Z" level=warning msg="container event discarded" container=214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e type=CONTAINER_CREATED_EVENT
Jul 11 07:47:41.705127 containerd[1563]: time="2025-07-11T07:47:41.705097014Z" level=warning msg="container event discarded" container=214df3c9de1f7b579565281ae6408981e0922bbe3e8a8f61365da2599d33382e type=CONTAINER_STARTED_EVENT
Jul 11 07:47:41.747437 containerd[1563]: time="2025-07-11T07:47:41.747355425Z" level=warning msg="container event discarded" container=b612a2acaa3fc3a8b6c4cdf9d43e85b54a94b2c68d15813aabc961c1789f52fa type=CONTAINER_CREATED_EVENT
Jul 11 07:47:41.817067 containerd[1563]: time="2025-07-11T07:47:41.816950163Z" level=warning msg="container event discarded" container=b612a2acaa3fc3a8b6c4cdf9d43e85b54a94b2c68d15813aabc961c1789f52fa type=CONTAINER_STARTED_EVENT
Jul 11 07:47:41.832949 sshd[6935]: Connection closed by 172.24.4.1 port 58896
Jul 11 07:47:41.833581 sshd-session[6875]: pam_unix(sshd:session): session closed for user core
Jul 11 07:47:41.841451 systemd[1]: sshd@14-172.24.4.223:22-172.24.4.1:58896.service: Deactivated successfully.
Jul 11 07:47:41.850827 systemd[1]: session-17.scope: Deactivated successfully.
Jul 11 07:47:41.855161 systemd-logind[1532]: Session 17 logged out. Waiting for processes to exit.
Jul 11 07:47:41.859534 systemd-logind[1532]: Removed session 17.
Jul 11 07:47:43.476368 containerd[1563]: time="2025-07-11T07:47:43.475618398Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"1bc6b2863e592e27782d2517bba5dca50d02613715cecf380b39ff236431f397\" pid:6974 exited_at:{seconds:1752220063 nanos:473065566}"
Jul 11 07:47:43.481125 containerd[1563]: time="2025-07-11T07:47:43.479739105Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"25531bb7a2fe3caa2a03f6f92f5a91dc0ffc2849ab1421006c491174ffbeb209\" pid:6972 exited_at:{seconds:1752220063 nanos:479200730}"
Jul 11 07:47:46.813899 systemd[1]: Started sshd@15-172.24.4.223:22-172.24.4.1:47170.service - OpenSSH per-connection server daemon (172.24.4.1:47170).
Jul 11 07:47:48.228942 sshd[6998]: Accepted publickey for core from 172.24.4.1 port 47170 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY
Jul 11 07:47:48.233196 sshd-session[6998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:47:48.250049 systemd-logind[1532]: New session 18 of user core.
Jul 11 07:47:48.263236 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 11 07:47:49.155326 sshd[7001]: Connection closed by 172.24.4.1 port 47170
Jul 11 07:47:49.153742 sshd-session[6998]: pam_unix(sshd:session): session closed for user core
Jul 11 07:47:49.164605 systemd[1]: sshd@15-172.24.4.223:22-172.24.4.1:47170.service: Deactivated successfully.
Jul 11 07:47:49.164640 systemd-logind[1532]: Session 18 logged out. Waiting for processes to exit.
Jul 11 07:47:49.174366 systemd[1]: session-18.scope: Deactivated successfully.
Jul 11 07:47:49.180069 systemd-logind[1532]: Removed session 18.
Jul 11 07:47:54.195785 systemd[1]: Started sshd@16-172.24.4.223:22-172.24.4.1:35434.service - OpenSSH per-connection server daemon (172.24.4.1:35434).
Jul 11 07:47:55.427474 sshd[7013]: Accepted publickey for core from 172.24.4.1 port 35434 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY
Jul 11 07:47:55.432382 sshd-session[7013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:47:55.446528 systemd-logind[1532]: New session 19 of user core.
Jul 11 07:47:55.455321 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 11 07:47:56.217212 sshd[7016]: Connection closed by 172.24.4.1 port 35434
Jul 11 07:47:56.219007 sshd-session[7013]: pam_unix(sshd:session): session closed for user core
Jul 11 07:47:56.230467 systemd-logind[1532]: Session 19 logged out. Waiting for processes to exit.
Jul 11 07:47:56.231754 systemd[1]: sshd@16-172.24.4.223:22-172.24.4.1:35434.service: Deactivated successfully.
Jul 11 07:47:56.241931 systemd[1]: session-19.scope: Deactivated successfully.
Jul 11 07:47:56.246436 systemd-logind[1532]: Removed session 19.
Jul 11 07:48:09.391725 containerd[1563]: time="2025-07-11T07:48:09.391499278Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160\" id:\"27d026d4938705bbfe0d0ba0d3888baedb0ab2295c39e286489813d3d0b91a9f\" pid:7054 exited_at:{seconds:1752220089 nanos:390345784}"
Jul 11 07:48:10.043757 kubelet[2804]: E0711 07:48:10.043523 2804 controller.go:195] "Failed to update lease" err="etcdserver: request timed out"
Jul 11 07:48:10.628186 containerd[1563]: time="2025-07-11T07:48:10.627938830Z" level=warning msg="container event discarded" container=2f402fa2d9a8931e1bc9923a84e95e8a47a2d94e916768bcfe1bdbb0e9f89584 type=CONTAINER_STOPPED_EVENT
Jul 11 07:48:10.629015 containerd[1563]: time="2025-07-11T07:48:10.628850067Z" level=warning msg="container event discarded" container=60b6e9b17893f06d8c86eaee7503f9c1409178cffa66c32798cefca75a219d61 type=CONTAINER_STOPPED_EVENT
Jul 11 07:48:10.629015 containerd[1563]: time="2025-07-11T07:48:10.628896865Z" level=warning msg="container event discarded" container=37d035f14b17150c4fe4356e65de8b7d83bea8dda00e3df8e3b4b6fe4d0d7420 type=CONTAINER_STOPPED_EVENT
Jul 11 07:48:10.991778 containerd[1563]: time="2025-07-11T07:48:10.991445713Z" level=warning msg="container event discarded" container=f8edca220fbd312551d45f1fdfd82a4cb810b1e84c0ed32b920d0392c0035b5f type=CONTAINER_CREATED_EVENT
Jul 11 07:48:11.031991 containerd[1563]: time="2025-07-11T07:48:11.031876465Z" level=warning msg="container event discarded" container=e61666cacb493492b7066cdf963c503e4ed9bf26262b70b4c151bd0700d6502c type=CONTAINER_CREATED_EVENT
Jul 11 07:48:11.060382 containerd[1563]: time="2025-07-11T07:48:11.060250539Z" level=warning msg="container event discarded" container=5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05 type=CONTAINER_CREATED_EVENT
Jul 11 07:48:11.101666 systemd[1]: cri-containerd-3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24.scope: Deactivated successfully.
Jul 11 07:48:11.103056 systemd[1]: cri-containerd-3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24.scope: Consumed 2.507s CPU time, 50.7M memory peak, 396K read from disk.
Jul 11 07:48:11.112802 containerd[1563]: time="2025-07-11T07:48:11.112723761Z" level=info msg="received exit event container_id:\"3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24\" id:\"3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24\" pid:6856 exit_status:1 exited_at:{seconds:1752220091 nanos:112229921}"
Jul 11 07:48:11.113327 containerd[1563]: time="2025-07-11T07:48:11.113249853Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24\" id:\"3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24\" pid:6856 exit_status:1 exited_at:{seconds:1752220091 nanos:112229921}"
Jul 11 07:48:11.120411 containerd[1563]: time="2025-07-11T07:48:11.120343148Z" level=warning msg="container event discarded" container=8e5e03f88c91ab39a9c8de372d6c2a154b1808764be77b0a3ca5992497f30cc2 type=CONTAINER_CREATED_EVENT
Jul 11 07:48:11.168448 systemd[1]: cri-containerd-042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec.scope: Deactivated successfully.
Jul 11 07:48:11.169598 systemd[1]: cri-containerd-042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec.scope: Consumed 3.635s CPU time, 76.1M memory peak, 476K read from disk.
Jul 11 07:48:11.173887 containerd[1563]: time="2025-07-11T07:48:11.173820353Z" level=info msg="received exit event container_id:\"042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec\" id:\"042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec\" pid:6432 exit_status:1 exited_at:{seconds:1752220091 nanos:173250850}"
Jul 11 07:48:11.174505 containerd[1563]: time="2025-07-11T07:48:11.174436033Z" level=info msg="TaskExit event in podsandbox handler container_id:\"042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec\" id:\"042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec\" pid:6432 exit_status:1 exited_at:{seconds:1752220091 nanos:173250850}"
Jul 11 07:48:11.390506 containerd[1563]: time="2025-07-11T07:48:11.390365748Z" level=warning msg="container event discarded" container=5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05 type=CONTAINER_STARTED_EVENT
Jul 11 07:48:11.390506 containerd[1563]: time="2025-07-11T07:48:11.390439628Z" level=warning msg="container event discarded" container=e61666cacb493492b7066cdf963c503e4ed9bf26262b70b4c151bd0700d6502c type=CONTAINER_STARTED_EVENT
Jul 11 07:48:11.491953 containerd[1563]: time="2025-07-11T07:48:11.491804424Z" level=warning msg="container event discarded" container=8e5e03f88c91ab39a9c8de372d6c2a154b1808764be77b0a3ca5992497f30cc2 type=CONTAINER_STARTED_EVENT
Jul 11 07:48:11.491953 containerd[1563]: time="2025-07-11T07:48:11.491900375Z" level=warning msg="container event discarded" container=f8edca220fbd312551d45f1fdfd82a4cb810b1e84c0ed32b920d0392c0035b5f type=CONTAINER_STARTED_EVENT
Jul 11 07:48:13.235632 containerd[1563]: time="2025-07-11T07:48:13.235474211Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"50ee96e478d0cce92bce88ea064872ab50037430c78b121eed3150bf77080ea4\" pid:7098 exit_status:1 exited_at:{seconds:1752220093 nanos:231163419}"
Jul 11 07:48:13.317156 containerd[1563]: time="2025-07-11T07:48:13.317047232Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"7b7282e0396c3729916f0f5db231b06a9c47baa948cfffc0c198299343cbf27f\" pid:7111 exited_at:{seconds:1752220093 nanos:315703560}"
Jul 11 07:48:13.636166 systemd[1]: Started sshd@17-172.24.4.223:22-172.24.4.1:53106.service - OpenSSH per-connection server daemon (172.24.4.1:53106).
Jul 11 07:48:14.855291 containerd[1563]: time="2025-07-11T07:48:14.855106208Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"d50cc1acb62aba8554e5641e1d54f92b4df87ee8784fd26aeecb472b8cf217b7\" pid:7141 exited_at:{seconds:1752220094 nanos:854486631}"
Jul 11 07:48:15.259213 kubelet[2804]: E0711 07:48:15.258676 2804 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal.18512298bf05de37 kube-system 1374 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal,UID:94dd2fdae141e91cb071209277979747,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4392-0-0-n-cdb6f4f5a9.novalocal,},FirstTimestamp:2025-07-11 07:42:49 +0000 UTC,LastTimestamp:2025-07-11 07:48:05.514897913 +0000 UTC m=+396.747827069,Count:44,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4392-0-0-n-cdb6f4f5a9.novalocal,}"
Jul 11 07:48:17.052689 kubelet[2804]: E0711 07:48:17.052514 2804 controller.go:195] "Failed to update lease" err="etcdserver: request timed out"
Jul 11 07:48:22.262376 kubelet[2804]: I0711 07:48:22.261739 2804 status_manager.go:851] "Failed to get status for pod" podUID="94dd2fdae141e91cb071209277979747" pod="kube-system/kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal" err="etcdserver: request timed out"
Jul 11 07:48:22.335849 containerd[1563]: time="2025-07-11T07:48:22.335716887Z" level=error msg="failed to handle container TaskExit event container_id:\"3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24\" id:\"3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24\" pid:6856 exit_status:1 exited_at:{seconds:1752220091 nanos:112229921}" error="failed to stop container: failed to delete task: context deadline exceeded"
Jul 11 07:48:22.344298 containerd[1563]: time="2025-07-11T07:48:22.344168661Z" level=error msg="failed to handle container TaskExit event container_id:\"042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec\" id:\"042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec\" pid:6432 exit_status:1 exited_at:{seconds:1752220091 nanos:173250850}" error="failed to stop container: failed to delete task: context deadline exceeded"
Jul 11 07:48:22.357415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec-rootfs.mount: Deactivated successfully.
Jul 11 07:48:22.366985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24-rootfs.mount: Deactivated successfully.
Jul 11 07:48:22.627741 kubelet[2804]: E0711 07:48:22.627572 2804 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-4392-0-0-n-cdb6f4f5a9.novalocal\": the object has been modified; please apply your changes to the latest version and try again"
Jul 11 07:48:24.059216 containerd[1563]: time="2025-07-11T07:48:23.490459755Z" level=info msg="TaskExit event container_id:\"3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24\" id:\"3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24\" pid:6856 exit_status:1 exited_at:{seconds:1752220091 nanos:112229921}"
Jul 11 07:48:24.255091 containerd[1563]: time="2025-07-11T07:48:24.254835209Z" level=error msg="ttrpc: received message on inactive stream" stream=57
Jul 11 07:48:24.255091 containerd[1563]: time="2025-07-11T07:48:24.254873090Z" level=error msg="ttrpc: received message on inactive stream" stream=41
Jul 11 07:48:24.402426 containerd[1563]: time="2025-07-11T07:48:24.402112288Z" level=info msg="TaskExit event container_id:\"042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec\" id:\"042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec\" pid:6432 exit_status:1 exited_at:{seconds:1752220091 nanos:173250850}"
Jul 11 07:48:24.547277 kubelet[2804]: I0711 07:48:24.547165 2804 scope.go:117] "RemoveContainer" containerID="8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073"
Jul 11 07:48:24.548713 kubelet[2804]: I0711 07:48:24.548534 2804 scope.go:117] "RemoveContainer" containerID="3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24"
Jul 11 07:48:24.549081 kubelet[2804]: E0711 07:48:24.549035 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(9d801a80cb49e408d2efc270d30c5fd8)\"" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="9d801a80cb49e408d2efc270d30c5fd8"
Jul 11 07:48:24.554824 containerd[1563]: time="2025-07-11T07:48:24.554775074Z" level=info msg="RemoveContainer for \"8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073\""
Jul 11 07:48:24.844644 containerd[1563]: time="2025-07-11T07:48:24.844215186Z" level=info msg="RemoveContainer for \"8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073\" returns successfully"
Jul 11 07:48:24.889515 sshd[7127]: Accepted publickey for core from 172.24.4.1 port 53106 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY
Jul 11 07:48:24.896059 sshd-session[7127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 07:48:24.920139 systemd-logind[1532]: New session 20 of user core.
Jul 11 07:48:24.935530 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 11 07:48:25.591560 kubelet[2804]: I0711 07:48:25.590681 2804 scope.go:117] "RemoveContainer" containerID="50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6"
Jul 11 07:48:25.595380 kubelet[2804]: I0711 07:48:25.595314 2804 scope.go:117] "RemoveContainer" containerID="042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec"
Jul 11 07:48:25.598855 kubelet[2804]: E0711 07:48:25.598743 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=tigera-operator pod=tigera-operator-5bf8dfcb4-mplsp_tigera-operator(b888df97-3c70-41ba-a3f5-7ac75508eb3b)\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-mplsp" podUID="b888df97-3c70-41ba-a3f5-7ac75508eb3b"
Jul 11 07:48:25.604892 containerd[1563]: time="2025-07-11T07:48:25.604220502Z" level=info msg="RemoveContainer for \"50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6\""
Jul 11 07:48:26.316523 containerd[1563]: time="2025-07-11T07:48:26.316439110Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"25527d05c97a4d22b25a23dabce697e3656f3c77b9ffb891ea7869841f45e7e4\" pid:7197 exit_status:1 exited_at:{seconds:1752220106 nanos:314548197}"
Jul 11 07:48:27.021353 containerd[1563]: time="2025-07-11T07:48:27.021011827Z" level=info msg="RemoveContainer for \"50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6\" returns successfully"
Jul 11 07:48:31.624011 sshd[7177]: Connection closed by 172.24.4.1 port 53106
Jul 11 07:48:31.624763 sshd-session[7127]: pam_unix(sshd:session): session closed for user core
Jul 11 07:48:31.641798 systemd[1]: sshd@17-172.24.4.223:22-172.24.4.1:53106.service: Deactivated successfully.
Jul 11 07:48:31.652258 systemd[1]: session-20.scope: Deactivated successfully.
Jul 11 07:48:31.655290 systemd-logind[1532]: Session 20 logged out. Waiting for processes to exit.
Jul 11 07:48:31.659791 systemd-logind[1532]: Removed session 20.
Jul 11 07:48:32.188039 kubelet[2804]: I0711 07:48:32.187328 2804 scope.go:117] "RemoveContainer" containerID="3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24" Jul 11 07:48:32.188039 kubelet[2804]: E0711 07:48:32.187783 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(9d801a80cb49e408d2efc270d30c5fd8)\"" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="9d801a80cb49e408d2efc270d30c5fd8" Jul 11 07:48:33.114028 kubelet[2804]: I0711 07:48:33.112590 2804 scope.go:117] "RemoveContainer" containerID="3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24" Jul 11 07:48:33.114028 kubelet[2804]: E0711 07:48:33.113438 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(9d801a80cb49e408d2efc270d30c5fd8)\"" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="9d801a80cb49e408d2efc270d30c5fd8" Jul 11 07:48:35.996845 systemd[1]: Started sshd@18-172.24.4.223:22-172.24.4.1:34814.service - OpenSSH per-connection server daemon (172.24.4.1:34814). Jul 11 07:48:37.278482 sshd[7215]: Accepted publickey for core from 172.24.4.1 port 34814 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY Jul 11 07:48:37.286118 sshd-session[7215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:48:37.307703 systemd-logind[1532]: New session 21 of user core. Jul 11 07:48:37.322329 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 11 07:48:38.124821 sshd[7218]: Connection closed by 172.24.4.1 port 34814 Jul 11 07:48:38.125481 sshd-session[7215]: pam_unix(sshd:session): session closed for user core Jul 11 07:48:38.140178 systemd-logind[1532]: Session 21 logged out. Waiting for processes to exit. Jul 11 07:48:38.141861 systemd[1]: sshd@18-172.24.4.223:22-172.24.4.1:34814.service: Deactivated successfully. Jul 11 07:48:38.151457 systemd[1]: session-21.scope: Deactivated successfully. Jul 11 07:48:38.159115 systemd-logind[1532]: Removed session 21. 
Jul 11 07:48:38.163320 kubelet[2804]: I0711 07:48:38.163255 2804 scope.go:117] "RemoveContainer" containerID="042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec" Jul 11 07:48:38.164892 kubelet[2804]: E0711 07:48:38.164564 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=tigera-operator pod=tigera-operator-5bf8dfcb4-mplsp_tigera-operator(b888df97-3c70-41ba-a3f5-7ac75508eb3b)\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-mplsp" podUID="b888df97-3c70-41ba-a3f5-7ac75508eb3b" Jul 11 07:48:39.203681 containerd[1563]: time="2025-07-11T07:48:39.203461101Z" level=warning msg="container event discarded" container=5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05 type=CONTAINER_STOPPED_EVENT Jul 11 07:48:39.327905 containerd[1563]: time="2025-07-11T07:48:39.327822793Z" level=warning msg="container event discarded" container=f8edca220fbd312551d45f1fdfd82a4cb810b1e84c0ed32b920d0392c0035b5f type=CONTAINER_STOPPED_EVENT Jul 11 07:48:39.352846 containerd[1563]: time="2025-07-11T07:48:39.352375267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160\" id:\"ec1cd0e52587273dfa4114d784bfe3858b4df6a44b87a9690b8b1072214011f6\" pid:7243 exited_at:{seconds:1752220119 nanos:351803519}" Jul 11 07:48:43.156157 systemd[1]: Started sshd@19-172.24.4.223:22-172.24.4.1:34818.service - OpenSSH per-connection server daemon (172.24.4.1:34818). Jul 11 07:48:43.215580 containerd[1563]: time="2025-07-11T07:48:43.215512644Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"868c87216032d2e9cb405184d4a027557bfc06978c3633a8aac323f39b3f5fea\" pid:7279 exited_at:{seconds:1752220123 nanos:213886971}" Jul 11 07:48:43.314375 containerd[1563]: time="2025-07-11T07:48:43.313854685Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"7373df3091b72b81098a59ac0cdcb214e1ef85df2ff17d704b872918d284f03a\" pid:7289 exited_at:{seconds:1752220123 nanos:313047725}" Jul 11 07:48:43.403097 update_engine[1535]: I20250711 07:48:43.402318 1535 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 11 07:48:43.405625 update_engine[1535]: I20250711 07:48:43.403190 1535 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 11 07:48:43.405625 update_engine[1535]: I20250711 07:48:43.405267 1535 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 11 07:48:43.410545 update_engine[1535]: I20250711 07:48:43.408728 1535 omaha_request_params.cc:62] Current group set to developer Jul 11 07:48:43.416369 update_engine[1535]: I20250711 07:48:43.416098 1535 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 11 07:48:43.416608 update_engine[1535]: I20250711 07:48:43.416560 1535 update_attempter.cc:643] Scheduling an action processor start. 
Jul 11 07:48:43.417028 update_engine[1535]: I20250711 07:48:43.416921 1535 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 11 07:48:43.448026 update_engine[1535]: I20250711 07:48:43.446547 1535 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 11 07:48:43.448026 update_engine[1535]: I20250711 07:48:43.446892 1535 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 11 07:48:43.448026 update_engine[1535]: I20250711 07:48:43.446921 1535 omaha_request_action.cc:272] Request: Jul 11 07:48:43.448026 update_engine[1535]: I20250711 07:48:43.446964 1535 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 11 07:48:43.456932 locksmithd[1568]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 11 07:48:43.463157 update_engine[1535]: I20250711 07:48:43.462219 1535 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 11 07:48:43.469408 update_engine[1535]: I20250711 07:48:43.468959 1535 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 11 07:48:43.503271 update_engine[1535]: E20250711 07:48:43.503147 1535 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 11 07:48:43.503415 update_engine[1535]: I20250711 07:48:43.503391 1535 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 11 07:48:44.139266 containerd[1563]: time="2025-07-11T07:48:44.138820497Z" level=warning msg="container event discarded" container=2f402fa2d9a8931e1bc9923a84e95e8a47a2d94e916768bcfe1bdbb0e9f89584 type=CONTAINER_DELETED_EVENT Jul 11 07:48:44.424444 containerd[1563]: time="2025-07-11T07:48:44.423447937Z" level=warning msg="container event discarded" container=13b504d26b6f0270296a9f2fcabd207ed619038c7625423a1f210459d1470730 type=CONTAINER_CREATED_EVENT Jul 11 07:48:44.440100 containerd[1563]: time="2025-07-11T07:48:44.439924576Z" level=warning msg="container event discarded" container=1e9ebb770f62c4441198b6414b397c7b5dc58627845a2a1097b219068f377b0e type=CONTAINER_CREATED_EVENT Jul 11 07:48:44.466471 sshd[7276]: Accepted publickey for core from 172.24.4.1 port 34818 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY Jul 11 07:48:44.475805 sshd-session[7276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:48:44.509058 systemd-logind[1532]: New session 22 of user core. Jul 11 07:48:44.524764 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 11 07:48:44.673677 containerd[1563]: time="2025-07-11T07:48:44.673470627Z" level=warning msg="container event discarded" container=b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba type=CONTAINER_CREATED_EVENT Jul 11 07:48:44.832814 containerd[1563]: time="2025-07-11T07:48:44.832592235Z" level=warning msg="container event discarded" container=13b504d26b6f0270296a9f2fcabd207ed619038c7625423a1f210459d1470730 type=CONTAINER_STARTED_EVENT Jul 11 07:48:44.878251 containerd[1563]: time="2025-07-11T07:48:44.878063976Z" level=warning msg="container event discarded" container=1e9ebb770f62c4441198b6414b397c7b5dc58627845a2a1097b219068f377b0e type=CONTAINER_STARTED_EVENT Jul 11 07:48:45.050132 containerd[1563]: time="2025-07-11T07:48:45.049541931Z" level=warning msg="container event discarded" container=b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba type=CONTAINER_STARTED_EVENT Jul 11 07:48:45.272317 sshd[7306]: Connection closed by 172.24.4.1 port 34818 Jul 11 07:48:45.273715 sshd-session[7276]: pam_unix(sshd:session): session closed for user core Jul 11 07:48:45.289864 systemd[1]: sshd@19-172.24.4.223:22-172.24.4.1:34818.service: Deactivated successfully. Jul 11 07:48:45.299749 systemd[1]: session-22.scope: Deactivated successfully. Jul 11 07:48:45.306090 systemd-logind[1532]: Session 22 logged out. Waiting for processes to exit. Jul 11 07:48:45.309947 systemd-logind[1532]: Removed session 22. Jul 11 07:48:46.152832 kubelet[2804]: I0711 07:48:46.152501 2804 scope.go:117] "RemoveContainer" containerID="3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24" Jul 11 07:48:46.155389 kubelet[2804]: E0711 07:48:46.154332 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(9d801a80cb49e408d2efc270d30c5fd8)\"" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="9d801a80cb49e408d2efc270d30c5fd8" Jul 11 07:48:52.152069 kubelet[2804]: I0711 07:48:52.151818 2804 scope.go:117] "RemoveContainer" containerID="042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec" Jul 11 07:48:52.153969 kubelet[2804]: E0711 07:48:52.153791 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=tigera-operator pod=tigera-operator-5bf8dfcb4-mplsp_tigera-operator(b888df97-3c70-41ba-a3f5-7ac75508eb3b)\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-mplsp" podUID="b888df97-3c70-41ba-a3f5-7ac75508eb3b" Jul 11 07:48:53.407023 update_engine[1535]: I20250711 07:48:53.406628 1535 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 11 07:48:53.408396 update_engine[1535]: I20250711 07:48:53.408288 1535 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 11 07:48:53.409747 update_engine[1535]: I20250711 07:48:53.409663 1535 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 11 07:48:53.414857 update_engine[1535]: E20250711 07:48:53.414732 1535 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 11 07:48:53.415225 update_engine[1535]: I20250711 07:48:53.415130 1535 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 11 07:48:57.370401 containerd[1563]: time="2025-07-11T07:48:57.370119931Z" level=warning msg="container event discarded" container=7102e6a40318afca94def5b43390705d1bcf4296f790604bfce6348ffbf49714 type=CONTAINER_CREATED_EVENT Jul 11 07:49:03.980221 update_engine[1535]: I20250711 07:49:03.929901 1535 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 11 07:49:03.980221 update_engine[1535]: I20250711 07:49:03.930408 1535 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 11 07:49:03.980221 update_engine[1535]: I20250711 07:49:03.932366 1535 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 11 07:49:03.980221 update_engine[1535]: E20250711 07:49:03.945339 1535 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 11 07:49:03.980221 update_engine[1535]: I20250711 07:49:03.945501 1535 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 11 07:48:57.452729 systemd[1]: Started sshd@20-172.24.4.223:22-172.24.4.1:34692.service - OpenSSH per-connection server daemon (172.24.4.1:34692). Jul 11 07:49:03.981287 kubelet[2804]: I0711 07:48:59.155249 2804 scope.go:117] "RemoveContainer" containerID="3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24" Jul 11 07:49:03.981287 kubelet[2804]: E0711 07:48:59.155825 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(9d801a80cb49e408d2efc270d30c5fd8)\"" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="9d801a80cb49e408d2efc270d30c5fd8" Jul 11 07:49:03.982124 containerd[1563]: time="2025-07-11T07:48:57.928190547Z" level=warning msg="container event discarded" container=7102e6a40318afca94def5b43390705d1bcf4296f790604bfce6348ffbf49714 type=CONTAINER_STARTED_EVENT Jul 11 07:49:03.982124 containerd[1563]: time="2025-07-11T07:48:58.524257133Z" level=warning msg="container event discarded" container=13b504d26b6f0270296a9f2fcabd207ed619038c7625423a1f210459d1470730 type=CONTAINER_STOPPED_EVENT Jul 11 07:49:03.982124 containerd[1563]: time="2025-07-11T07:48:58.524471837Z" level=warning msg="container event discarded" container=8e5e03f88c91ab39a9c8de372d6c2a154b1808764be77b0a3ca5992497f30cc2 type=CONTAINER_STOPPED_EVENT Jul 11 07:49:03.982124 containerd[1563]: time="2025-07-11T07:48:58.524494129Z" level=warning msg="container event discarded" container=e61666cacb493492b7066cdf963c503e4ed9bf26262b70b4c151bd0700d6502c type=CONTAINER_STOPPED_EVENT Jul 11 07:49:03.982124 containerd[1563]: time="2025-07-11T07:48:59.436435096Z" level=warning msg="container event discarded" container=60b6e9b17893f06d8c86eaee7503f9c1409178cffa66c32798cefca75a219d61 type=CONTAINER_DELETED_EVENT Jul 11 07:49:03.982124 containerd[1563]: time="2025-07-11T07:48:59.474068596Z" level=warning msg="container event discarded" container=37d035f14b17150c4fe4356e65de8b7d83bea8dda00e3df8e3b4b6fe4d0d7420 type=CONTAINER_DELETED_EVENT Jul 11 07:49:03.982124 containerd[1563]: time="2025-07-11T07:48:59.523655452Z" level=warning msg="container 
event discarded" container=5191f57ee52c3a44220039b321b42bd99bc5e8c409b845fe3622807f245dab05 type=CONTAINER_DELETED_EVENT Jul 11 07:49:03.982124 containerd[1563]: time="2025-07-11T07:49:00.368449358Z" level=warning msg="container event discarded" container=8ed7bdaf9732e5b0111269c6537efa3f26a9d4809548db88156178aca64cbcb6 type=CONTAINER_CREATED_EVENT Jul 11 07:49:03.982124 containerd[1563]: time="2025-07-11T07:49:00.596163732Z" level=warning msg="container event discarded" container=8ed7bdaf9732e5b0111269c6537efa3f26a9d4809548db88156178aca64cbcb6 type=CONTAINER_STARTED_EVENT Jul 11 07:49:04.283574 systemd[1]: cri-containerd-046551ca6d380500fe02aa0a1ee4c78b6dc51bd4783efd4c0c3e3297a61b1e3d.scope: Deactivated successfully. Jul 11 07:49:04.284748 systemd[1]: cri-containerd-046551ca6d380500fe02aa0a1ee4c78b6dc51bd4783efd4c0c3e3297a61b1e3d.scope: Consumed 2.239s CPU time, 19M memory peak, 436K read from disk. Jul 11 07:49:04.313068 containerd[1563]: time="2025-07-11T07:49:04.312447704Z" level=info msg="TaskExit event in podsandbox handler container_id:\"046551ca6d380500fe02aa0a1ee4c78b6dc51bd4783efd4c0c3e3297a61b1e3d\" id:\"046551ca6d380500fe02aa0a1ee4c78b6dc51bd4783efd4c0c3e3297a61b1e3d\" pid:6920 exit_status:1 exited_at:{seconds:1752220144 nanos:305638277}" Jul 11 07:49:04.313068 containerd[1563]: time="2025-07-11T07:49:04.312861114Z" level=info msg="received exit event container_id:\"046551ca6d380500fe02aa0a1ee4c78b6dc51bd4783efd4c0c3e3297a61b1e3d\" id:\"046551ca6d380500fe02aa0a1ee4c78b6dc51bd4783efd4c0c3e3297a61b1e3d\" pid:6920 exit_status:1 exited_at:{seconds:1752220144 nanos:305638277}" Jul 11 07:49:04.360258 sshd[7319]: Accepted publickey for core from 172.24.4.1 port 34692 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY Jul 11 07:49:04.362158 sshd-session[7319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:49:04.370823 systemd-logind[1532]: New session 23 of user core. Jul 11 07:49:04.377230 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 11 07:49:04.408495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-046551ca6d380500fe02aa0a1ee4c78b6dc51bd4783efd4c0c3e3297a61b1e3d-rootfs.mount: Deactivated successfully. 
Jul 11 07:49:04.624240 kubelet[2804]: I0711 07:49:04.624146 2804 scope.go:117] "RemoveContainer" containerID="64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2" Jul 11 07:49:04.625845 kubelet[2804]: I0711 07:49:04.625794 2804 scope.go:117] "RemoveContainer" containerID="046551ca6d380500fe02aa0a1ee4c78b6dc51bd4783efd4c0c3e3297a61b1e3d" Jul 11 07:49:04.630337 kubelet[2804]: E0711 07:49:04.630239 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(1b42591ea292e73e5775e231f0503337)\"" pod="kube-system/kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="1b42591ea292e73e5775e231f0503337" Jul 11 07:49:04.637104 containerd[1563]: time="2025-07-11T07:49:04.636690889Z" level=info msg="RemoveContainer for \"64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2\"" Jul 11 07:49:04.697053 containerd[1563]: time="2025-07-11T07:49:04.696885276Z" level=info msg="RemoveContainer for \"64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2\" returns successfully" Jul 11 07:49:05.151893 kubelet[2804]: I0711 07:49:05.151793 2804 scope.go:117] "RemoveContainer" containerID="042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec" Jul 11 07:49:05.153325 kubelet[2804]: E0711 07:49:05.152271 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=tigera-operator pod=tigera-operator-5bf8dfcb4-mplsp_tigera-operator(b888df97-3c70-41ba-a3f5-7ac75508eb3b)\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-mplsp" podUID="b888df97-3c70-41ba-a3f5-7ac75508eb3b" Jul 11 07:49:05.428242 containerd[1563]: time="2025-07-11T07:49:05.427939904Z" level=warning msg="container event discarded" container=d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77 type=CONTAINER_CREATED_EVENT Jul 11 07:49:05.609926 containerd[1563]: time="2025-07-11T07:49:05.609800677Z" level=warning msg="container event discarded" container=d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77 type=CONTAINER_STARTED_EVENT Jul 11 07:49:07.897433 containerd[1563]: time="2025-07-11T07:49:07.897170975Z" level=warning msg="container event discarded" container=cf6d695b65097d6028a33d59fe6799cf034050152ecc1b86fa301eba5dab308f type=CONTAINER_CREATED_EVENT Jul 11 07:49:08.238382 containerd[1563]: time="2025-07-11T07:49:08.237880669Z" level=warning msg="container event discarded" container=cf6d695b65097d6028a33d59fe6799cf034050152ecc1b86fa301eba5dab308f type=CONTAINER_STARTED_EVENT Jul 11 07:49:09.566128 containerd[1563]: time="2025-07-11T07:49:09.565759517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160\" id:\"001c3fd1df87b452743e986162ac6c9ea69e8e72b9f7333cab006a11ac26ae74\" pid:7376 exited_at:{seconds:1752220149 nanos:558597105}" Jul 11 07:49:09.681866 kubelet[2804]: I0711 07:49:09.681769 2804 scope.go:117] "RemoveContainer" containerID="046551ca6d380500fe02aa0a1ee4c78b6dc51bd4783efd4c0c3e3297a61b1e3d" Jul 11 07:49:14.101693 kubelet[2804]: E0711 07:49:09.682098 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler 
pod=kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(1b42591ea292e73e5775e231f0503337)\"" pod="kube-system/kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="1b42591ea292e73e5775e231f0503337" Jul 11 07:49:14.101693 kubelet[2804]: I0711 07:49:11.228346 2804 scope.go:117] "RemoveContainer" containerID="046551ca6d380500fe02aa0a1ee4c78b6dc51bd4783efd4c0c3e3297a61b1e3d" Jul 11 07:49:14.101693 kubelet[2804]: E0711 07:49:11.228865 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(1b42591ea292e73e5775e231f0503337)\"" pod="kube-system/kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="1b42591ea292e73e5775e231f0503337" Jul 11 07:49:14.102425 containerd[1563]: time="2025-07-11T07:49:13.237309761Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"37c22ebaed9a2311a9e91cedc49db4ab26372e51cfc80feaac9cc8a62fab3dcc\" pid:7414 exit_status:1 exited_at:{seconds:1752220153 nanos:236402512}" Jul 11 07:49:14.102425 containerd[1563]: time="2025-07-11T07:49:13.299715458Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"a2d0e751e83242538483250b8396966b502d197aeede7bd92c27ba640b6d0e7d\" pid:7411 exited_at:{seconds:1752220153 nanos:298284012}" Jul 11 07:49:14.152546 kubelet[2804]: I0711 07:49:14.152436 2804 scope.go:117] "RemoveContainer" containerID="3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24" Jul 11 07:49:14.153153 kubelet[2804]: E0711 07:49:14.152687 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(9d801a80cb49e408d2efc270d30c5fd8)\"" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="9d801a80cb49e408d2efc270d30c5fd8" Jul 11 07:49:14.195755 sshd[7354]: Connection closed by 172.24.4.1 port 34692 Jul 11 07:49:14.197357 sshd-session[7319]: pam_unix(sshd:session): session closed for user core Jul 11 07:49:14.207387 systemd[1]: sshd@20-172.24.4.223:22-172.24.4.1:34692.service: Deactivated successfully. Jul 11 07:49:14.214637 systemd[1]: session-23.scope: Deactivated successfully. Jul 11 07:49:14.217679 systemd-logind[1532]: Session 23 logged out. Waiting for processes to exit. Jul 11 07:49:14.222289 systemd[1]: Started sshd@21-172.24.4.223:22-172.24.4.1:39038.service - OpenSSH per-connection server daemon (172.24.4.1:39038). Jul 11 07:49:14.227428 systemd-logind[1532]: Removed session 23. Jul 11 07:49:14.399687 update_engine[1535]: I20250711 07:49:14.399365 1535 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 11 07:49:14.400441 update_engine[1535]: I20250711 07:49:14.400264 1535 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 11 07:49:14.401513 update_engine[1535]: I20250711 07:49:14.400954 1535 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 11 07:49:14.408254 update_engine[1535]: E20250711 07:49:14.408195 1535 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 11 07:49:14.408387 update_engine[1535]: I20250711 07:49:14.408322 1535 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 11 07:49:14.408387 update_engine[1535]: I20250711 07:49:14.408353 1535 omaha_request_action.cc:617] Omaha request response: Jul 11 07:49:14.408630 update_engine[1535]: E20250711 07:49:14.408579 1535 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 11 07:49:14.408947 update_engine[1535]: I20250711 07:49:14.408893 1535 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 11 07:49:14.408947 update_engine[1535]: I20250711 07:49:14.408916 1535 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 11 07:49:14.408947 update_engine[1535]: I20250711 07:49:14.408931 1535 update_attempter.cc:306] Processing Done. Jul 11 07:49:14.409373 update_engine[1535]: E20250711 07:49:14.409018 1535 update_attempter.cc:619] Update failed. Jul 11 07:49:14.409373 update_engine[1535]: I20250711 07:49:14.409047 1535 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 11 07:49:14.409373 update_engine[1535]: I20250711 07:49:14.409061 1535 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 11 07:49:14.409373 update_engine[1535]: I20250711 07:49:14.409076 1535 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jul 11 07:49:14.409917 update_engine[1535]: I20250711 07:49:14.409538 1535 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 11 07:49:14.409917 update_engine[1535]: I20250711 07:49:14.409732 1535 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 11 07:49:14.409917 update_engine[1535]: I20250711 07:49:14.409777 1535 omaha_request_action.cc:272] Request: Jul 11 07:49:14.409917 update_engine[1535]: I20250711 07:49:14.409799 1535 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 11 07:49:14.412833 update_engine[1535]: I20250711 07:49:14.410326 1535 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 11 07:49:14.413453 update_engine[1535]: I20250711 07:49:14.413286 1535 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 11 07:49:14.414725 locksmithd[1568]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 11 07:49:14.418584 update_engine[1535]: E20250711 07:49:14.418450 1535 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 11 07:49:14.418764 update_engine[1535]: I20250711 07:49:14.418674 1535 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 11 07:49:14.418764 update_engine[1535]: I20250711 07:49:14.418715 1535 omaha_request_action.cc:617] Omaha request response: Jul 11 07:49:14.418764 update_engine[1535]: I20250711 07:49:14.418737 1535 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 11 07:49:14.418764 update_engine[1535]: I20250711 07:49:14.418753 1535 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 11 07:49:14.419519 update_engine[1535]: I20250711 07:49:14.418771 1535 update_attempter.cc:306] Processing Done. Jul 11 07:49:14.419519 update_engine[1535]: I20250711 07:49:14.418790 1535 update_attempter.cc:310] Error event sent. Jul 11 07:49:14.419519 update_engine[1535]: I20250711 07:49:14.418850 1535 update_check_scheduler.cc:74] Next update check in 46m20s Jul 11 07:49:14.421429 locksmithd[1568]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 11 07:49:14.905566 containerd[1563]: time="2025-07-11T07:49:14.905479946Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"e7d5f485e833eb2bbe7f867456d617208984b2045b09e00fda42ceac1b2b75cf\" pid:7450 exited_at:{seconds:1752220154 nanos:903372577}" Jul 11 07:49:15.454483 sshd[7435]: Accepted publickey for core from 172.24.4.1 port 39038 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY Jul 11 07:49:15.458671 sshd-session[7435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:49:15.474959 systemd-logind[1532]: New session 24 of user core. Jul 11 07:49:15.487334 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 11 07:49:16.345339 sshd[7459]: Connection closed by 172.24.4.1 port 39038 Jul 11 07:49:16.346436 sshd-session[7435]: pam_unix(sshd:session): session closed for user core Jul 11 07:49:16.353863 systemd[1]: sshd@21-172.24.4.223:22-172.24.4.1:39038.service: Deactivated successfully. Jul 11 07:49:16.358945 systemd[1]: session-24.scope: Deactivated successfully. Jul 11 07:49:16.361352 systemd-logind[1532]: Session 24 logged out. Waiting for processes to exit. Jul 11 07:49:16.364701 systemd-logind[1532]: Removed session 24. Jul 11 07:49:20.151533 kubelet[2804]: I0711 07:49:20.151414 2804 scope.go:117] "RemoveContainer" containerID="042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec" Jul 11 07:49:20.153362 kubelet[2804]: E0711 07:49:20.153166 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=tigera-operator pod=tigera-operator-5bf8dfcb4-mplsp_tigera-operator(b888df97-3c70-41ba-a3f5-7ac75508eb3b)\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-mplsp" podUID="b888df97-3c70-41ba-a3f5-7ac75508eb3b" Jul 11 07:49:21.376553 systemd[1]: Started sshd@22-172.24.4.223:22-172.24.4.1:39054.service - OpenSSH per-connection server daemon (172.24.4.1:39054). 
Jul 11 07:49:21.539753 containerd[1563]: time="2025-07-11T07:49:21.539510744Z" level=warning msg="container event discarded" container=af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd type=CONTAINER_CREATED_EVENT Jul 11 07:49:21.561169 containerd[1563]: time="2025-07-11T07:49:21.561075773Z" level=warning msg="container event discarded" container=111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471 type=CONTAINER_CREATED_EVENT Jul 11 07:49:21.573508 containerd[1563]: time="2025-07-11T07:49:21.573397943Z" level=warning msg="container event discarded" container=50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6 type=CONTAINER_CREATED_EVENT Jul 11 07:49:21.728128 containerd[1563]: time="2025-07-11T07:49:21.727829994Z" level=warning msg="container event discarded" container=c3a3fff024102c1a16f172a7e330ce73c6b87f88bf0a8c6b4efa5937b79b0c1b type=CONTAINER_CREATED_EVENT Jul 11 07:49:21.813605 containerd[1563]: time="2025-07-11T07:49:21.813449935Z" level=warning msg="container event discarded" container=af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd type=CONTAINER_STARTED_EVENT Jul 11 07:49:21.848696 containerd[1563]: time="2025-07-11T07:49:21.847934128Z" level=warning msg="container event discarded" container=111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471 type=CONTAINER_STARTED_EVENT Jul 11 07:49:22.028180 containerd[1563]: time="2025-07-11T07:49:22.027692312Z" level=warning msg="container event discarded" container=50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6 type=CONTAINER_STARTED_EVENT Jul 11 07:49:22.028180 containerd[1563]: time="2025-07-11T07:49:22.027843398Z" level=warning msg="container event discarded" container=c3a3fff024102c1a16f172a7e330ce73c6b87f88bf0a8c6b4efa5937b79b0c1b type=CONTAINER_STARTED_EVENT Jul 11 07:49:22.609597 sshd[7471]: Accepted publickey for core from 172.24.4.1 port 39054 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY Jul 11 07:49:22.616175 sshd-session[7471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:49:22.634756 systemd-logind[1532]: New session 25 of user core. Jul 11 07:49:22.645445 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 11 07:49:23.155340 kubelet[2804]: I0711 07:49:23.155168 2804 scope.go:117] "RemoveContainer" containerID="046551ca6d380500fe02aa0a1ee4c78b6dc51bd4783efd4c0c3e3297a61b1e3d" Jul 11 07:49:23.156211 kubelet[2804]: E0711 07:49:23.155594 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(1b42591ea292e73e5775e231f0503337)\"" pod="kube-system/kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="1b42591ea292e73e5775e231f0503337" Jul 11 07:49:23.407887 sshd[7474]: Connection closed by 172.24.4.1 port 39054 Jul 11 07:49:23.409779 sshd-session[7471]: pam_unix(sshd:session): session closed for user core Jul 11 07:49:23.421859 systemd[1]: sshd@22-172.24.4.223:22-172.24.4.1:39054.service: Deactivated successfully. Jul 11 07:49:23.428253 systemd[1]: session-25.scope: Deactivated successfully. Jul 11 07:49:23.434846 systemd-logind[1532]: Session 25 logged out. Waiting for processes to exit. Jul 11 07:49:23.441412 systemd-logind[1532]: Removed session 25. 
Jul 11 07:49:24.528704 containerd[1563]: time="2025-07-11T07:49:24.528471629Z" level=warning msg="container event discarded" container=7b5578a88a2cc266e1868a33f863c9344c6627d244129729b2cd42bc659751f9 type=CONTAINER_CREATED_EVENT Jul 11 07:49:24.682089 containerd[1563]: time="2025-07-11T07:49:24.681991270Z" level=warning msg="container event discarded" container=7b5578a88a2cc266e1868a33f863c9344c6627d244129729b2cd42bc659751f9 type=CONTAINER_STARTED_EVENT Jul 11 07:49:26.282767 containerd[1563]: time="2025-07-11T07:49:26.282505556Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"46f9089dfd04df9a95b541a35ed92a0fbe35d7a546beb9c95c7af803237e2743\" pid:7498 exited_at:{seconds:1752220166 nanos:282050480}" Jul 11 07:49:28.434570 systemd[1]: Started sshd@23-172.24.4.223:22-172.24.4.1:54956.service - OpenSSH per-connection server daemon (172.24.4.1:54956). Jul 11 07:49:29.154180 kubelet[2804]: I0711 07:49:29.153648 2804 scope.go:117] "RemoveContainer" containerID="3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24" Jul 11 07:49:29.154180 kubelet[2804]: E0711 07:49:29.154047 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(9d801a80cb49e408d2efc270d30c5fd8)\"" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="9d801a80cb49e408d2efc270d30c5fd8" Jul 11 07:49:29.583850 sshd[7508]: Accepted publickey for core from 172.24.4.1 port 54956 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY Jul 11 07:49:29.587130 sshd-session[7508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:49:29.601889 systemd-logind[1532]: New session 26 of user core. Jul 11 07:49:29.619386 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 11 07:49:30.474401 sshd[7513]: Connection closed by 172.24.4.1 port 54956 Jul 11 07:49:30.474943 sshd-session[7508]: pam_unix(sshd:session): session closed for user core Jul 11 07:49:30.489583 systemd[1]: sshd@23-172.24.4.223:22-172.24.4.1:54956.service: Deactivated successfully. Jul 11 07:49:30.496589 systemd[1]: session-26.scope: Deactivated successfully. Jul 11 07:49:30.508496 systemd-logind[1532]: Session 26 logged out. Waiting for processes to exit. Jul 11 07:49:30.512914 systemd-logind[1532]: Removed session 26. 
Jul 11 07:49:32.153035 kubelet[2804]: I0711 07:49:32.152124 2804 scope.go:117] "RemoveContainer" containerID="042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec" Jul 11 07:49:32.162966 containerd[1563]: time="2025-07-11T07:49:32.162863860Z" level=info msg="CreateContainer within sandbox \"d4e63947bca21e2d084c4995faabd79384a595a67fcdfbf24c08f94329d27fb6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:5,}" Jul 11 07:49:32.196895 containerd[1563]: time="2025-07-11T07:49:32.196275832Z" level=info msg="Container 9574a6e24a786c6c060c869e1a13800552963f21845e3d6ffee6223d0bcc678e: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:49:32.217956 containerd[1563]: time="2025-07-11T07:49:32.217906352Z" level=info msg="CreateContainer within sandbox \"d4e63947bca21e2d084c4995faabd79384a595a67fcdfbf24c08f94329d27fb6\" for &ContainerMetadata{Name:tigera-operator,Attempt:5,} returns container id \"9574a6e24a786c6c060c869e1a13800552963f21845e3d6ffee6223d0bcc678e\"" Jul 11 07:49:32.219310 containerd[1563]: time="2025-07-11T07:49:32.219023727Z" level=info msg="StartContainer for \"9574a6e24a786c6c060c869e1a13800552963f21845e3d6ffee6223d0bcc678e\"" Jul 11 07:49:32.220876 containerd[1563]: time="2025-07-11T07:49:32.220837634Z" level=info msg="connecting to shim 9574a6e24a786c6c060c869e1a13800552963f21845e3d6ffee6223d0bcc678e" address="unix:///run/containerd/s/1d2a678b6bec198581cc6411f0a23f0c64cd0b683f63b8789592857e68a53eb2" protocol=ttrpc version=3 Jul 11 07:49:32.259291 systemd[1]: Started cri-containerd-9574a6e24a786c6c060c869e1a13800552963f21845e3d6ffee6223d0bcc678e.scope - libcontainer container 9574a6e24a786c6c060c869e1a13800552963f21845e3d6ffee6223d0bcc678e. Jul 11 07:49:32.307906 containerd[1563]: time="2025-07-11T07:49:32.307843179Z" level=info msg="StartContainer for \"9574a6e24a786c6c060c869e1a13800552963f21845e3d6ffee6223d0bcc678e\" returns successfully" Jul 11 07:49:35.497777 systemd[1]: Started sshd@24-172.24.4.223:22-172.24.4.1:34706.service - OpenSSH per-connection server daemon (172.24.4.1:34706). Jul 11 07:49:36.765296 sshd[7558]: Accepted publickey for core from 172.24.4.1 port 34706 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY Jul 11 07:49:36.768862 sshd-session[7558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:49:36.782207 systemd-logind[1532]: New session 27 of user core. Jul 11 07:49:36.788305 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 11 07:49:37.576041 sshd[7562]: Connection closed by 172.24.4.1 port 34706 Jul 11 07:49:37.575808 sshd-session[7558]: pam_unix(sshd:session): session closed for user core Jul 11 07:49:37.586171 systemd[1]: sshd@24-172.24.4.223:22-172.24.4.1:34706.service: Deactivated successfully. Jul 11 07:49:37.592680 systemd[1]: session-27.scope: Deactivated successfully. Jul 11 07:49:37.601461 systemd-logind[1532]: Session 27 logged out. Waiting for processes to exit. Jul 11 07:49:37.604576 systemd-logind[1532]: Removed session 27. 
Jul 11 07:49:38.151614 kubelet[2804]: I0711 07:49:38.151498 2804 scope.go:117] "RemoveContainer" containerID="046551ca6d380500fe02aa0a1ee4c78b6dc51bd4783efd4c0c3e3297a61b1e3d" Jul 11 07:49:38.152569 kubelet[2804]: E0711 07:49:38.152456 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(1b42591ea292e73e5775e231f0503337)\"" pod="kube-system/kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="1b42591ea292e73e5775e231f0503337" Jul 11 07:49:39.377194 containerd[1563]: time="2025-07-11T07:49:39.376963332Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160\" id:\"940ef51be88f45fb46cf66fb84b5a9a7edbd1dd8c13fba0135989d78fc006bb5\" pid:7585 exited_at:{seconds:1752220179 nanos:375595095}" Jul 11 07:49:42.152232 kubelet[2804]: I0711 07:49:42.152128 2804 scope.go:117] "RemoveContainer" containerID="3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24" Jul 11 07:49:42.158333 containerd[1563]: time="2025-07-11T07:49:42.158246638Z" level=info msg="CreateContainer within sandbox \"79c49b98ecf271a131bb6bf12e4c7a943626dffb67cb25bbedca7b6b5a740ca5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:5,}" Jul 11 07:49:42.182703 containerd[1563]: time="2025-07-11T07:49:42.181439280Z" level=info msg="Container 81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9: CDI devices from CRI Config.CDIDevices: []" Jul 11 07:49:42.219055 containerd[1563]: time="2025-07-11T07:49:42.218856144Z" level=info msg="CreateContainer within sandbox \"79c49b98ecf271a131bb6bf12e4c7a943626dffb67cb25bbedca7b6b5a740ca5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:5,} returns container id \"81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9\"" Jul 11 07:49:42.221036 containerd[1563]: time="2025-07-11T07:49:42.220861201Z" level=info msg="StartContainer for \"81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9\"" Jul 11 07:49:42.224547 containerd[1563]: time="2025-07-11T07:49:42.224438097Z" level=info msg="connecting to shim 81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9" address="unix:///run/containerd/s/ae96405862b11ec30ecf5d51a3df2c8e24b4f00f4b2ee133e08083c92e7d68c0" protocol=ttrpc version=3 Jul 11 07:49:42.285155 systemd[1]: Started cri-containerd-81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9.scope - libcontainer container 81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9. Jul 11 07:49:42.368580 containerd[1563]: time="2025-07-11T07:49:42.368515832Z" level=info msg="StartContainer for \"81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9\" returns successfully" Jul 11 07:49:42.599684 systemd[1]: Started sshd@25-172.24.4.223:22-172.24.4.1:34710.service - OpenSSH per-connection server daemon (172.24.4.1:34710). 
Jul 11 07:49:43.210322 containerd[1563]: time="2025-07-11T07:49:43.210270903Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"e28d911b9c5e5568cf5b646930e58e43e6044c9cb4f6e2d33c8b42b8644a3a66\" pid:7644 exited_at:{seconds:1752220183 nanos:208328294}" Jul 11 07:49:43.301759 containerd[1563]: time="2025-07-11T07:49:43.301687814Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"eff8c388fe4384cb9cfa9b199c92abe79935f146c8e582dc9ac0b3a0a4f1e888\" pid:7663 exited_at:{seconds:1752220183 nanos:301289253}" Jul 11 07:49:43.902092 sshd[7628]: Accepted publickey for core from 172.24.4.1 port 34710 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY Jul 11 07:49:43.906434 sshd-session[7628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:49:43.921131 systemd-logind[1532]: New session 28 of user core. Jul 11 07:49:43.929378 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 11 07:49:44.723911 sshd[7676]: Connection closed by 172.24.4.1 port 34710 Jul 11 07:49:44.724349 sshd-session[7628]: pam_unix(sshd:session): session closed for user core Jul 11 07:49:44.729474 systemd[1]: sshd@25-172.24.4.223:22-172.24.4.1:34710.service: Deactivated successfully. Jul 11 07:49:44.729898 systemd-logind[1532]: Session 28 logged out. Waiting for processes to exit. Jul 11 07:49:44.734685 systemd[1]: session-28.scope: Deactivated successfully. Jul 11 07:49:44.737683 systemd-logind[1532]: Removed session 28. Jul 11 07:49:49.773516 systemd[1]: Started sshd@26-172.24.4.223:22-172.24.4.1:60866.service - OpenSSH per-connection server daemon (172.24.4.1:60866). Jul 11 07:49:50.993129 sshd[7687]: Accepted publickey for core from 172.24.4.1 port 60866 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY Jul 11 07:49:50.996400 sshd-session[7687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:49:51.013602 systemd-logind[1532]: New session 29 of user core. Jul 11 07:49:51.023324 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 11 07:49:51.810615 sshd[7690]: Connection closed by 172.24.4.1 port 60866 Jul 11 07:49:51.814053 sshd-session[7687]: pam_unix(sshd:session): session closed for user core Jul 11 07:49:51.829577 systemd[1]: sshd@26-172.24.4.223:22-172.24.4.1:60866.service: Deactivated successfully. Jul 11 07:49:51.838286 systemd[1]: session-29.scope: Deactivated successfully. Jul 11 07:49:51.843391 systemd-logind[1532]: Session 29 logged out. Waiting for processes to exit. Jul 11 07:49:51.847714 systemd-logind[1532]: Removed session 29. Jul 11 07:49:53.152163 kubelet[2804]: I0711 07:49:53.151367 2804 scope.go:117] "RemoveContainer" containerID="046551ca6d380500fe02aa0a1ee4c78b6dc51bd4783efd4c0c3e3297a61b1e3d" Jul 11 07:49:53.152163 kubelet[2804]: E0711 07:49:53.151674 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(1b42591ea292e73e5775e231f0503337)\"" pod="kube-system/kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="1b42591ea292e73e5775e231f0503337" Jul 11 07:49:56.842723 systemd[1]: Started sshd@27-172.24.4.223:22-172.24.4.1:39750.service - OpenSSH per-connection server daemon (172.24.4.1:39750). 
Jul 11 07:49:57.969535 sshd[7702]: Accepted publickey for core from 172.24.4.1 port 39750 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY Jul 11 07:49:57.971395 sshd-session[7702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:49:57.981704 systemd-logind[1532]: New session 30 of user core. Jul 11 07:49:57.984224 systemd[1]: Started session-30.scope - Session 30 of User core. Jul 11 07:49:58.852042 sshd[7705]: Connection closed by 172.24.4.1 port 39750 Jul 11 07:49:58.852759 sshd-session[7702]: pam_unix(sshd:session): session closed for user core Jul 11 07:49:58.858586 systemd[1]: sshd@27-172.24.4.223:22-172.24.4.1:39750.service: Deactivated successfully. Jul 11 07:49:58.865305 systemd[1]: session-30.scope: Deactivated successfully. Jul 11 07:49:58.869660 systemd-logind[1532]: Session 30 logged out. Waiting for processes to exit. Jul 11 07:49:58.871762 systemd-logind[1532]: Removed session 30. Jul 11 07:50:03.884873 systemd[1]: Started sshd@28-172.24.4.223:22-172.24.4.1:55992.service - OpenSSH per-connection server daemon (172.24.4.1:55992). Jul 11 07:50:05.391257 sshd[7722]: Accepted publickey for core from 172.24.4.1 port 55992 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY Jul 11 07:50:05.395800 sshd-session[7722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:50:05.413075 systemd-logind[1532]: New session 31 of user core. Jul 11 07:50:05.424639 systemd[1]: Started session-31.scope - Session 31 of User core. Jul 11 07:50:06.296093 sshd[7727]: Connection closed by 172.24.4.1 port 55992 Jul 11 07:50:06.297037 sshd-session[7722]: pam_unix(sshd:session): session closed for user core Jul 11 07:50:06.311928 systemd[1]: sshd@28-172.24.4.223:22-172.24.4.1:55992.service: Deactivated successfully. Jul 11 07:50:06.318498 systemd[1]: session-31.scope: Deactivated successfully. Jul 11 07:50:06.322104 systemd-logind[1532]: Session 31 logged out. Waiting for processes to exit. Jul 11 07:50:06.327646 systemd[1]: Started sshd@29-172.24.4.223:22-172.24.4.1:55994.service - OpenSSH per-connection server daemon (172.24.4.1:55994). Jul 11 07:50:06.329436 systemd-logind[1532]: Removed session 31. Jul 11 07:50:07.856927 sshd[7739]: Accepted publickey for core from 172.24.4.1 port 55994 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY Jul 11 07:50:07.858964 sshd-session[7739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:50:07.866920 systemd-logind[1532]: New session 32 of user core. Jul 11 07:50:07.873723 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jul 11 07:50:08.152249 kubelet[2804]: I0711 07:50:08.152108 2804 scope.go:117] "RemoveContainer" containerID="046551ca6d380500fe02aa0a1ee4c78b6dc51bd4783efd4c0c3e3297a61b1e3d" Jul 11 07:50:08.152848 kubelet[2804]: E0711 07:50:08.152707 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(1b42591ea292e73e5775e231f0503337)\"" pod="kube-system/kube-scheduler-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="1b42591ea292e73e5775e231f0503337" Jul 11 07:50:09.794395 sshd[7742]: Connection closed by 172.24.4.1 port 55994 Jul 11 07:50:09.794189 sshd-session[7739]: pam_unix(sshd:session): session closed for user core Jul 11 07:50:09.824185 systemd[1]: sshd@29-172.24.4.223:22-172.24.4.1:55994.service: Deactivated successfully. Jul 11 07:50:09.835605 systemd[1]: session-32.scope: Deactivated successfully. Jul 11 07:50:09.848110 systemd-logind[1532]: Session 32 logged out. Waiting for processes to exit. Jul 11 07:50:09.850944 systemd[1]: Started sshd@30-172.24.4.223:22-172.24.4.1:56002.service - OpenSSH per-connection server daemon (172.24.4.1:56002). Jul 11 07:50:09.856156 systemd-logind[1532]: Removed session 32. Jul 11 07:50:10.262069 containerd[1563]: time="2025-07-11T07:50:10.261562455Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160\" id:\"00abeba0852df221d3aeaffa67cf3fafabbdb0cb584b1a00ef291d79f65a2605\" pid:7766 exited_at:{seconds:1752220210 nanos:260634587}" Jul 11 07:50:11.047054 sshd[7762]: Accepted publickey for core from 172.24.4.1 port 56002 ssh2: RSA SHA256:DzHXAuzCvHtwRlA3Jrqr9sXSQOqRmwR5EdNx+SEbhoY Jul 11 07:50:11.050838 sshd-session[7762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 07:50:11.064164 systemd-logind[1532]: New session 33 of user core. Jul 11 07:50:11.071361 systemd[1]: Started session-33.scope - Session 33 of User core. Jul 11 07:50:13.281922 containerd[1563]: time="2025-07-11T07:50:13.281564382Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"32daf2446c018a19603ccb1255ffae060eb55e96efca1841bc9562097e3d4d01\" pid:7801 exited_at:{seconds:1752220213 nanos:277754456}" Jul 11 07:50:34.589825 systemd[1]: cri-containerd-81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9.scope: Deactivated successfully. 
Jul 11 07:50:51.376625 containerd[1563]: time="2025-07-11T07:50:34.582550371Z" level=warning msg="container event discarded" container=af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd type=CONTAINER_STOPPED_EVENT Jul 11 07:50:51.376625 containerd[1563]: time="2025-07-11T07:50:34.706377096Z" level=info msg="received exit event container_id:\"81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9\" id:\"81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9\" pid:7609 exit_status:1 exited_at:{seconds:1752220234 nanos:660951025}" Jul 11 07:50:51.376625 containerd[1563]: time="2025-07-11T07:50:34.744635098Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9\" id:\"81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9\" pid:7609 exit_status:1 exited_at:{seconds:1752220234 nanos:660951025}" Jul 11 07:50:51.376625 containerd[1563]: time="2025-07-11T07:50:34.874017016Z" level=info msg="received exit event container_id:\"9574a6e24a786c6c060c869e1a13800552963f21845e3d6ffee6223d0bcc678e\" id:\"9574a6e24a786c6c060c869e1a13800552963f21845e3d6ffee6223d0bcc678e\" pid:7537 exit_status:1 exited_at:{seconds:1752220234 nanos:873408289}" Jul 11 07:50:51.376625 containerd[1563]: time="2025-07-11T07:50:34.874810430Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9574a6e24a786c6c060c869e1a13800552963f21845e3d6ffee6223d0bcc678e\" id:\"9574a6e24a786c6c060c869e1a13800552963f21845e3d6ffee6223d0bcc678e\" pid:7537 exit_status:1 exited_at:{seconds:1752220234 nanos:873408289}" Jul 11 07:50:51.376625 containerd[1563]: time="2025-07-11T07:50:35.213761935Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"6f163cac4690fbf0ccc671a8cda2f53227ebda4b0bd61d1d09327bbbca299e08\" pid:7865 exited_at:{seconds:1752220235 nanos:213085671}" Jul 11 07:50:51.376625 containerd[1563]: time="2025-07-11T07:50:35.217410677Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"a2071b5b444efd59e7ac18b10d934b4b8eabae85215a8b210545e5933840dd27\" pid:7862 exited_at:{seconds:1752220235 nanos:216403229}" Jul 11 07:50:51.376625 containerd[1563]: time="2025-07-11T07:50:41.953059778Z" level=warning msg="container event discarded" container=50c78372ca172a1f17d9a0523784826d69f5cc863238d38ad4f6cd087db5efd6 type=CONTAINER_STOPPED_EVENT Jul 11 07:50:51.376625 containerd[1563]: time="2025-07-11T07:50:41.953324887Z" level=warning msg="container event discarded" container=111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471 type=CONTAINER_STOPPED_EVENT Jul 11 07:50:51.376625 containerd[1563]: time="2025-07-11T07:50:41.953358812Z" level=warning msg="container event discarded" container=e61666cacb493492b7066cdf963c503e4ed9bf26262b70b4c151bd0700d6502c type=CONTAINER_DELETED_EVENT Jul 11 07:50:51.376625 containerd[1563]: time="2025-07-11T07:50:41.953378238Z" level=warning msg="container event discarded" container=8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073 type=CONTAINER_CREATED_EVENT Jul 11 07:50:51.376625 containerd[1563]: time="2025-07-11T07:50:41.953444402Z" level=warning msg="container event discarded" container=8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073 type=CONTAINER_STARTED_EVENT Jul 11 07:50:51.376625 containerd[1563]: time="2025-07-11T07:50:41.953461394Z" level=warning 
msg="container event discarded" container=13b504d26b6f0270296a9f2fcabd207ed619038c7625423a1f210459d1470730 type=CONTAINER_DELETED_EVENT Jul 11 07:50:51.376625 containerd[1563]: time="2025-07-11T07:50:41.953472976Z" level=warning msg="container event discarded" container=8e5e03f88c91ab39a9c8de372d6c2a154b1808764be77b0a3ca5992497f30cc2 type=CONTAINER_DELETED_EVENT Jul 11 07:50:51.376625 containerd[1563]: time="2025-07-11T07:50:41.953481934Z" level=warning msg="container event discarded" container=64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2 type=CONTAINER_CREATED_EVENT Jul 11 07:50:51.376625 containerd[1563]: time="2025-07-11T07:50:41.953491993Z" level=warning msg="container event discarded" container=64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2 type=CONTAINER_STARTED_EVENT Jul 11 07:50:51.376625 containerd[1563]: time="2025-07-11T07:50:44.708604036Z" level=error msg="failed to handle container TaskExit event container_id:\"81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9\" id:\"81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9\" pid:7609 exit_status:1 exited_at:{seconds:1752220234 nanos:660951025}" error="failed to stop container: failed to delete task: context deadline exceeded" Jul 11 07:50:51.376625 containerd[1563]: time="2025-07-11T07:50:44.874959805Z" level=error msg="failed to handle container TaskExit event container_id:\"9574a6e24a786c6c060c869e1a13800552963f21845e3d6ffee6223d0bcc678e\" id:\"9574a6e24a786c6c060c869e1a13800552963f21845e3d6ffee6223d0bcc678e\" pid:7537 exit_status:1 exited_at:{seconds:1752220234 nanos:873408289}" error="failed to stop container: failed to delete task: context deadline exceeded" Jul 11 07:50:51.376625 containerd[1563]: time="2025-07-11T07:50:46.490886847Z" level=info msg="TaskExit event container_id:\"81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9\" id:\"81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9\" pid:7609 exit_status:1 exited_at:{seconds:1752220234 nanos:660951025}" Jul 11 07:50:51.376625 containerd[1563]: time="2025-07-11T07:50:48.493416635Z" level=error msg="get state for 81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9" error="context deadline exceeded" Jul 11 07:50:51.376625 containerd[1563]: time="2025-07-11T07:50:48.493591304Z" level=warning msg="unknown status" status=0 Jul 11 07:50:51.376625 containerd[1563]: time="2025-07-11T07:50:50.496956049Z" level=error msg="get state for 81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9" error="context deadline exceeded" Jul 11 07:50:51.376625 containerd[1563]: time="2025-07-11T07:50:50.497077327Z" level=warning msg="unknown status" status=0 Jul 11 07:50:51.380769 kubelet[2804]: I0711 07:50:34.838870 2804 scope.go:117] "RemoveContainer" containerID="046551ca6d380500fe02aa0a1ee4c78b6dc51bd4783efd4c0c3e3297a61b1e3d" Jul 11 07:50:34.591243 systemd[1]: cri-containerd-81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9.scope: Consumed 11.757s CPU time, 51M memory peak, 584K read from disk. Jul 11 07:50:34.866420 systemd[1]: cri-containerd-9574a6e24a786c6c060c869e1a13800552963f21845e3d6ffee6223d0bcc678e.scope: Deactivated successfully. Jul 11 07:50:34.866885 systemd[1]: cri-containerd-9574a6e24a786c6c060c869e1a13800552963f21845e3d6ffee6223d0bcc678e.scope: Consumed 3.463s CPU time, 86.1M memory peak, 368K read from disk. 
Jul 11 07:50:34.937333 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9-rootfs.mount: Deactivated successfully.
Jul 11 07:50:35.016701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9574a6e24a786c6c060c869e1a13800552963f21845e3d6ffee6223d0bcc678e-rootfs.mount: Deactivated successfully.
Jul 11 07:50:52.503491 containerd[1563]: time="2025-07-11T07:50:52.503208797Z" level=error msg="get state for 81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9" error="context deadline exceeded"
Jul 11 07:50:52.506004 containerd[1563]: time="2025-07-11T07:50:52.504434175Z" level=warning msg="unknown status" status=0
Jul 11 07:50:52.669246 kubelet[2804]: E0711 07:50:52.669066 2804 controller.go:195] "Failed to update lease" err="Put \"https://172.24.4.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4392-0-0-n-cdb6f4f5a9.novalocal?timeout=10s\": context deadline exceeded"
Jul 11 07:50:52.697230 kubelet[2804]: I0711 07:50:52.695739 2804 setters.go:600] "Node became not ready" node="ci-4392-0-0-n-cdb6f4f5a9.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-11T07:50:52Z","lastTransitionTime":"2025-07-11T07:50:52Z","reason":"KubeletNotReady","message":"container runtime is down"}
Jul 11 07:50:54.976751 kubelet[2804]: E0711 07:50:52.845924 2804 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="18.212s"
Jul 11 07:50:54.979622 containerd[1563]: time="2025-07-11T07:50:53.020123490Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"6d27f7a1bc06e45c1d0cf92ac6d396dd976ad7062901d6ce86150a8b7fa802f5\" pid:7981 exited_at:{seconds:1752220253 nanos:18844402}"
Jul 11 07:50:54.987070 containerd[1563]: time="2025-07-11T07:50:54.982578177Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"2d7a4c46636ab6d0813c2ebf91fed923df8427f97c052dcf04cd191db6d73570\" pid:7942 exit_status:1 exited_at:{seconds:1752220254 nanos:981690647}"
Jul 11 07:50:54.987070 containerd[1563]: time="2025-07-11T07:50:54.986757940Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"4bcdbbfa9aff80537d2258b80f3bff2e73c4b98815f0599fd755f5ba9d3c4231\" pid:7973 exit_status:1 exited_at:{seconds:1752220254 nanos:981713350}"
Jul 11 07:50:55.005519 kubelet[2804]: E0711 07:50:55.005429 2804 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.159s"
Jul 11 07:50:55.094810 containerd[1563]: time="2025-07-11T07:50:55.094286889Z" level=info msg="CreateContainer within sandbox \"a4c473d87f8442f19bdae94bef811290169de76c1ad853d95101f86534125f68\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:5,}"
Jul 11 07:50:55.144073 containerd[1563]: time="2025-07-11T07:50:55.143025026Z" level=error msg="ttrpc: received message on inactive stream" stream=37
Jul 11 07:50:55.144500 containerd[1563]: time="2025-07-11T07:50:55.144330404Z" level=error msg="ttrpc: received message on inactive stream" stream=39
Jul 11 07:50:55.144500 containerd[1563]: time="2025-07-11T07:50:55.144356543Z" level=error msg="ttrpc: received message on inactive stream" stream=41
Jul 11 07:50:55.144500 containerd[1563]: time="2025-07-11T07:50:55.144366774Z" level=error msg="ttrpc: received message on inactive stream" stream=43
Jul 11 07:50:55.154524 containerd[1563]: time="2025-07-11T07:50:55.154468337Z" level=info msg="Ensure that container 81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9 in task-service has been cleanup successfully"
Jul 11 07:50:55.168658 containerd[1563]: time="2025-07-11T07:50:55.167285415Z" level=error msg="ttrpc: received message on inactive stream" stream=39
Jul 11 07:50:55.238075 containerd[1563]: time="2025-07-11T07:50:55.237538873Z" level=info msg="TaskExit event container_id:\"9574a6e24a786c6c060c869e1a13800552963f21845e3d6ffee6223d0bcc678e\" id:\"9574a6e24a786c6c060c869e1a13800552963f21845e3d6ffee6223d0bcc678e\" pid:7537 exit_status:1 exited_at:{seconds:1752220234 nanos:873408289}"
Jul 11 07:50:55.243227 containerd[1563]: time="2025-07-11T07:50:55.243144290Z" level=info msg="Ensure that container 9574a6e24a786c6c060c869e1a13800552963f21845e3d6ffee6223d0bcc678e in task-service has been cleanup successfully"
Jul 11 07:50:55.312100 containerd[1563]: time="2025-07-11T07:50:55.311851075Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160\" id:\"ef6144d1a46f7ebafbf022ef147275d54ed10228eb6180f6f2ea269ed9ae2b96\" pid:7929 exited_at:{seconds:1752220255 nanos:309948041}"
Jul 11 07:50:59.080273 containerd[1563]: time="2025-07-11T07:50:59.080045424Z" level=warning msg="container event discarded" container=042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec type=CONTAINER_CREATED_EVENT
Jul 11 07:50:59.272938 containerd[1563]: time="2025-07-11T07:50:59.272756415Z" level=warning msg="container event discarded" container=042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec type=CONTAINER_STARTED_EVENT
Jul 11 07:50:59.589803 sshd[7779]: Connection closed by 172.24.4.1 port 56002
Jul 11 07:50:59.847223 kubelet[2804]: E0711 07:50:59.846818 2804 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-07-11T07:50:52Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-07-11T07:50:52Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-07-11T07:50:52Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-07-11T07:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-07-11T07:50:52Z\\\",\\\"message\\\":\\\"container runtime is down\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"ci-4392-0-0-n-cdb6f4f5a9.novalocal\": etcdserver: request timed out"
Jul 11 07:50:59.850585 kubelet[2804]: E0711 07:50:59.850175 2804 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event="&Event{ObjectMeta:{ci-4392-0-0-n-cdb6f4f5a9.novalocal.18512309422b8de5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4392-0-0-n-cdb6f4f5a9.novalocal,UID:ci-4392-0-0-n-cdb6f4f5a9.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeNotReady,Message:Node ci-4392-0-0-n-cdb6f4f5a9.novalocal status is now: NodeNotReady,Source:EventSource{Component:kubelet,Host:ci-4392-0-0-n-cdb6f4f5a9.novalocal,},FirstTimestamp:2025-07-11 07:50:52.695662053 +0000 UTC m=+563.928591139,LastTimestamp:2025-07-11 07:50:52.695662053 +0000 UTC m=+563.928591139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4392-0-0-n-cdb6f4f5a9.novalocal,}"
Jul 11 07:50:59.855063 kubelet[2804]: E0711 07:50:59.854941 2804 controller.go:195] "Failed to update lease" err="etcdserver: request timed out"
Jul 11 07:51:06.636139 sshd-session[7762]: pam_unix(sshd:session): session closed for user core
Jul 11 07:51:06.664711 systemd[1]: sshd@30-172.24.4.223:22-172.24.4.1:56002.service: Deactivated successfully.
Jul 11 07:51:06.678509 systemd[1]: session-33.scope: Deactivated successfully.
Jul 11 07:51:06.686339 systemd-logind[1532]: Session 33 logged out. Waiting for processes to exit.
Jul 11 07:51:06.692114 systemd-logind[1532]: Removed session 33.
Jul 11 07:51:06.714271 kubelet[2804]: I0711 07:51:06.714155 2804 scope.go:117] "RemoveContainer" containerID="042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec"
Jul 11 07:51:06.731675 kubelet[2804]: E0711 07:51:06.731541 2804 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.58s"
Jul 11 07:51:06.743996 containerd[1563]: time="2025-07-11T07:51:06.743261041Z" level=info msg="RemoveContainer for \"042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec\""
Jul 11 07:51:06.775003 kubelet[2804]: I0711 07:51:06.773636 2804 scope.go:117] "RemoveContainer" containerID="3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24"
Jul 11 07:51:06.777995 kubelet[2804]: I0711 07:51:06.776904 2804 scope.go:117] "RemoveContainer" containerID="81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9"
Jul 11 07:51:06.777995 kubelet[2804]: E0711 07:51:06.777278 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(9d801a80cb49e408d2efc270d30c5fd8)\"" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="9d801a80cb49e408d2efc270d30c5fd8"
Jul 11 07:51:06.839374 containerd[1563]: time="2025-07-11T07:51:06.839316540Z" level=info msg="RemoveContainer for \"3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24\""
Jul 11 07:51:06.846595 kubelet[2804]: I0711 07:51:06.846344 2804 scope.go:117] "RemoveContainer" containerID="9574a6e24a786c6c060c869e1a13800552963f21845e3d6ffee6223d0bcc678e"
Jul 11 07:51:06.846595 kubelet[2804]: E0711 07:51:06.846533 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=tigera-operator pod=tigera-operator-5bf8dfcb4-mplsp_tigera-operator(b888df97-3c70-41ba-a3f5-7ac75508eb3b)\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-mplsp" podUID="b888df97-3c70-41ba-a3f5-7ac75508eb3b"
Jul 11 07:51:06.873883 kubelet[2804]: E0711 07:51:06.873695 2804 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event="&Event{ObjectMeta:{calico-typha-b49cd5fd5-nms9w.185123094552bc33 calico-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:calico-typha-b49cd5fd5-nms9w,UID:6baa22a7-acb9-4e1c-9b85-77cbb0c26d6c,APIVersion:v1,ResourceVersion:651,FieldPath:spec.containers{calico-typha},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://localhost:9098/readiness\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:ci-4392-0-0-n-cdb6f4f5a9.novalocal,},FirstTimestamp:2025-07-11 07:50:52.748561459 +0000 UTC m=+563.981490556,LastTimestamp:2025-07-11 07:50:52.748561459 +0000 UTC m=+563.981490556,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4392-0-0-n-cdb6f4f5a9.novalocal,}"
Jul 11 07:51:06.876711 kubelet[2804]: E0711 07:51:06.876649 2804 controller.go:195] "Failed to update lease" err="etcdserver: request timed out"
Jul 11 07:51:07.285045 kubelet[2804]: E0711 07:51:07.283590 2804 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-4392-0-0-n-cdb6f4f5a9.novalocal\": the object has been modified; please apply your changes to the latest version and try again"
Jul 11 07:51:07.300614 containerd[1563]: time="2025-07-11T07:51:07.300489367Z" level=info msg="RemoveContainer for \"042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec\" returns successfully"
Jul 11 07:51:07.302210 kubelet[2804]: I0711 07:51:07.302138 2804 scope.go:117] "RemoveContainer" containerID="3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24"
Jul 11 07:51:07.335405 containerd[1563]: time="2025-07-11T07:51:07.335345631Z" level=info msg="RemoveContainer for \"3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24\""
Jul 11 07:51:07.336118 containerd[1563]: time="2025-07-11T07:51:07.335987630Z" level=error msg="RemoveContainer for \"3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24\" failed" error="rpc error: code = Unknown desc = failed to set removing state for container \"3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24\": container is already in removing state"
Jul 11 07:51:07.336590 kubelet[2804]: E0711 07:51:07.336412 2804 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24\": container is already in removing state" containerID="3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24"
Jul 11 07:51:07.336590 kubelet[2804]: E0711 07:51:07.336491 2804 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to set removing state for container \"3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24\": container is already in removing state" containerID="3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24"
Jul 11 07:51:07.365254 containerd[1563]: time="2025-07-11T07:51:07.365189795Z" level=info msg="Container 388375ee67f9b488ebd37961c05e1d4390b61ca10bf05dcbe571a9deb8760c3b: CDI devices from CRI Config.CDIDevices: []"
Jul 11 07:51:07.375809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3417016921.mount: Deactivated successfully.
Jul 11 07:51:12.187537 kubelet[2804]: I0711 07:51:12.187406 2804 scope.go:117] "RemoveContainer" containerID="81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9"
Jul 11 07:51:12.188368 kubelet[2804]: E0711 07:51:12.187815 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(9d801a80cb49e408d2efc270d30c5fd8)\"" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="9d801a80cb49e408d2efc270d30c5fd8"
Jul 11 07:51:15.055644 kubelet[2804]: I0711 07:51:12.886776 2804 scope.go:117] "RemoveContainer" containerID="81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9"
Jul 11 07:51:15.055644 kubelet[2804]: E0711 07:51:12.887304 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(9d801a80cb49e408d2efc270d30c5fd8)\"" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="9d801a80cb49e408d2efc270d30c5fd8"
Jul 11 07:51:15.055644 kubelet[2804]: E0711 07:51:14.267452 2804 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event="&Event{ObjectMeta:{calico-apiserver-667bcfd89f-qbsvk.185123094564efa4 calico-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-apiserver,Name:calico-apiserver-667bcfd89f-qbsvk,UID:4abaf656-f2e8-4404-bfd1-0657de6a798a,APIVersion:v1,ResourceVersion:816,FieldPath:spec.containers{calico-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.84.65:5443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:ci-4392-0-0-n-cdb6f4f5a9.novalocal,},FirstTimestamp:2025-07-11 07:50:52.749754276 +0000 UTC m=+563.982683382,LastTimestamp:2025-07-11 07:50:52.749754276 +0000 UTC m=+563.982683382,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4392-0-0-n-cdb6f4f5a9.novalocal,}"
Jul 11 07:51:15.057602 containerd[1563]: time="2025-07-11T07:51:13.209334890Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"84b320246bf899ef742e2422a0af74f411a8b31c5da7d5c8c23bafdb3d00d6f2\" pid:8053 exit_status:1 exited_at:{seconds:1752220273 nanos:208145990}"
Jul 11 07:51:15.057602 containerd[1563]: time="2025-07-11T07:51:13.314754614Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"b65797a7648065f62b9ae61a9f67fc82f7048aeca708e4e4eeb024c3c3017d2b\" pid:8072 exited_at:{seconds:1752220273 nanos:314244513}"
Jul 11 07:51:15.057602 containerd[1563]: time="2025-07-11T07:51:14.887603484Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"a3c2550f52aa25e193d404fa0d191608bdc6c55028ea748563c0c68cc42e6ee7\" pid:8098 exited_at:{seconds:1752220274 nanos:887091129}"
Jul 11 07:51:15.058394 kubelet[2804]: I0711 07:51:14.334671 2804 status_manager.go:875] "Failed to update status for pod" pod="kube-system/kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfd148fd-d14a-4b38-a365-012b2396d789\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-07-11T07:51:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-07-11T07:51:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"containerd://14963706cc7c5d7ec8ffc5cc6a78be725808167c9818a86b00e45691b93faf95\\\",\\\"image\\\":\\\"registry.k8s.io/kube-apiserver:v1.31.10\\\",\\\"imageID\\\":\\\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-07-11T07:41:21Z\\\"}}}]}}\" for pod \"kube-system\"/\"kube-apiserver-ci-4392-0-0-n-cdb6f4f5a9.novalocal\": etcdserver: request timed out"
Jul 11 07:51:15.058605 kubelet[2804]: E0711 07:51:14.348902 2804 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-07-11T07:51:07Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-07-11T07:51:07Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-07-11T07:51:07Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-07-11T07:51:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-07-11T07:51:07Z\\\",\\\"message\\\":\\\"kubelet is posting ready status\\\",\\\"reason\\\":\\\"KubeletReady\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"ci-4392-0-0-n-cdb6f4f5a9.novalocal\": etcdserver: request timed out"
Jul 11 07:51:15.722395 containerd[1563]: time="2025-07-11T07:51:15.722203202Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160\" id:\"648ca2ea9655def25a9f5aa95851495a610ce5f015cf7b389a9337c93345a386\" pid:8029 exited_at:{seconds:1752220275 nanos:720649105}"
Jul 11 07:51:18.363967 kubelet[2804]: E0711 07:51:18.362768 2804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4392-0-0-n-cdb6f4f5a9.novalocal?timeout=10s\": context deadline exceeded" interval="200ms"
Jul 11 07:51:18.419820 kubelet[2804]: I0711 07:51:18.379807 2804 scope.go:117] "RemoveContainer" containerID="042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec"
Jul 11 07:51:18.419820 kubelet[2804]: E0711 07:51:18.384287 2804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec\": not found" containerID="042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec"
Jul 11 07:51:18.419820 kubelet[2804]: I0711 07:51:18.384367 2804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec"} err="failed to get container status \"042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec\": rpc error: code = NotFound desc = an error occurred when try to find container \"042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec\": not found"
Jul 11 07:51:18.420375 containerd[1563]: time="2025-07-11T07:51:18.375413448Z" level=info msg="RemoveContainer for \"3b5dd1bd7198feb2e8e41a2bc00002919f9e17ea032219333df96bab3ab55b24\" returns successfully"
Jul 11 07:51:18.420375 containerd[1563]: time="2025-07-11T07:51:18.383529092Z" level=error msg="ContainerStatus for \"042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"042c5b03e6aecc28dcc2131a112b0727b0fd841238f94b64f7b97eb2508074ec\": not found"
Jul 11 07:51:18.687366 containerd[1563]: time="2025-07-11T07:51:18.686897757Z" level=info msg="CreateContainer within sandbox \"a4c473d87f8442f19bdae94bef811290169de76c1ad853d95101f86534125f68\" for &ContainerMetadata{Name:kube-scheduler,Attempt:5,} returns container id \"388375ee67f9b488ebd37961c05e1d4390b61ca10bf05dcbe571a9deb8760c3b\""
Jul 11 07:51:18.689388 containerd[1563]: time="2025-07-11T07:51:18.689320459Z" level=info msg="StartContainer for \"388375ee67f9b488ebd37961c05e1d4390b61ca10bf05dcbe571a9deb8760c3b\""
Jul 11 07:51:18.693735 containerd[1563]: time="2025-07-11T07:51:18.693553942Z" level=info msg="connecting to shim 388375ee67f9b488ebd37961c05e1d4390b61ca10bf05dcbe571a9deb8760c3b" address="unix:///run/containerd/s/7fe16dd4310d91485b4c30a99a68643dca480a6dc08544d1724182e5168bb324" protocol=ttrpc version=3
Jul 11 07:51:18.753394 systemd[1]: Started cri-containerd-388375ee67f9b488ebd37961c05e1d4390b61ca10bf05dcbe571a9deb8760c3b.scope - libcontainer container 388375ee67f9b488ebd37961c05e1d4390b61ca10bf05dcbe571a9deb8760c3b.
Jul 11 07:51:19.642156 kubelet[2804]: I0711 07:51:19.151800 2804 scope.go:117] "RemoveContainer" containerID="9574a6e24a786c6c060c869e1a13800552963f21845e3d6ffee6223d0bcc678e"
Jul 11 07:51:19.642156 kubelet[2804]: E0711 07:51:19.152282 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=tigera-operator pod=tigera-operator-5bf8dfcb4-mplsp_tigera-operator(b888df97-3c70-41ba-a3f5-7ac75508eb3b)\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-mplsp" podUID="b888df97-3c70-41ba-a3f5-7ac75508eb3b"
Jul 11 07:51:19.717306 containerd[1563]: time="2025-07-11T07:51:19.717052724Z" level=info msg="StartContainer for \"388375ee67f9b488ebd37961c05e1d4390b61ca10bf05dcbe571a9deb8760c3b\" returns successfully"
Jul 11 07:51:24.153260 kubelet[2804]: I0711 07:51:24.151573 2804 scope.go:117] "RemoveContainer" containerID="81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9"
Jul 11 07:51:24.153260 kubelet[2804]: E0711 07:51:24.152037 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(9d801a80cb49e408d2efc270d30c5fd8)\"" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="9d801a80cb49e408d2efc270d30c5fd8"
Jul 11 07:51:36.230653 kubelet[2804]: I0711 07:51:33.010999 2804 scope.go:117] "RemoveContainer" containerID="9574a6e24a786c6c060c869e1a13800552963f21845e3d6ffee6223d0bcc678e"
Jul 11 07:51:36.230653 kubelet[2804]: E0711 07:51:33.034562 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=tigera-operator pod=tigera-operator-5bf8dfcb4-mplsp_tigera-operator(b888df97-3c70-41ba-a3f5-7ac75508eb3b)\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-mplsp" podUID="b888df97-3c70-41ba-a3f5-7ac75508eb3b"
Jul 11 07:51:36.357555 kubelet[2804]: E0711 07:51:36.357212 2804 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.206s"
Jul 11 07:51:36.359889 kubelet[2804]: I0711 07:51:36.359847 2804 scope.go:117] "RemoveContainer" containerID="81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9"
Jul 11 07:51:36.361414 kubelet[2804]: E0711 07:51:36.361231 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(9d801a80cb49e408d2efc270d30c5fd8)\"" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="9d801a80cb49e408d2efc270d30c5fd8"
Jul 11 07:51:39.197240 containerd[1563]: time="2025-07-11T07:51:39.197039203Z" level=warning msg="container event discarded" container=8cec014991175b2564ee1c2f918842761c90a0a04c96b98e530901bad24ec073 type=CONTAINER_STOPPED_EVENT
Jul 11 07:51:39.199291 containerd[1563]: time="2025-07-11T07:51:39.197544385Z" level=warning msg="container event discarded" container=64c4bc961d0cfd62ed14d63cf877d997e5c51536d1a80a4874837906a5328aa2 type=CONTAINER_STOPPED_EVENT
Jul 11 07:51:40.003986 kubelet[2804]: E0711 07:51:40.002689 2804 controller.go:195] "Failed to update lease" err="etcdserver: request timed out"
Jul 11 07:51:41.164374 containerd[1563]: time="2025-07-11T07:51:41.164035733Z" level=error msg="get state for 1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160" error="context deadline exceeded"
Jul 11 07:51:41.165520 containerd[1563]: time="2025-07-11T07:51:41.165466067Z" level=warning msg="unknown status" status=0
Jul 11 07:51:44.689804 containerd[1563]: time="2025-07-11T07:51:44.688412896Z" level=error msg="ttrpc: received message on inactive stream" stream=705
Jul 11 07:51:44.716162 containerd[1563]: time="2025-07-11T07:51:44.715249206Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"d7a95b5371c6c0d34b183b67f8fde0c53fe923e88717080447bc900983f882a0\" pid:8152 exited_at:{seconds:1752220304 nanos:692638875}"
Jul 11 07:51:44.769395 containerd[1563]: time="2025-07-11T07:51:44.768324440Z" level=error msg="ExecSync for \"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded"
Jul 11 07:51:44.774178 kubelet[2804]: E0711 07:51:44.771965 2804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba" cmd=["/usr/bin/check-status","-l"]
Jul 11 07:51:44.785427 kubelet[2804]: E0711 07:51:44.784383 2804 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-4392-0-0-n-cdb6f4f5a9.novalocal\": the object has been modified; please apply your changes to the latest version and try again"
Jul 11 07:51:44.999807 containerd[1563]: time="2025-07-11T07:51:44.999408299Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"10c603c3f98d32f658c1d85e06b3b64c375c11802293f6999db0a5a8a81b5a14\" pid:8179 exit_status:1 exited_at:{seconds:1752220304 nanos:998692400}"
Jul 11 07:51:45.102723 containerd[1563]: time="2025-07-11T07:51:45.102646591Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"da4a4cbfbd7304fc2ba2e2a0e80c2e870f0814cff1f2e006edfb73aafbf15dae\" pid:8246 exit_status:1 exited_at:{seconds:1752220305 nanos:102329765}"
Jul 11 07:51:45.129822 containerd[1563]: time="2025-07-11T07:51:45.129682597Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"e06f3c23c507a6ac80af820fa0a58777546bcc9c07a4bfccb009b2d5d82ea247\" pid:8213 exited_at:{seconds:1752220305 nanos:129131960}"
Jul 11 07:51:46.118697 containerd[1563]: time="2025-07-11T07:51:46.118538257Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160\" id:\"7274b3f34cd04a744d1bf145e7c59c58d6b5f8461c2cad6abe33534329d31c88\" pid:8204 exited_at:{seconds:1752220306 nanos:117052309}"
Jul 11 07:51:46.152029 kubelet[2804]: I0711 07:51:46.151823 2804 scope.go:117] "RemoveContainer" containerID="9574a6e24a786c6c060c869e1a13800552963f21845e3d6ffee6223d0bcc678e"
Jul 11 07:51:46.153306 kubelet[2804]: E0711 07:51:46.153183 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=tigera-operator pod=tigera-operator-5bf8dfcb4-mplsp_tigera-operator(b888df97-3c70-41ba-a3f5-7ac75508eb3b)\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-mplsp" podUID="b888df97-3c70-41ba-a3f5-7ac75508eb3b"
Jul 11 07:51:51.152902 kubelet[2804]: I0711 07:51:51.152807 2804 scope.go:117] "RemoveContainer" containerID="81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9"
Jul 11 07:51:51.336419 kubelet[2804]: E0711 07:51:51.154336 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(9d801a80cb49e408d2efc270d30c5fd8)\"" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="9d801a80cb49e408d2efc270d30c5fd8"
Jul 11 07:52:00.279682 containerd[1563]: time="2025-07-11T07:52:00.278660454Z" level=warning msg="container event discarded" container=111fad46105ad7b3f0e7f678122f669953a13fe50200655ba171f2d7ce594471 type=CONTAINER_DELETED_EVENT
Jul 11 07:52:00.308684 containerd[1563]: time="2025-07-11T07:52:00.308578055Z" level=warning msg="container event discarded" container=af366797c818e6d203806cdfe9b6c2585f6443cbbd20974bf6e1f7d7377eeccd type=CONTAINER_DELETED_EVENT
Jul 11 07:52:01.151528 kubelet[2804]: I0711 07:52:01.151430 2804 scope.go:117] "RemoveContainer" containerID="9574a6e24a786c6c060c869e1a13800552963f21845e3d6ffee6223d0bcc678e"
Jul 11 07:52:01.153320 kubelet[2804]: E0711 07:52:01.152589 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=tigera-operator pod=tigera-operator-5bf8dfcb4-mplsp_tigera-operator(b888df97-3c70-41ba-a3f5-7ac75508eb3b)\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-mplsp" podUID="b888df97-3c70-41ba-a3f5-7ac75508eb3b"
Jul 11 07:52:03.152095 kubelet[2804]: I0711 07:52:03.151735 2804 scope.go:117] "RemoveContainer" containerID="81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9"
Jul 11 07:52:19.013653 kubelet[2804]: E0711 07:52:03.153171 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(9d801a80cb49e408d2efc270d30c5fd8)\"" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="9d801a80cb49e408d2efc270d30c5fd8"
Jul 11 07:52:19.013653 kubelet[2804]: E0711 07:52:16.365858 2804 controller.go:195] "Failed to update lease" err="Put \"https://172.24.4.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4392-0-0-n-cdb6f4f5a9.novalocal?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 11 07:52:19.184570 containerd[1563]: time="2025-07-11T07:52:18.117088700Z" level=error msg="post event" error="context deadline exceeded"
Jul 11 07:52:19.209638 kubelet[2804]: I0711 07:52:19.207681 2804 scope.go:117] "RemoveContainer" containerID="9574a6e24a786c6c060c869e1a13800552963f21845e3d6ffee6223d0bcc678e"
Jul 11 07:52:19.216850 containerd[1563]: time="2025-07-11T07:52:19.216267652Z" level=error msg="ttrpc: received message on inactive stream" stream=171
Jul 11 07:52:19.237341 kubelet[2804]: E0711 07:52:19.235965 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=tigera-operator pod=tigera-operator-5bf8dfcb4-mplsp_tigera-operator(b888df97-3c70-41ba-a3f5-7ac75508eb3b)\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-mplsp" podUID="b888df97-3c70-41ba-a3f5-7ac75508eb3b"
Jul 11 07:52:19.246630 kubelet[2804]: I0711 07:52:19.246421 2804 scope.go:117] "RemoveContainer" containerID="81e6fe53d567b5a6df92704ad046bb728b4bf70f9b61d854be308956cf3608b9"
Jul 11 07:52:19.253770 kubelet[2804]: E0711 07:52:19.252219 2804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal_kube-system(9d801a80cb49e408d2efc270d30c5fd8)\"" pod="kube-system/kube-controller-manager-ci-4392-0-0-n-cdb6f4f5a9.novalocal" podUID="9d801a80cb49e408d2efc270d30c5fd8"
Jul 11 07:52:19.487827 containerd[1563]: time="2025-07-11T07:52:19.487747171Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58c7e895e32ae1c9f51feb3ae3069f4a699dc320d42c4d0768de5c151fc2eba\" id:\"a9123150b0f35a904717be91a758ca34082d47d960b0ff5c14fa2402b3c4f821\" pid:8323 exit_status:1 exited_at:{seconds:1752220339 nanos:486684620}"
Jul 11 07:52:19.513341 kubelet[2804]: E0711 07:52:19.513278 2804 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-4392-0-0-n-cdb6f4f5a9.novalocal\": the object has been modified; please apply your changes to the latest version and try again"
Jul 11 07:52:19.597349 containerd[1563]: time="2025-07-11T07:52:19.597263005Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"9fa9137b0242c1bd987ab6c38b1efdc41aa22cf85af6d84b932752eb6041d69f\" pid:8305 exited_at:{seconds:1752220339 nanos:595573022}"
Jul 11 07:52:19.633388 containerd[1563]: time="2025-07-11T07:52:19.633253914Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d135ea7e591d6b935ef0a73ae98ed4a9586f812a7b8429ba5f3b219880feba77\" id:\"526856d0dff69ceb0d2114e142b15054fe4728cfd9cb98eed87b9f7b7fad5c94\" pid:8331 exited_at:{seconds:1752220339 nanos:632310447}"
Jul 11 07:52:19.641171 containerd[1563]: time="2025-07-11T07:52:19.641109718Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1773bde37599c354ee60ac56d6986400caab55b316c61da9d70847984a866160\" id:\"44c638647561eaa7d5e62edc843d37bf14882620f6eec134a4f6d44c8f514240\" pid:8346 exited_at:{seconds:1752220339 nanos:640557689}"