Jul 10 08:05:48.965575 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jul 10 03:48:39 -00 2025
Jul 10 08:05:48.965600 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6f690b83334156407a81e8d4e91333490630194c4657a5a1ae6bc26eb28e6a0b
Jul 10 08:05:48.965612 kernel: BIOS-provided physical RAM map:
Jul 10 08:05:48.965622 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 10 08:05:48.965630 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 10 08:05:48.965638 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 10 08:05:48.965647 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jul 10 08:05:48.965656 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jul 10 08:05:48.965664 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 10 08:05:48.965672 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 10 08:05:48.965680 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jul 10 08:05:48.965688 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 10 08:05:48.965698 kernel: NX (Execute Disable) protection: active
Jul 10 08:05:48.965707 kernel: APIC: Static calls initialized
Jul 10 08:05:48.965716 kernel: SMBIOS 3.0.0 present.
Jul 10 08:05:48.965725 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jul 10 08:05:48.965734 kernel: DMI: Memory slots populated: 1/1
Jul 10 08:05:48.965744 kernel: Hypervisor detected: KVM
Jul 10 08:05:48.965753 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 10 08:05:48.965761 kernel: kvm-clock: using sched offset of 4882052054 cycles
Jul 10 08:05:48.965771 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 10 08:05:48.965780 kernel: tsc: Detected 1996.249 MHz processor
Jul 10 08:05:48.965789 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 10 08:05:48.965798 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 10 08:05:48.965807 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jul 10 08:05:48.965817 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 10 08:05:48.965828 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 10 08:05:48.965837 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jul 10 08:05:48.965846 kernel: ACPI: Early table checksum verification disabled
Jul 10 08:05:48.965854 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jul 10 08:05:48.965863 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 08:05:48.965872 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 08:05:48.965881 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 08:05:48.965890 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jul 10 08:05:48.965899 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 08:05:48.965909 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 08:05:48.965918 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jul 10 08:05:48.965927 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jul 10 08:05:48.965936 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jul 10 08:05:48.965974 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jul 10 08:05:48.965990 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jul 10 08:05:48.965999 kernel: No NUMA configuration found
Jul 10 08:05:48.966009 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jul 10 08:05:48.966018 kernel: NODE_DATA(0) allocated [mem 0x13fff5dc0-0x13fffcfff]
Jul 10 08:05:48.966026 kernel: Zone ranges:
Jul 10 08:05:48.966035 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 10 08:05:48.966044 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 10 08:05:48.966052 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jul 10 08:05:48.966061 kernel: Device empty
Jul 10 08:05:48.966069 kernel: Movable zone start for each node
Jul 10 08:05:48.966080 kernel: Early memory node ranges
Jul 10 08:05:48.966088 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 10 08:05:48.966097 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jul 10 08:05:48.966106 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jul 10 08:05:48.966114 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jul 10 08:05:48.966123 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 10 08:05:48.966132 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 10 08:05:48.966140 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jul 10 08:05:48.966149 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 10 08:05:48.966159 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 10 08:05:48.966168 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 10 08:05:48.966177 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 10 08:05:48.966186 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 10 08:05:48.966194 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 10 08:05:48.966203 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 10 08:05:48.966212 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 10 08:05:48.966220 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 10 08:05:48.966229 kernel: CPU topo: Max. logical packages: 2
Jul 10 08:05:48.966240 kernel: CPU topo: Max. logical dies: 2
Jul 10 08:05:48.966248 kernel: CPU topo: Max. dies per package: 1
Jul 10 08:05:48.966257 kernel: CPU topo: Max. threads per core: 1
Jul 10 08:05:48.966265 kernel: CPU topo: Num. cores per package: 1
Jul 10 08:05:48.966274 kernel: CPU topo: Num. threads per package: 1
Jul 10 08:05:48.966282 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 10 08:05:48.966291 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 10 08:05:48.966300 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jul 10 08:05:48.966308 kernel: Booting paravirtualized kernel on KVM
Jul 10 08:05:48.966319 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 10 08:05:48.966327 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 10 08:05:48.966336 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 10 08:05:48.966345 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 10 08:05:48.966353 kernel: pcpu-alloc: [0] 0 1
Jul 10 08:05:48.966362 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 10 08:05:48.966372 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6f690b83334156407a81e8d4e91333490630194c4657a5a1ae6bc26eb28e6a0b
Jul 10 08:05:48.966381 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 08:05:48.966392 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 10 08:05:48.966401 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 10 08:05:48.966409 kernel: Fallback order for Node 0: 0
Jul 10 08:05:48.966418 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Jul 10 08:05:48.966426 kernel: Policy zone: Normal
Jul 10 08:05:48.966435 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 08:05:48.966443 kernel: software IO TLB: area num 2.
Jul 10 08:05:48.966452 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 10 08:05:48.966461 kernel: ftrace: allocating 40097 entries in 157 pages
Jul 10 08:05:48.966471 kernel: ftrace: allocated 157 pages with 5 groups
Jul 10 08:05:48.966480 kernel: Dynamic Preempt: voluntary
Jul 10 08:05:48.966488 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 10 08:05:48.966498 kernel: rcu: RCU event tracing is enabled.
Jul 10 08:05:48.966507 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 10 08:05:48.966515 kernel: Trampoline variant of Tasks RCU enabled.
Jul 10 08:05:48.966524 kernel: Rude variant of Tasks RCU enabled.
Jul 10 08:05:48.966533 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 08:05:48.966541 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 08:05:48.966552 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 10 08:05:48.966561 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 08:05:48.966570 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 08:05:48.966578 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 08:05:48.966587 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 10 08:05:48.966596 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 10 08:05:48.966604 kernel: Console: colour VGA+ 80x25
Jul 10 08:05:48.966613 kernel: printk: legacy console [tty0] enabled
Jul 10 08:05:48.966621 kernel: printk: legacy console [ttyS0] enabled
Jul 10 08:05:48.966632 kernel: ACPI: Core revision 20240827
Jul 10 08:05:48.966640 kernel: APIC: Switch to symmetric I/O mode setup
Jul 10 08:05:48.966649 kernel: x2apic enabled
Jul 10 08:05:48.966657 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 10 08:05:48.966666 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 10 08:05:48.966675 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 10 08:05:48.966690 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jul 10 08:05:48.966701 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 10 08:05:48.966710 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 10 08:05:48.966719 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 10 08:05:48.966728 kernel: Spectre V2 : Mitigation: Retpolines
Jul 10 08:05:48.966737 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 10 08:05:48.966748 kernel: Speculative Store Bypass: Vulnerable
Jul 10 08:05:48.966757 kernel: x86/fpu: x87 FPU will use FXSAVE
Jul 10 08:05:48.966766 kernel: Freeing SMP alternatives memory: 32K
Jul 10 08:05:48.966775 kernel: pid_max: default: 32768 minimum: 301
Jul 10 08:05:48.966784 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 10 08:05:48.966795 kernel: landlock: Up and running.
Jul 10 08:05:48.966804 kernel: SELinux: Initializing.
Jul 10 08:05:48.966813 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 08:05:48.966822 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 08:05:48.966831 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jul 10 08:05:48.966841 kernel: Performance Events: AMD PMU driver.
Jul 10 08:05:48.966850 kernel: ... version:                0
Jul 10 08:05:48.966859 kernel: ... bit width:              48
Jul 10 08:05:48.966867 kernel: ... generic registers:      4
Jul 10 08:05:48.966879 kernel: ... value mask:             0000ffffffffffff
Jul 10 08:05:48.966888 kernel: ... max period:             00007fffffffffff
Jul 10 08:05:48.966897 kernel: ... fixed-purpose events:   0
Jul 10 08:05:48.966906 kernel: ... event mask:             000000000000000f
Jul 10 08:05:48.966914 kernel: signal: max sigframe size: 1440
Jul 10 08:05:48.966923 kernel: rcu: Hierarchical SRCU implementation.
Jul 10 08:05:48.966932 kernel: rcu: Max phase no-delay instances is 400.
Jul 10 08:05:48.966942 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 10 08:05:48.966967 kernel: smp: Bringing up secondary CPUs ...
Jul 10 08:05:48.967005 kernel: smpboot: x86: Booting SMP configuration:
Jul 10 08:05:48.967015 kernel: .... node #0, CPUs: #1
Jul 10 08:05:48.967024 kernel: smp: Brought up 1 node, 2 CPUs
Jul 10 08:05:48.967033 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jul 10 08:05:48.967043 kernel: Memory: 3961272K/4193772K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54600K init, 2368K bss, 227296K reserved, 0K cma-reserved)
Jul 10 08:05:48.967052 kernel: devtmpfs: initialized
Jul 10 08:05:48.967061 kernel: x86/mm: Memory block size: 128MB
Jul 10 08:05:48.967070 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 10 08:05:48.967079 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 10 08:05:48.967090 kernel: pinctrl core: initialized pinctrl subsystem
Jul 10 08:05:48.967099 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 10 08:05:48.967108 kernel: audit: initializing netlink subsys (disabled)
Jul 10 08:05:48.967118 kernel: audit: type=2000 audit(1752134745.014:1): state=initialized audit_enabled=0 res=1
Jul 10 08:05:48.967126 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 10 08:05:48.967135 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 10 08:05:48.967144 kernel: cpuidle: using governor menu
Jul 10 08:05:48.967153 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 10 08:05:48.967162 kernel: dca service started, version 1.12.1
Jul 10 08:05:48.967175 kernel: PCI: Using configuration type 1 for base access
Jul 10 08:05:48.967184 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 10 08:05:48.967193 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 10 08:05:48.967202 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 10 08:05:48.967211 kernel: ACPI: Added _OSI(Module Device)
Jul 10 08:05:48.967220 kernel: ACPI: Added _OSI(Processor Device)
Jul 10 08:05:48.967229 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 10 08:05:48.967238 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 10 08:05:48.967247 kernel: ACPI: Interpreter enabled
Jul 10 08:05:48.967258 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 10 08:05:48.967267 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 10 08:05:48.967276 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 10 08:05:48.967285 kernel: PCI: Using E820 reservations for host bridge windows
Jul 10 08:05:48.967294 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 10 08:05:48.967303 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 10 08:05:48.967437 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 10 08:05:48.967526 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 10 08:05:48.967614 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 10 08:05:48.967628 kernel: acpiphp: Slot [3] registered
Jul 10 08:05:48.967637 kernel: acpiphp: Slot [4] registered
Jul 10 08:05:48.967646 kernel: acpiphp: Slot [5] registered
Jul 10 08:05:48.967655 kernel: acpiphp: Slot [6] registered
Jul 10 08:05:48.967664 kernel: acpiphp: Slot [7] registered
Jul 10 08:05:48.967672 kernel: acpiphp: Slot [8] registered
Jul 10 08:05:48.967681 kernel: acpiphp: Slot [9] registered
Jul 10 08:05:48.967690 kernel: acpiphp: Slot [10] registered
Jul 10 08:05:48.967701 kernel: acpiphp: Slot [11] registered
Jul 10 08:05:48.967710 kernel: acpiphp: Slot [12] registered
Jul 10 08:05:48.967719 kernel: acpiphp: Slot [13] registered
Jul 10 08:05:48.967728 kernel: acpiphp: Slot [14] registered
Jul 10 08:05:48.967737 kernel: acpiphp: Slot [15] registered
Jul 10 08:05:48.967746 kernel: acpiphp: Slot [16] registered
Jul 10 08:05:48.967755 kernel: acpiphp: Slot [17] registered
Jul 10 08:05:48.967763 kernel: acpiphp: Slot [18] registered
Jul 10 08:05:48.967772 kernel: acpiphp: Slot [19] registered
Jul 10 08:05:48.967783 kernel: acpiphp: Slot [20] registered
Jul 10 08:05:48.967791 kernel: acpiphp: Slot [21] registered
Jul 10 08:05:48.967800 kernel: acpiphp: Slot [22] registered
Jul 10 08:05:48.967809 kernel: acpiphp: Slot [23] registered
Jul 10 08:05:48.967818 kernel: acpiphp: Slot [24] registered
Jul 10 08:05:48.967827 kernel: acpiphp: Slot [25] registered
Jul 10 08:05:48.967836 kernel: acpiphp: Slot [26] registered
Jul 10 08:05:48.967844 kernel: acpiphp: Slot [27] registered
Jul 10 08:05:48.967853 kernel: acpiphp: Slot [28] registered
Jul 10 08:05:48.967862 kernel: acpiphp: Slot [29] registered
Jul 10 08:05:48.967873 kernel: acpiphp: Slot [30] registered
Jul 10 08:05:48.967881 kernel: acpiphp: Slot [31] registered
Jul 10 08:05:48.967890 kernel: PCI host bridge to bus 0000:00
Jul 10 08:05:48.968002 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 10 08:05:48.968082 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 10 08:05:48.968157 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 10 08:05:48.968231 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 10 08:05:48.968310 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jul 10 08:05:48.968383 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 10 08:05:48.968502 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jul 10 08:05:48.968608 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jul 10 08:05:48.968714 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jul 10 08:05:48.968807 kernel: pci 0000:00:01.1: BAR 4 [io 0xc120-0xc12f]
Jul 10 08:05:48.968902 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Jul 10 08:05:48.972050 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Jul 10 08:05:48.972152 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Jul 10 08:05:48.972239 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Jul 10 08:05:48.972336 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jul 10 08:05:48.972444 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 10 08:05:48.972536 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 10 08:05:48.972648 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jul 10 08:05:48.972743 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jul 10 08:05:48.972837 kernel: pci 0000:00:02.0: BAR 2 [mem 0xc000000000-0xc000003fff 64bit pref]
Jul 10 08:05:48.972930 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jul 10 08:05:48.973079 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jul 10 08:05:48.973174 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 10 08:05:48.973276 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 10 08:05:48.973378 kernel: pci 0000:00:03.0: BAR 0 [io 0xc080-0xc0bf]
Jul 10 08:05:48.973471 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jul 10 08:05:48.973563 kernel: pci 0000:00:03.0: BAR 4 [mem 0xc000004000-0xc000007fff 64bit pref]
Jul 10 08:05:48.973655 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jul 10 08:05:48.973761 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 10 08:05:48.973857 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
Jul 10 08:05:48.975013 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jul 10 08:05:48.975115 kernel: pci 0000:00:04.0: BAR 4 [mem 0xc000008000-0xc00000bfff 64bit pref]
Jul 10 08:05:48.975214 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jul 10 08:05:48.975305 kernel: pci 0000:00:05.0: BAR 0 [io 0xc0c0-0xc0ff]
Jul 10 08:05:48.975393 kernel: pci 0000:00:05.0: BAR 4 [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jul 10 08:05:48.975489 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 10 08:05:48.975579 kernel: pci 0000:00:06.0: BAR 0 [io 0xc100-0xc11f]
Jul 10 08:05:48.975672 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfeb93000-0xfeb93fff]
Jul 10 08:05:48.975759 kernel: pci 0000:00:06.0: BAR 4 [mem 0xc000010000-0xc000013fff 64bit pref]
Jul 10 08:05:48.975772 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 10 08:05:48.975782 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 10 08:05:48.975791 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 10 08:05:48.975800 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 10 08:05:48.975810 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 10 08:05:48.975819 kernel: iommu: Default domain type: Translated
Jul 10 08:05:48.975828 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 10 08:05:48.975841 kernel: PCI: Using ACPI for IRQ routing
Jul 10 08:05:48.975850 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 10 08:05:48.975859 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 10 08:05:48.975868 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jul 10 08:05:48.976462 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 10 08:05:48.976561 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 10 08:05:48.976648 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 10 08:05:48.976661 kernel: vgaarb: loaded
Jul 10 08:05:48.976674 kernel: clocksource: Switched to clocksource kvm-clock
Jul 10 08:05:48.976684 kernel: VFS: Disk quotas dquot_6.6.0
Jul 10 08:05:48.976693 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 10 08:05:48.976702 kernel: pnp: PnP ACPI init
Jul 10 08:05:48.976799 kernel: pnp 00:03: [dma 2]
Jul 10 08:05:48.976814 kernel: pnp: PnP ACPI: found 5 devices
Jul 10 08:05:48.976824 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 10 08:05:48.976833 kernel: NET: Registered PF_INET protocol family
Jul 10 08:05:48.976842 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 10 08:05:48.976854 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 10 08:05:48.976864 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 10 08:05:48.976873 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 10 08:05:48.976883 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 10 08:05:48.976892 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 10 08:05:48.976901 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 08:05:48.976910 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 08:05:48.976919 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 10 08:05:48.976929 kernel: NET: Registered PF_XDP protocol family
Jul 10 08:05:48.977514 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 10 08:05:48.977597 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 10 08:05:48.977673 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 10 08:05:48.977748 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jul 10 08:05:48.977822 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jul 10 08:05:48.977912 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 10 08:05:48.978021 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 10 08:05:48.978040 kernel: PCI: CLS 0 bytes, default 64
Jul 10 08:05:48.978049 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 10 08:05:48.978059 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jul 10 08:05:48.978068 kernel: Initialise system trusted keyrings
Jul 10 08:05:48.978077 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 10 08:05:48.978087 kernel: Key type asymmetric registered
Jul 10 08:05:48.978096 kernel: Asymmetric key parser 'x509' registered
Jul 10 08:05:48.978105 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 10 08:05:48.978114 kernel: io scheduler mq-deadline registered
Jul 10 08:05:48.978126 kernel: io scheduler kyber registered
Jul 10 08:05:48.978135 kernel: io scheduler bfq registered
Jul 10 08:05:48.978144 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 10 08:05:48.978155 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jul 10 08:05:48.978164 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 10 08:05:48.978174 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 10 08:05:48.978183 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 10 08:05:48.978192 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 10 08:05:48.978202 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 10 08:05:48.978213 kernel: random: crng init done
Jul 10 08:05:48.978222 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 10 08:05:48.978231 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 10 08:05:48.978240 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 10 08:05:48.978250 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 10 08:05:48.978355 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 10 08:05:48.978437 kernel: rtc_cmos 00:04: registered as rtc0
Jul 10 08:05:48.978515 kernel: rtc_cmos 00:04: setting system clock to 2025-07-10T08:05:48 UTC (1752134748)
Jul 10 08:05:48.978597 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 10 08:05:48.978611 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 10 08:05:48.978620 kernel: NET: Registered PF_INET6 protocol family
Jul 10 08:05:48.978629 kernel: Segment Routing with IPv6
Jul 10 08:05:48.978639 kernel: In-situ OAM (IOAM) with IPv6
Jul 10 08:05:48.978648 kernel: NET: Registered PF_PACKET protocol family
Jul 10 08:05:48.978657 kernel: Key type dns_resolver registered
Jul 10 08:05:48.978666 kernel: IPI shorthand broadcast: enabled
Jul 10 08:05:48.978675 kernel: sched_clock: Marking stable (3771007599, 189473721)->(3971728907, -11247587)
Jul 10 08:05:48.978687 kernel: registered taskstats version 1
Jul 10 08:05:48.978697 kernel: Loading compiled-in X.509 certificates
Jul 10 08:05:48.978706 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 0b89e0dc22b3b76335f64d75ef999e68b43a7102'
Jul 10 08:05:48.978715 kernel: Demotion targets for Node 0: null
Jul 10 08:05:48.978724 kernel: Key type .fscrypt registered
Jul 10 08:05:48.978733 kernel: Key type fscrypt-provisioning registered
Jul 10 08:05:48.978742 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 10 08:05:48.978751 kernel: ima: Allocated hash algorithm: sha1
Jul 10 08:05:48.978760 kernel: ima: No architecture policies found
Jul 10 08:05:48.978770 kernel: clk: Disabling unused clocks
Jul 10 08:05:48.978779 kernel: Warning: unable to open an initial console.
Jul 10 08:05:48.978789 kernel: Freeing unused kernel image (initmem) memory: 54600K
Jul 10 08:05:48.978798 kernel: Write protecting the kernel read-only data: 24576k
Jul 10 08:05:48.978807 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 10 08:05:48.978816 kernel: Run /init as init process
Jul 10 08:05:48.978826 kernel:   with arguments:
Jul 10 08:05:48.978835 kernel:     /init
Jul 10 08:05:48.978844 kernel:   with environment:
Jul 10 08:05:48.978854 kernel:     HOME=/
Jul 10 08:05:48.978863 kernel:     TERM=linux
Jul 10 08:05:48.978872 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 10 08:05:48.978882 systemd[1]: Successfully made /usr/ read-only.
Jul 10 08:05:48.978895 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 08:05:48.978906 systemd[1]: Detected virtualization kvm.
Jul 10 08:05:48.978916 systemd[1]: Detected architecture x86-64.
Jul 10 08:05:48.978934 systemd[1]: Running in initrd.
Jul 10 08:05:48.978967 systemd[1]: No hostname configured, using default hostname.
Jul 10 08:05:48.978979 systemd[1]: Hostname set to .
Jul 10 08:05:48.978989 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 08:05:48.978999 systemd[1]: Queued start job for default target initrd.target.
Jul 10 08:05:48.979009 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 08:05:48.979023 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 08:05:48.979034 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 10 08:05:48.979044 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 08:05:48.979054 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 10 08:05:48.979065 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 10 08:05:48.979076 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 10 08:05:48.979087 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 10 08:05:48.979099 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 08:05:48.979109 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 08:05:48.979119 systemd[1]: Reached target paths.target - Path Units.
Jul 10 08:05:48.979129 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 08:05:48.979139 systemd[1]: Reached target swap.target - Swaps.
Jul 10 08:05:48.979149 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 08:05:48.979159 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 08:05:48.979169 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 08:05:48.979181 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 10 08:05:48.979191 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 10 08:05:48.979201 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 08:05:48.979212 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 08:05:48.979223 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 08:05:48.979233 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 08:05:48.979243 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 10 08:05:48.979253 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 08:05:48.979265 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 10 08:05:48.979277 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 10 08:05:48.979288 systemd[1]: Starting systemd-fsck-usr.service... Jul 10 08:05:48.979300 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 08:05:48.979310 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 08:05:48.979320 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 08:05:48.979332 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 10 08:05:48.979342 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 08:05:48.979353 systemd[1]: Finished systemd-fsck-usr.service. Jul 10 08:05:48.979363 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 08:05:48.979394 systemd-journald[214]: Collecting audit messages is disabled. Jul 10 08:05:48.979420 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 08:05:48.979432 systemd-journald[214]: Journal started Jul 10 08:05:48.979459 systemd-journald[214]: Runtime Journal (/run/log/journal/bcdb1296d80a4e269373ef4785a0aeff) is 8M, max 78.5M, 70.5M free. Jul 10 08:05:48.958389 systemd-modules-load[215]: Inserted module 'overlay' Jul 10 08:05:48.989932 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 08:05:48.996975 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Jul 10 08:05:48.999184 systemd-modules-load[215]: Inserted module 'br_netfilter' Jul 10 08:05:49.033406 kernel: Bridge firewalling registered Jul 10 08:05:49.033183 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 08:05:49.034010 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 08:05:49.037705 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 10 08:05:49.039051 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 08:05:49.043067 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 08:05:49.049051 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 08:05:49.059366 systemd-tmpfiles[234]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 10 08:05:49.062159 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 08:05:49.065474 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 08:05:49.067093 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 08:05:49.073087 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 08:05:49.074856 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 08:05:49.078066 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jul 10 08:05:49.101214 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6f690b83334156407a81e8d4e91333490630194c4657a5a1ae6bc26eb28e6a0b Jul 10 08:05:49.123241 systemd-resolved[249]: Positive Trust Anchors: Jul 10 08:05:49.123915 systemd-resolved[249]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 08:05:49.123972 systemd-resolved[249]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 08:05:49.129631 systemd-resolved[249]: Defaulting to hostname 'linux'. Jul 10 08:05:49.130525 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 08:05:49.131372 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 08:05:49.190982 kernel: SCSI subsystem initialized Jul 10 08:05:49.201981 kernel: Loading iSCSI transport class v2.0-870. 
Jul 10 08:05:49.213988 kernel: iscsi: registered transport (tcp) Jul 10 08:05:49.237021 kernel: iscsi: registered transport (qla4xxx) Jul 10 08:05:49.237097 kernel: QLogic iSCSI HBA Driver Jul 10 08:05:49.262579 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 08:05:49.282832 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 08:05:49.286622 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 08:05:49.348426 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 10 08:05:49.354398 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 10 08:05:49.448054 kernel: raid6: sse2x4 gen() 5646 MB/s Jul 10 08:05:49.466051 kernel: raid6: sse2x2 gen() 15137 MB/s Jul 10 08:05:49.484538 kernel: raid6: sse2x1 gen() 9801 MB/s Jul 10 08:05:49.484600 kernel: raid6: using algorithm sse2x2 gen() 15137 MB/s Jul 10 08:05:49.503437 kernel: raid6: .... xor() 9205 MB/s, rmw enabled Jul 10 08:05:49.503500 kernel: raid6: using ssse3x2 recovery algorithm Jul 10 08:05:49.526754 kernel: xor: measuring software checksum speed Jul 10 08:05:49.526823 kernel: prefetch64-sse : 17268 MB/sec Jul 10 08:05:49.527229 kernel: generic_sse : 16839 MB/sec Jul 10 08:05:49.528357 kernel: xor: using function: prefetch64-sse (17268 MB/sec) Jul 10 08:05:49.730268 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 10 08:05:49.739262 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 10 08:05:49.744608 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 08:05:49.769442 systemd-udevd[461]: Using default interface naming scheme 'v255'. Jul 10 08:05:49.775365 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 08:05:49.782432 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jul 10 08:05:49.810004 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation Jul 10 08:05:49.845131 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 08:05:49.851367 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 08:05:49.909784 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 08:05:49.912518 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 10 08:05:50.005970 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jul 10 08:05:50.012265 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Jul 10 08:05:50.032191 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 10 08:05:50.032238 kernel: GPT:17805311 != 20971519 Jul 10 08:05:50.032250 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 10 08:05:50.032262 kernel: GPT:17805311 != 20971519 Jul 10 08:05:50.032280 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 10 08:05:50.032291 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 08:05:50.040984 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 10 08:05:50.047045 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 08:05:50.047195 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 08:05:50.049297 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 08:05:50.057212 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 08:05:50.059680 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 10 08:05:50.062021 kernel: libata version 3.00 loaded. 
Jul 10 08:05:50.065698 kernel: ata_piix 0000:00:01.1: version 2.13 Jul 10 08:05:50.069060 kernel: scsi host0: ata_piix Jul 10 08:05:50.072979 kernel: scsi host1: ata_piix Jul 10 08:05:50.081317 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 lpm-pol 0 Jul 10 08:05:50.081346 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 lpm-pol 0 Jul 10 08:05:50.122028 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 10 08:05:50.150666 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 08:05:50.169342 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 10 08:05:50.179966 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 10 08:05:50.188399 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 10 08:05:50.189029 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 10 08:05:50.192043 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 10 08:05:50.233408 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 08:05:50.236128 disk-uuid[562]: Primary Header is updated. Jul 10 08:05:50.236128 disk-uuid[562]: Secondary Entries is updated. Jul 10 08:05:50.236128 disk-uuid[562]: Secondary Header is updated. Jul 10 08:05:50.388337 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 10 08:05:50.392349 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 08:05:50.393021 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 08:05:50.394223 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Jul 10 08:05:50.396209 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 10 08:05:50.420858 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 10 08:05:51.268029 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 08:05:51.271596 disk-uuid[563]: The operation has completed successfully. Jul 10 08:05:51.349831 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 08:05:51.349942 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 10 08:05:51.397053 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 10 08:05:51.423176 sh[587]: Success Jul 10 08:05:51.467312 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 08:05:51.467417 kernel: device-mapper: uevent: version 1.0.3 Jul 10 08:05:51.470799 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 10 08:05:51.499066 kernel: device-mapper: verity: sha256 using shash "sha256-ssse3" Jul 10 08:05:51.574432 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 10 08:05:51.581122 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 10 08:05:51.602083 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 10 08:05:51.629018 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 10 08:05:51.637057 kernel: BTRFS: device fsid 511ba16f-9623-4757-a014-7759f3bcc596 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (599) Jul 10 08:05:51.645035 kernel: BTRFS info (device dm-0): first mount of filesystem 511ba16f-9623-4757-a014-7759f3bcc596 Jul 10 08:05:51.645098 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 10 08:05:51.649202 kernel: BTRFS info (device dm-0): using free-space-tree Jul 10 08:05:51.669204 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 10 08:05:51.671238 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 10 08:05:51.673204 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 10 08:05:51.676169 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 10 08:05:51.683169 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 10 08:05:51.737024 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (634) Jul 10 08:05:51.744910 kernel: BTRFS info (device vda6): first mount of filesystem 6f2f9b2c-a9fa-4b0f-b4c7-59337f1e3021 Jul 10 08:05:51.745016 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 08:05:51.747109 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 08:05:51.763048 kernel: BTRFS info (device vda6): last unmount of filesystem 6f2f9b2c-a9fa-4b0f-b4c7-59337f1e3021 Jul 10 08:05:51.766489 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 10 08:05:51.771066 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 10 08:05:51.830278 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jul 10 08:05:51.833824 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 08:05:51.867022 systemd-networkd[768]: lo: Link UP Jul 10 08:05:51.867030 systemd-networkd[768]: lo: Gained carrier Jul 10 08:05:51.868105 systemd-networkd[768]: Enumeration completed Jul 10 08:05:51.869270 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 08:05:51.869333 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 08:05:51.869337 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 08:05:51.870043 systemd[1]: Reached target network.target - Network. Jul 10 08:05:51.871182 systemd-networkd[768]: eth0: Link UP Jul 10 08:05:51.871186 systemd-networkd[768]: eth0: Gained carrier Jul 10 08:05:51.871197 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 08:05:51.886237 systemd-networkd[768]: eth0: DHCPv4 address 172.24.4.5/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jul 10 08:05:51.981224 ignition[697]: Ignition 2.21.0 Jul 10 08:05:51.981242 ignition[697]: Stage: fetch-offline Jul 10 08:05:51.981276 ignition[697]: no configs at "/usr/lib/ignition/base.d" Jul 10 08:05:51.981284 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 10 08:05:51.983222 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 08:05:51.981367 ignition[697]: parsed url from cmdline: "" Jul 10 08:05:51.986203 systemd-resolved[249]: Detected conflict on linux IN A 172.24.4.5 Jul 10 08:05:51.981372 ignition[697]: no config URL provided Jul 10 08:05:51.986213 systemd-resolved[249]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. 
Jul 10 08:05:51.981377 ignition[697]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 08:05:51.987137 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 10 08:05:51.981385 ignition[697]: no config at "/usr/lib/ignition/user.ign" Jul 10 08:05:51.981390 ignition[697]: failed to fetch config: resource requires networking Jul 10 08:05:51.981655 ignition[697]: Ignition finished successfully Jul 10 08:05:52.010155 ignition[780]: Ignition 2.21.0 Jul 10 08:05:52.010172 ignition[780]: Stage: fetch Jul 10 08:05:52.010332 ignition[780]: no configs at "/usr/lib/ignition/base.d" Jul 10 08:05:52.010344 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 10 08:05:52.010438 ignition[780]: parsed url from cmdline: "" Jul 10 08:05:52.010442 ignition[780]: no config URL provided Jul 10 08:05:52.010448 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 08:05:52.010456 ignition[780]: no config at "/usr/lib/ignition/user.ign" Jul 10 08:05:52.010571 ignition[780]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jul 10 08:05:52.011393 ignition[780]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jul 10 08:05:52.011454 ignition[780]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jul 10 08:05:52.406859 ignition[780]: GET result: OK Jul 10 08:05:52.408796 ignition[780]: parsing config with SHA512: a9cc73f30d347976aa983fe2763279302dea8ab8404941a304c526b7b56db33dc878f95caee6206e4f7321b9db0d8645c3a32f7e6c584f5981a25c380c4f6255 Jul 10 08:05:52.423037 unknown[780]: fetched base config from "system" Jul 10 08:05:52.423049 unknown[780]: fetched base config from "system" Jul 10 08:05:52.423554 ignition[780]: fetch: fetch complete Jul 10 08:05:52.423055 unknown[780]: fetched user config from "openstack" Jul 10 08:05:52.423560 ignition[780]: fetch: fetch passed Jul 10 08:05:52.426343 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
Jul 10 08:05:52.423602 ignition[780]: Ignition finished successfully Jul 10 08:05:52.431100 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 10 08:05:52.465213 ignition[787]: Ignition 2.21.0 Jul 10 08:05:52.465230 ignition[787]: Stage: kargs Jul 10 08:05:52.465394 ignition[787]: no configs at "/usr/lib/ignition/base.d" Jul 10 08:05:52.470375 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 10 08:05:52.465406 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 10 08:05:52.466583 ignition[787]: kargs: kargs passed Jul 10 08:05:52.474094 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 10 08:05:52.466634 ignition[787]: Ignition finished successfully Jul 10 08:05:52.515697 ignition[794]: Ignition 2.21.0 Jul 10 08:05:52.517260 ignition[794]: Stage: disks Jul 10 08:05:52.517483 ignition[794]: no configs at "/usr/lib/ignition/base.d" Jul 10 08:05:52.517495 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 10 08:05:52.519222 ignition[794]: disks: disks passed Jul 10 08:05:52.520707 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 10 08:05:52.519285 ignition[794]: Ignition finished successfully Jul 10 08:05:52.522773 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 10 08:05:52.524091 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 10 08:05:52.525912 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 08:05:52.527638 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 08:05:52.529841 systemd[1]: Reached target basic.target - Basic System. Jul 10 08:05:52.532839 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jul 10 08:05:52.577538 systemd-fsck[802]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jul 10 08:05:52.590280 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 10 08:05:52.596526 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 10 08:05:52.766970 kernel: EXT4-fs (vda9): mounted filesystem f2872d8e-bdd9-4186-89ae-300fdf795a28 r/w with ordered data mode. Quota mode: none. Jul 10 08:05:52.767982 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 10 08:05:52.768976 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 10 08:05:52.771061 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 08:05:52.785612 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 10 08:05:52.787768 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 10 08:05:52.790060 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jul 10 08:05:52.791820 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 08:05:52.791850 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 08:05:52.796985 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (810) Jul 10 08:05:52.799977 kernel: BTRFS info (device vda6): first mount of filesystem 6f2f9b2c-a9fa-4b0f-b4c7-59337f1e3021 Jul 10 08:05:52.801066 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 10 08:05:52.808554 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 08:05:52.808596 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 08:05:52.814188 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jul 10 08:05:52.829694 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 10 08:05:52.964986 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 10 08:05:52.971750 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 08:05:52.979491 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Jul 10 08:05:52.989763 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 08:05:53.001183 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 08:05:53.148889 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 10 08:05:53.153737 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 10 08:05:53.156043 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 10 08:05:53.167867 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 10 08:05:53.171987 kernel: BTRFS info (device vda6): last unmount of filesystem 6f2f9b2c-a9fa-4b0f-b4c7-59337f1e3021 Jul 10 08:05:53.190650 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 10 08:05:53.199006 ignition[928]: INFO : Ignition 2.21.0 Jul 10 08:05:53.199006 ignition[928]: INFO : Stage: mount Jul 10 08:05:53.200159 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 08:05:53.200159 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 10 08:05:53.201756 ignition[928]: INFO : mount: mount passed Jul 10 08:05:53.202267 ignition[928]: INFO : Ignition finished successfully Jul 10 08:05:53.203044 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Jul 10 08:05:53.825571 systemd-networkd[768]: eth0: Gained IPv6LL Jul 10 08:05:54.006047 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 10 08:05:56.022023 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 10 08:06:00.042682 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 10 08:06:00.060739 coreos-metadata[812]: Jul 10 08:06:00.060 WARN failed to locate config-drive, using the metadata service API instead Jul 10 08:06:00.108078 coreos-metadata[812]: Jul 10 08:06:00.107 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 10 08:06:00.126165 coreos-metadata[812]: Jul 10 08:06:00.126 INFO Fetch successful Jul 10 08:06:00.127129 coreos-metadata[812]: Jul 10 08:06:00.126 INFO wrote hostname ci-4391-0-0-n-29a01ddc69.novalocal to /sysroot/etc/hostname Jul 10 08:06:00.138615 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jul 10 08:06:00.139232 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jul 10 08:06:00.145138 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 10 08:06:00.189179 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 08:06:00.237046 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (945) Jul 10 08:06:00.247192 kernel: BTRFS info (device vda6): first mount of filesystem 6f2f9b2c-a9fa-4b0f-b4c7-59337f1e3021 Jul 10 08:06:00.247277 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 08:06:00.252825 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 08:06:00.270306 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 10 08:06:00.340408 ignition[963]: INFO : Ignition 2.21.0 Jul 10 08:06:00.341461 ignition[963]: INFO : Stage: files Jul 10 08:06:00.342382 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 08:06:00.345006 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 10 08:06:00.345717 ignition[963]: DEBUG : files: compiled without relabeling support, skipping Jul 10 08:06:00.350156 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 10 08:06:00.350979 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 10 08:06:00.361916 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 10 08:06:00.363201 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 10 08:06:00.364940 unknown[963]: wrote ssh authorized keys file for user: core Jul 10 08:06:00.365732 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 10 08:06:00.376290 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 10 08:06:00.376290 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 10 08:06:00.983179 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 10 08:06:06.170145 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 10 08:06:06.184074 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 10 08:06:06.184074 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 10 
08:06:06.184074 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 10 08:06:06.184074 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 10 08:06:06.184074 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 08:06:06.194850 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 08:06:06.194850 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 08:06:06.194850 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 08:06:06.201441 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 08:06:06.203701 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 08:06:06.203701 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 10 08:06:06.208756 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 10 08:06:06.208756 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 10 08:06:06.208756 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 10 08:06:07.000210 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 10 08:06:09.205834 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 10 08:06:09.205834 ignition[963]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 10 08:06:09.211277 ignition[963]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 08:06:09.217145 ignition[963]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 08:06:09.217145 ignition[963]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 10 08:06:09.217145 ignition[963]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 10 08:06:09.222706 ignition[963]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 10 08:06:09.222706 ignition[963]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 10 08:06:09.222706 ignition[963]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 10 08:06:09.222706 ignition[963]: INFO : files: files passed Jul 10 08:06:09.222706 ignition[963]: INFO : Ignition finished successfully Jul 10 08:06:09.226379 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 10 08:06:09.237923 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 10 08:06:09.242277 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jul 10 08:06:09.267403 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 10 08:06:09.267659 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 10 08:06:09.272396 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 08:06:09.272396 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 10 08:06:09.277708 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 08:06:09.280558 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 08:06:09.283337 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 10 08:06:09.288435 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 10 08:06:09.356722 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 10 08:06:09.357069 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 10 08:06:09.360297 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 10 08:06:09.362878 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 10 08:06:09.366060 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 10 08:06:09.368091 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 10 08:06:09.417479 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 10 08:06:09.424710 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 10 08:06:09.467996 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 10 08:06:09.471334 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jul 10 08:06:09.473176 systemd[1]: Stopped target timers.target - Timer Units.
Jul 10 08:06:09.475062 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 10 08:06:09.475530 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 08:06:09.479046 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 10 08:06:09.481118 systemd[1]: Stopped target basic.target - Basic System.
Jul 10 08:06:09.483628 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 10 08:06:09.486752 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 08:06:09.489460 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 10 08:06:09.492220 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 10 08:06:09.495370 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 10 08:06:09.498428 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 08:06:09.501436 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 10 08:06:09.504581 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 10 08:06:09.507516 systemd[1]: Stopped target swap.target - Swaps.
Jul 10 08:06:09.510459 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 10 08:06:09.510918 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 08:06:09.514251 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 10 08:06:09.516368 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 08:06:09.518891 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 10 08:06:09.519466 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 08:06:09.521582 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 10 08:06:09.522011 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 10 08:06:09.525900 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 10 08:06:09.526434 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 08:06:09.530316 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 10 08:06:09.530757 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 10 08:06:09.536390 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 10 08:06:09.538776 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 10 08:06:09.541418 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 08:06:09.551253 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 10 08:06:09.555801 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 10 08:06:09.556344 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 08:06:09.560262 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 10 08:06:09.560644 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 08:06:09.568132 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 10 08:06:09.570146 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 10 08:06:09.595154 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 10 08:06:09.602353 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 10 08:06:09.602488 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 10 08:06:09.606327 ignition[1017]: INFO : Ignition 2.21.0
Jul 10 08:06:09.606327 ignition[1017]: INFO : Stage: umount
Jul 10 08:06:09.607497 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 08:06:09.607497 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 10 08:06:09.609858 ignition[1017]: INFO : umount: umount passed
Jul 10 08:06:09.609858 ignition[1017]: INFO : Ignition finished successfully
Jul 10 08:06:09.609625 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 10 08:06:09.609728 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 10 08:06:09.610674 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 10 08:06:09.610757 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 10 08:06:09.611490 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 10 08:06:09.611549 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 10 08:06:09.612537 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 10 08:06:09.612584 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 10 08:06:09.613585 systemd[1]: Stopped target network.target - Network.
Jul 10 08:06:09.614541 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 10 08:06:09.614591 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 08:06:09.615560 systemd[1]: Stopped target paths.target - Path Units.
Jul 10 08:06:09.616500 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 10 08:06:09.616744 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 08:06:09.617608 systemd[1]: Stopped target slices.target - Slice Units.
Jul 10 08:06:09.618603 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 10 08:06:09.619589 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 10 08:06:09.619643 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 08:06:09.620582 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 10 08:06:09.620655 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 08:06:09.621631 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 10 08:06:09.621693 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 10 08:06:09.622780 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 10 08:06:09.622830 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 10 08:06:09.623990 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 10 08:06:09.624038 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 10 08:06:09.625217 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 10 08:06:09.626772 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 10 08:06:09.640095 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 10 08:06:09.640216 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 10 08:06:09.644300 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 10 08:06:09.644542 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 10 08:06:09.644686 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 10 08:06:09.646940 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 10 08:06:09.647384 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 10 08:06:09.648366 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 10 08:06:09.648418 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 08:06:09.650421 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 10 08:06:09.651571 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 10 08:06:09.651620 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 08:06:09.654315 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 10 08:06:09.654361 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 10 08:06:09.655849 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 10 08:06:09.655897 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 10 08:06:09.656675 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 10 08:06:09.656719 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 08:06:09.660154 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 08:06:09.662481 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 10 08:06:09.662554 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 10 08:06:09.675682 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 10 08:06:09.677373 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 08:06:09.678341 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 10 08:06:09.678383 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 10 08:06:09.679671 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 10 08:06:09.679706 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 08:06:09.680791 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 10 08:06:09.680848 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 08:06:09.682402 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 10 08:06:09.682448 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 10 08:06:09.683562 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 10 08:06:09.683610 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 08:06:09.685518 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 10 08:06:09.686599 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 10 08:06:09.686652 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 08:06:09.691069 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 10 08:06:09.691119 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 08:06:09.692685 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 08:06:09.692744 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 08:06:09.696157 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 10 08:06:09.696212 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 10 08:06:09.696257 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 10 08:06:09.696555 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 10 08:06:09.701049 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 10 08:06:09.706153 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 10 08:06:09.706925 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 10 08:06:09.708398 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 10 08:06:09.710680 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 10 08:06:09.724229 systemd[1]: Switching root.
Jul 10 08:06:09.768338 systemd-journald[214]: Journal stopped
Jul 10 08:06:11.957325 systemd-journald[214]: Received SIGTERM from PID 1 (systemd).
Jul 10 08:06:11.957417 kernel: SELinux: policy capability network_peer_controls=1
Jul 10 08:06:11.957450 kernel: SELinux: policy capability open_perms=1
Jul 10 08:06:11.957463 kernel: SELinux: policy capability extended_socket_class=1
Jul 10 08:06:11.961040 kernel: SELinux: policy capability always_check_network=0
Jul 10 08:06:11.961059 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 10 08:06:11.961082 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 10 08:06:11.961096 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 10 08:06:11.961107 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 10 08:06:11.961131 kernel: SELinux: policy capability userspace_initial_context=0
Jul 10 08:06:11.961144 kernel: audit: type=1403 audit(1752134770.812:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 10 08:06:11.961158 systemd[1]: Successfully loaded SELinux policy in 122.674ms.
Jul 10 08:06:11.961197 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.313ms.
Jul 10 08:06:11.961213 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 08:06:11.961227 systemd[1]: Detected virtualization kvm.
Jul 10 08:06:11.961240 systemd[1]: Detected architecture x86-64.
Jul 10 08:06:11.961253 systemd[1]: Detected first boot.
Jul 10 08:06:11.961267 systemd[1]: Hostname set to .
Jul 10 08:06:11.961279 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 08:06:11.961294 zram_generator::config[1062]: No configuration found.
Jul 10 08:06:11.961317 kernel: Guest personality initialized and is inactive
Jul 10 08:06:11.961330 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 10 08:06:11.961362 kernel: Initialized host personality
Jul 10 08:06:11.961376 kernel: NET: Registered PF_VSOCK protocol family
Jul 10 08:06:11.961394 systemd[1]: Populated /etc with preset unit settings.
Jul 10 08:06:11.961409 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 10 08:06:11.961423 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 10 08:06:11.961436 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 10 08:06:11.961449 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 10 08:06:11.961488 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 10 08:06:11.961503 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 10 08:06:11.961517 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 10 08:06:11.961530 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 10 08:06:11.961566 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 10 08:06:11.961580 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 10 08:06:11.961594 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 10 08:06:11.961607 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 10 08:06:11.961629 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 08:06:11.961644 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 08:06:11.961670 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 10 08:06:11.961690 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 10 08:06:11.961704 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 10 08:06:11.961733 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 08:06:11.961746 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 10 08:06:11.961781 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 08:06:11.961797 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 08:06:11.961810 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 10 08:06:11.961823 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 10 08:06:11.961838 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 10 08:06:11.961851 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 10 08:06:11.961865 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 08:06:11.961878 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 08:06:11.961891 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 08:06:11.962418 systemd[1]: Reached target swap.target - Swaps.
Jul 10 08:06:11.962436 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 10 08:06:11.962456 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 10 08:06:11.962469 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 10 08:06:11.962482 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 08:06:11.962496 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 08:06:11.962509 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 08:06:11.962522 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 10 08:06:11.962536 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 10 08:06:11.962577 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 10 08:06:11.962593 systemd[1]: Mounting media.mount - External Media Directory...
Jul 10 08:06:11.962606 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 08:06:11.962619 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 10 08:06:11.962633 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 10 08:06:11.962645 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 10 08:06:11.962660 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 10 08:06:11.962685 systemd[1]: Reached target machines.target - Containers.
Jul 10 08:06:11.962707 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 10 08:06:11.962721 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 08:06:11.962734 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 08:06:11.962747 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 10 08:06:11.962760 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 08:06:11.962774 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 08:06:11.962787 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 08:06:11.962799 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 10 08:06:11.962813 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 08:06:11.962835 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 10 08:06:11.962849 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 10 08:06:11.962862 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 10 08:06:11.962876 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 10 08:06:11.962912 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 10 08:06:11.962928 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 08:06:11.962941 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 08:06:11.964017 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 08:06:11.964067 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 10 08:06:11.964088 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 10 08:06:11.964110 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 10 08:06:11.964142 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 08:06:11.964156 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 10 08:06:11.964168 systemd[1]: Stopped verity-setup.service.
Jul 10 08:06:11.964181 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 08:06:11.964193 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 10 08:06:11.964205 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 10 08:06:11.964218 systemd[1]: Mounted media.mount - External Media Directory.
Jul 10 08:06:11.964249 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 10 08:06:11.964264 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 10 08:06:11.964276 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 10 08:06:11.964288 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 08:06:11.964319 kernel: loop: module loaded
Jul 10 08:06:11.964332 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 10 08:06:11.964344 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 10 08:06:11.964356 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 08:06:11.964368 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 08:06:11.964389 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 08:06:11.964402 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 08:06:11.964415 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 08:06:11.964427 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 08:06:11.964441 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 08:06:11.964489 systemd-journald[1148]: Collecting audit messages is disabled.
Jul 10 08:06:11.964519 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 08:06:11.964533 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 10 08:06:11.964578 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 10 08:06:11.964597 systemd-journald[1148]: Journal started
Jul 10 08:06:11.964624 systemd-journald[1148]: Runtime Journal (/run/log/journal/bcdb1296d80a4e269373ef4785a0aeff) is 8M, max 78.5M, 70.5M free.
Jul 10 08:06:11.601547 systemd[1]: Queued start job for default target multi-user.target.
Jul 10 08:06:11.969062 kernel: fuse: init (API version 7.41)
Jul 10 08:06:11.969144 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 10 08:06:11.614985 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 10 08:06:11.615585 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 10 08:06:11.977982 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 10 08:06:11.984808 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 08:06:11.987990 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 10 08:06:11.995988 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 10 08:06:12.001964 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 08:06:12.009012 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 10 08:06:12.009120 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 08:06:12.034972 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 10 08:06:12.035047 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 08:06:12.042989 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 08:06:12.053738 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 10 08:06:12.055984 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 08:06:12.059486 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 10 08:06:12.061516 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 10 08:06:12.061693 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 10 08:06:12.061975 kernel: ACPI: bus type drm_connector registered
Jul 10 08:06:12.062632 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 10 08:06:12.064623 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 10 08:06:12.067939 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 08:06:12.068766 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 08:06:12.089082 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 10 08:06:12.096914 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 10 08:06:12.119049 systemd-journald[1148]: Time spent on flushing to /var/log/journal/bcdb1296d80a4e269373ef4785a0aeff is 49.856ms for 971 entries.
Jul 10 08:06:12.119049 systemd-journald[1148]: System Journal (/var/log/journal/bcdb1296d80a4e269373ef4785a0aeff) is 8M, max 584.8M, 576.8M free.
Jul 10 08:06:12.244549 systemd-journald[1148]: Received client request to flush runtime journal.
Jul 10 08:06:12.244605 kernel: loop0: detected capacity change from 0 to 8
Jul 10 08:06:12.244630 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 10 08:06:12.244650 kernel: loop1: detected capacity change from 0 to 224512
Jul 10 08:06:12.144935 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 10 08:06:12.146171 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 10 08:06:12.152088 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 10 08:06:12.164571 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 08:06:12.179721 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 08:06:12.249178 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 10 08:06:12.269751 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 10 08:06:12.304108 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 10 08:06:12.311199 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 08:06:12.321666 kernel: loop2: detected capacity change from 0 to 114000
Jul 10 08:06:12.367129 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Jul 10 08:06:12.367152 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Jul 10 08:06:12.374977 kernel: loop3: detected capacity change from 0 to 146488
Jul 10 08:06:12.376785 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 08:06:12.448206 kernel: loop4: detected capacity change from 0 to 8
Jul 10 08:06:12.454979 kernel: loop5: detected capacity change from 0 to 224512
Jul 10 08:06:12.533990 kernel: loop6: detected capacity change from 0 to 114000
Jul 10 08:06:12.571983 kernel: loop7: detected capacity change from 0 to 146488
Jul 10 08:06:12.617328 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 10 08:06:12.624213 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 10 08:06:12.648347 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 10 08:06:12.658257 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jul 10 08:06:12.659436 (sd-merge)[1223]: Merged extensions into '/usr'.
Jul 10 08:06:12.667622 systemd[1]: Reload requested from client PID 1179 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 10 08:06:12.667642 systemd[1]: Reloading...
Jul 10 08:06:12.806001 zram_generator::config[1251]: No configuration found.
Jul 10 08:06:12.944267 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 08:06:13.080216 systemd[1]: Reloading finished in 412 ms.
Jul 10 08:06:13.097483 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 10 08:06:13.106287 systemd[1]: Starting ensure-sysext.service...
Jul 10 08:06:13.109607 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 08:06:13.152043 systemd[1]: Reload requested from client PID 1306 ('systemctl') (unit ensure-sysext.service)...
Jul 10 08:06:13.152060 systemd[1]: Reloading...
Jul 10 08:06:13.168556 systemd-tmpfiles[1307]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 10 08:06:13.170419 systemd-tmpfiles[1307]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 10 08:06:13.170700 systemd-tmpfiles[1307]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 10 08:06:13.172382 systemd-tmpfiles[1307]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 10 08:06:13.175070 systemd-tmpfiles[1307]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 10 08:06:13.175485 systemd-tmpfiles[1307]: ACLs are not supported, ignoring.
Jul 10 08:06:13.175619 systemd-tmpfiles[1307]: ACLs are not supported, ignoring.
Jul 10 08:06:13.190680 systemd-tmpfiles[1307]: Detected autofs mount point /boot during canonicalization of boot.
Jul 10 08:06:13.190692 systemd-tmpfiles[1307]: Skipping /boot
Jul 10 08:06:13.228185 systemd-tmpfiles[1307]: Detected autofs mount point /boot during canonicalization of boot.
Jul 10 08:06:13.228200 systemd-tmpfiles[1307]: Skipping /boot
Jul 10 08:06:13.251438 ldconfig[1171]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 10 08:06:13.275016 zram_generator::config[1331]: No configuration found.
Jul 10 08:06:13.422741 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 08:06:13.527096 systemd[1]: Reloading finished in 374 ms.
Jul 10 08:06:13.539519 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 10 08:06:13.540514 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 10 08:06:13.546672 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 08:06:13.564134 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 10 08:06:13.567372 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 10 08:06:13.569729 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 10 08:06:13.575818 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 08:06:13.583091 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 08:06:13.587171 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 10 08:06:13.596378 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 08:06:13.596595 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 08:06:13.598114 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 08:06:13.606596 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 08:06:13.609887 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 08:06:13.611239 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 08:06:13.611387 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 08:06:13.620376 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 10 08:06:13.621073 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 08:06:13.628101 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 10 08:06:13.631409 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 08:06:13.631836 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 08:06:13.633458 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 08:06:13.635029 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 08:06:13.650745 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 08:06:13.652037 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 08:06:13.654468 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 08:06:13.659226 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 08:06:13.661112 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 08:06:13.661368 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 08:06:13.666436 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 10 08:06:13.668005 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 08:06:13.669810 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 08:06:13.671013 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 08:06:13.675299 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 10 08:06:13.684748 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 08:06:13.686111 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 08:06:13.691741 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 08:06:13.698196 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 08:06:13.699153 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 08:06:13.699289 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 08:06:13.699462 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 08:06:13.703334 systemd[1]: Finished ensure-sysext.service.
Jul 10 08:06:13.713210 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 10 08:06:13.716208 augenrules[1433]: No rules
Jul 10 08:06:13.717703 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 10 08:06:13.718025 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 10 08:06:13.729356 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 10 08:06:13.741302 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 08:06:13.742646 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 08:06:13.745082 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 10 08:06:13.746765 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 08:06:13.747065 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 08:06:13.751542 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 08:06:13.754424 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 08:06:13.756150 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 08:06:13.762171 systemd-udevd[1404]: Using default interface naming scheme 'v255'.
Jul 10 08:06:13.764692 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 08:06:13.765015 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 08:06:13.767142 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 10 08:06:13.768662 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 08:06:13.768728 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 10 08:06:13.819463 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 08:06:13.823793 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 10 08:06:13.824468 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 10 08:06:13.826607 systemd[1]: Reached target time-set.target - System Time Set.
Jul 10 08:06:13.861324 systemd-resolved[1397]: Positive Trust Anchors:
Jul 10 08:06:13.861357 systemd-resolved[1397]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 08:06:13.861402 systemd-resolved[1397]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 08:06:13.869919 systemd-resolved[1397]: Using system hostname 'ci-4391-0-0-n-29a01ddc69.novalocal'.
Jul 10 08:06:13.873076 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 08:06:13.874089 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 08:06:13.875037 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 10 08:06:13.875674 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 10 08:06:13.877042 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 10 08:06:13.877586 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 10 08:06:13.879231 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 10 08:06:13.879804 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 10 08:06:13.881006 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 10 08:06:13.881567 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 10 08:06:13.881602 systemd[1]: Reached target paths.target - Path Units.
Jul 10 08:06:13.882085 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 08:06:13.885654 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 10 08:06:13.889654 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 10 08:06:13.895539 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 10 08:06:13.897202 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 10 08:06:13.898330 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 10 08:06:13.909641 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 10 08:06:13.911641 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 10 08:06:13.913856 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 10 08:06:13.922796 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 08:06:13.923396 systemd[1]: Reached target basic.target - Basic System.
Jul 10 08:06:13.923918 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 10 08:06:13.923992 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 10 08:06:13.926427 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 10 08:06:13.928400 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 10 08:06:13.931583 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 10 08:06:13.945096 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 10 08:06:13.954127 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 10 08:06:13.956220 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 10 08:06:13.961338 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 10 08:06:13.971895 systemd-networkd[1456]: lo: Link UP
Jul 10 08:06:13.971900 systemd-networkd[1456]: lo: Gained carrier
Jul 10 08:06:13.973175 systemd-networkd[1456]: Enumeration completed
Jul 10 08:06:13.981233 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 10 08:06:13.984678 jq[1485]: false
Jul 10 08:06:13.987704 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 10 08:06:13.996347 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 10 08:06:14.000208 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 10 08:06:14.004004 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 10 08:06:14.012515 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 10 08:06:14.014352 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 10 08:06:14.025461 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 10 08:06:14.030210 systemd[1]: Starting update-engine.service - Update Engine...
Jul 10 08:06:14.036179 extend-filesystems[1486]: Found /dev/vda6
Jul 10 08:06:14.043019 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 10 08:06:14.044215 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 10 08:06:14.046030 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 10 08:06:14.047263 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 10 08:06:14.047517 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 10 08:06:14.048138 extend-filesystems[1486]: Found /dev/vda9
Jul 10 08:06:14.064420 extend-filesystems[1486]: Checking size of /dev/vda9
Jul 10 08:06:14.070245 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Refreshing passwd entry cache
Jul 10 08:06:14.070245 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Failure getting users, quitting
Jul 10 08:06:14.070245 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 10 08:06:14.070245 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Refreshing group entry cache
Jul 10 08:06:14.070245 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Failure getting groups, quitting
Jul 10 08:06:14.070245 google_oslogin_nss_cache[1490]: oslogin_cache_refresh[1490]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 10 08:06:14.048575 oslogin_cache_refresh[1490]: Refreshing passwd entry cache
Jul 10 08:06:14.051042 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 10 08:06:14.056585 oslogin_cache_refresh[1490]: Failure getting users, quitting
Jul 10 08:06:14.051263 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 10 08:06:14.056608 oslogin_cache_refresh[1490]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 10 08:06:14.066322 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 10 08:06:14.056678 oslogin_cache_refresh[1490]: Refreshing group entry cache
Jul 10 08:06:14.066565 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 10 08:06:14.057324 oslogin_cache_refresh[1490]: Failure getting groups, quitting
Jul 10 08:06:14.057333 oslogin_cache_refresh[1490]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 10 08:06:14.079418 systemd[1]: Reached target network.target - Network.
Jul 10 08:06:14.088905 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 10 08:06:14.098224 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 10 08:06:14.107506 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 10 08:06:14.116037 jq[1501]: true
Jul 10 08:06:14.124781 tar[1505]: linux-amd64/LICENSE
Jul 10 08:06:14.125113 tar[1505]: linux-amd64/helm
Jul 10 08:06:14.139779 extend-filesystems[1486]: Resized partition /dev/vda9
Jul 10 08:06:14.143256 dbus-daemon[1481]: [system] SELinux support is enabled
Jul 10 08:06:14.143412 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 10 08:06:14.146932 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 10 08:06:14.147313 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 10 08:06:14.147935 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 10 08:06:14.147980 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 10 08:06:14.165697 extend-filesystems[1534]: resize2fs 1.47.2 (1-Jan-2025)
Jul 10 08:06:14.166616 update_engine[1500]: I20250710 08:06:14.163478 1500 main.cc:92] Flatcar Update Engine starting
Jul 10 08:06:14.173638 systemd[1]: Started update-engine.service - Update Engine.
Jul 10 08:06:14.183056 update_engine[1500]: I20250710 08:06:14.174113 1500 update_check_scheduler.cc:74] Next update check in 7m6s
Jul 10 08:06:14.180831 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 10 08:06:14.188580 systemd[1]: motdgen.service: Deactivated successfully.
Jul 10 08:06:14.188858 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 10 08:06:14.203302 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Jul 10 08:06:14.205083 jq[1526]: true
Jul 10 08:06:14.213421 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Jul 10 08:06:14.210913 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 10 08:06:14.232641 (ntainerd)[1541]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 10 08:06:14.256664 extend-filesystems[1534]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 10 08:06:14.256664 extend-filesystems[1534]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 10 08:06:14.256664 extend-filesystems[1534]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Jul 10 08:06:14.265048 extend-filesystems[1486]: Resized filesystem in /dev/vda9
Jul 10 08:06:14.257806 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 10 08:06:14.258168 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 10 08:06:14.296485 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 10 08:06:14.357208 bash[1558]: Updated "/home/core/.ssh/authorized_keys"
Jul 10 08:06:14.353865 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 10 08:06:14.358389 systemd[1]: Starting sshkeys.service...
Jul 10 08:06:14.445876 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 10 08:06:14.451244 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 10 08:06:14.525192 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 10 08:06:14.575615 sshd_keygen[1535]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 10 08:06:14.584875 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 10 08:06:14.598542 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 10 08:06:14.811740 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 10 08:06:14.821790 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 10 08:06:14.852431 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 10 08:06:14.885329 systemd[1]: issuegen.service: Deactivated successfully.
Jul 10 08:06:14.886358 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 10 08:06:14.891738 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 10 08:06:14.925218 containerd[1541]: time="2025-07-10T08:06:14Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 10 08:06:14.927818 containerd[1541]: time="2025-07-10T08:06:14.927286830Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Jul 10 08:06:14.938983 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 10 08:06:14.944251 locksmithd[1536]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 10 08:06:14.949096 kernel: ACPI: button: Power Button [PWRF]
Jul 10 08:06:14.978356 containerd[1541]: time="2025-07-10T08:06:14.974932510Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.143µs"
Jul 10 08:06:14.978356 containerd[1541]: time="2025-07-10T08:06:14.978216869Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 10 08:06:14.978356 containerd[1541]: time="2025-07-10T08:06:14.978253017Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 10 08:06:14.979156 containerd[1541]: time="2025-07-10T08:06:14.978844647Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 10 08:06:14.979156 containerd[1541]: time="2025-07-10T08:06:14.978875214Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 10 08:06:14.979156 containerd[1541]: time="2025-07-10T08:06:14.978915369Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 10 08:06:14.980148 containerd[1541]: time="2025-07-10T08:06:14.979984264Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 10 08:06:14.980148 containerd[1541]: time="2025-07-10T08:06:14.980034649Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 10 08:06:14.981898 containerd[1541]: time="2025-07-10T08:06:14.980880325Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 10 08:06:14.981898 containerd[1541]: time="2025-07-10T08:06:14.980914158Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 10 08:06:14.981898 containerd[1541]: time="2025-07-10T08:06:14.980931961Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 10 08:06:14.981898 containerd[1541]: time="2025-07-10T08:06:14.980943353Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 10 08:06:14.981898 containerd[1541]: time="2025-07-10T08:06:14.981057447Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 10 08:06:14.989020 containerd[1541]: time="2025-07-10T08:06:14.988978904Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 10 08:06:14.989349 containerd[1541]: time="2025-07-10T08:06:14.989326866Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 10 08:06:14.989561 containerd[1541]: time="2025-07-10T08:06:14.989541819Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 10 08:06:14.989666 containerd[1541]: time="2025-07-10T08:06:14.989648499Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 10 08:06:14.990484 containerd[1541]: time="2025-07-10T08:06:14.990458418Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 10 08:06:14.991078 containerd[1541]: time="2025-07-10T08:06:14.991058003Z" level=info msg="metadata content store policy set" policy=shared
Jul 10 08:06:14.996611 systemd-networkd[1456]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 08:06:14.996622 systemd-networkd[1456]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 10 08:06:14.999034 kernel: mousedev: PS/2 mouse device common for all mice
Jul 10 08:06:15.000857 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 10 08:06:15.005930 containerd[1541]: time="2025-07-10T08:06:15.004082300Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 10 08:06:15.005930 containerd[1541]: time="2025-07-10T08:06:15.004154766Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 10 08:06:15.005930 containerd[1541]: time="2025-07-10T08:06:15.004174142Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 10 08:06:15.005930 containerd[1541]: time="2025-07-10T08:06:15.004190192Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 10 08:06:15.005930 containerd[1541]: time="2025-07-10T08:06:15.004208396Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 10 08:06:15.005930 containerd[1541]: time="2025-07-10T08:06:15.004221411Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 10 08:06:15.005930 containerd[1541]: time="2025-07-10T08:06:15.004234966Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 10 08:06:15.005930 containerd[1541]: time="2025-07-10T08:06:15.004253962Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 10 08:06:15.005930 containerd[1541]: time="2025-07-10T08:06:15.004265974Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 10 08:06:15.005930 containerd[1541]: time="2025-07-10T08:06:15.004278067Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 10 08:06:15.005930 containerd[1541]: time="2025-07-10T08:06:15.004303425Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 10 08:06:15.005930 containerd[1541]: time="2025-07-10T08:06:15.004318403Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 10 08:06:15.004705 systemd-networkd[1456]: eth0: Link UP
Jul 10 08:06:15.005992 systemd-logind[1499]: New seat seat0.
Jul 10 08:06:15.007918 containerd[1541]: time="2025-07-10T08:06:15.007290707Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 10 08:06:15.007918 containerd[1541]: time="2025-07-10T08:06:15.007340811Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 10 08:06:15.007918 containerd[1541]: time="2025-07-10T08:06:15.007361741Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 10 08:06:15.007918 containerd[1541]: time="2025-07-10T08:06:15.007375316Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 10 08:06:15.007918 containerd[1541]: time="2025-07-10T08:06:15.007387148Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 10 08:06:15.007918 containerd[1541]: time="2025-07-10T08:06:15.007399812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 10 08:06:15.007918 containerd[1541]: time="2025-07-10T08:06:15.007412556Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 10 08:06:15.007918 containerd[1541]: time="2025-07-10T08:06:15.007424638Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 10 08:06:15.007918 containerd[1541]: time="2025-07-10T08:06:15.007437543Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 10 08:06:15.007918 containerd[1541]: time="2025-07-10T08:06:15.007449645Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 10 08:06:15.007918 containerd[1541]: time="2025-07-10T08:06:15.007462389Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 10 08:06:15.010455 containerd[1541]: time="2025-07-10T08:06:15.009235685Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 10 08:06:15.010455 containerd[1541]: time="2025-07-10T08:06:15.009268477Z" level=info msg="Start snapshots syncer"
Jul 10 08:06:15.010455 containerd[1541]: time="2025-07-10T08:06:15.009302741Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 10 08:06:15.010168 systemd-networkd[1456]: eth0: Gained carrier
Jul 10 08:06:15.010573 containerd[1541]: time="2025-07-10T08:06:15.009575102Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 10 08:06:15.010573 containerd[1541]: time="2025-07-10T08:06:15.009636657Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 10 08:06:15.010573 containerd[1541]: time="2025-07-10T08:06:15.009702671Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 10 08:06:15.010573 containerd[1541]: time="2025-07-10T08:06:15.009827084Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 10 08:06:15.010573 containerd[1541]: time="2025-07-10T08:06:15.009852111Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 10 08:06:15.010573 containerd[1541]: time="2025-07-10T08:06:15.009863793Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 10 08:06:15.010573 containerd[1541]: time="2025-07-10T08:06:15.009875896Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 10 08:06:15.010573 containerd[1541]: time="2025-07-10T08:06:15.009887798Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 10 08:06:15.010573 containerd[1541]: time="2025-07-10T08:06:15.009899741Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 10 08:06:15.010573 containerd[1541]: time="2025-07-10T08:06:15.009911282Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 10 08:06:15.010208 systemd-networkd[1456]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 08:06:15.011977 containerd[1541]: time="2025-07-10T08:06:15.011932142Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 10 08:06:15.012667 containerd[1541]: time="2025-07-10T08:06:15.012097723Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 10 08:06:15.012667 containerd[1541]: time="2025-07-10T08:06:15.012336671Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 10 08:06:15.012667 containerd[1541]: time="2025-07-10T08:06:15.012385973Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 10 08:06:15.012667 containerd[1541]: time="2025-07-10T08:06:15.012406762Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 10 08:06:15.012667 containerd[1541]: time="2025-07-10T08:06:15.012427692Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 10 08:06:15.012667 containerd[1541]: time="2025-07-10T08:06:15.012441337Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 10 08:06:15.012667 containerd[1541]: time="2025-07-10T08:06:15.012450615Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 10 08:06:15.012667 containerd[1541]: time="2025-07-10T08:06:15.012461335Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 10 08:06:15.012667 containerd[1541]: time="2025-07-10T08:06:15.012472596Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 10 08:06:15.012667 containerd[1541]: time="2025-07-10T08:06:15.012492133Z" level=info msg="runtime interface created"
Jul 10 08:06:15.012667 containerd[1541]: time="2025-07-10T08:06:15.012498074Z" level=info msg="created NRI interface"
Jul 10 08:06:15.014050 containerd[1541]: time="2025-07-10T08:06:15.012507662Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 10 08:06:15.014050 containerd[1541]: time="2025-07-10T08:06:15.013913037Z" level=info msg="Connect containerd service"
Jul 10 08:06:15.014050 containerd[1541]: time="2025-07-10T08:06:15.013968451Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 10 08:06:15.015408 containerd[1541]: time="2025-07-10T08:06:15.015381962Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 10 08:06:15.050208 systemd-networkd[1456]: eth0: DHCPv4 address 172.24.4.5/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jul 10 08:06:15.051204 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection.
Jul 10 08:06:15.057613 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jul 10 08:06:15.064305 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 10 08:06:15.051356 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection.
Jul 10 08:06:15.245737 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 10 08:06:15.257172 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 10 08:06:15.259371 systemd[1]: Reached target getty.target - Login Prompts. Jul 10 08:06:15.274289 systemd[1]: Started systemd-logind.service - User Login Management. Jul 10 08:06:15.582488 systemd-logind[1499]: Watching system buttons on /dev/input/event2 (Power Button) Jul 10 08:06:15.592909 systemd-logind[1499]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 10 08:06:15.594746 containerd[1541]: time="2025-07-10T08:06:15.594630763Z" level=info msg="Start subscribing containerd event" Jul 10 08:06:15.595781 containerd[1541]: time="2025-07-10T08:06:15.595740715Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 08:06:15.595901 containerd[1541]: time="2025-07-10T08:06:15.595883513Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 08:06:15.596486 containerd[1541]: time="2025-07-10T08:06:15.596355538Z" level=info msg="Start recovering state" Jul 10 08:06:15.596663 containerd[1541]: time="2025-07-10T08:06:15.596636235Z" level=info msg="Start event monitor" Jul 10 08:06:15.596715 containerd[1541]: time="2025-07-10T08:06:15.596684325Z" level=info msg="Start cni network conf syncer for default" Jul 10 08:06:15.596741 containerd[1541]: time="2025-07-10T08:06:15.596717898Z" level=info msg="Start streaming server" Jul 10 08:06:15.596741 containerd[1541]: time="2025-07-10T08:06:15.596737334Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 10 08:06:15.596798 containerd[1541]: time="2025-07-10T08:06:15.596774013Z" level=info msg="runtime interface starting up..." Jul 10 08:06:15.596798 containerd[1541]: time="2025-07-10T08:06:15.596790244Z" level=info msg="starting plugins..." 
Jul 10 08:06:15.596850 containerd[1541]: time="2025-07-10T08:06:15.596817765Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 10 08:06:15.598304 containerd[1541]: time="2025-07-10T08:06:15.598274648Z" level=info msg="containerd successfully booted in 0.679198s" Jul 10 08:06:15.605638 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 08:06:15.690983 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jul 10 08:06:15.693201 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jul 10 08:06:15.698717 kernel: Console: switching to colour dummy device 80x25 Jul 10 08:06:15.700908 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jul 10 08:06:15.700945 kernel: [drm] features: -context_init Jul 10 08:06:15.702978 kernel: [drm] number of scanouts: 1 Jul 10 08:06:15.704069 kernel: [drm] number of cap sets: 0 Jul 10 08:06:15.709029 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Jul 10 08:06:15.736038 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 10 08:06:15.739864 systemd[1]: Started sshd@0-172.24.4.5:22-172.24.4.1:50066.service - OpenSSH per-connection server daemon (172.24.4.1:50066). Jul 10 08:06:15.744259 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 08:06:15.874578 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 08:06:15.935856 tar[1505]: linux-amd64/README.md Jul 10 08:06:15.967041 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 10 08:06:16.737662 systemd-networkd[1456]: eth0: Gained IPv6LL Jul 10 08:06:16.740713 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection. Jul 10 08:06:16.745676 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 08:06:16.748485 systemd[1]: Reached target network-online.target - Network is Online. 
Jul 10 08:06:16.753742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 08:06:16.757619 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 08:06:16.775031 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 10 08:06:16.786046 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 10 08:06:16.844006 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 08:06:17.282575 sshd[1633]: Accepted publickey for core from 172.24.4.1 port 50066 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM Jul 10 08:06:17.287132 sshd-session[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 08:06:17.312795 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 08:06:17.316189 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 08:06:17.339113 systemd-logind[1499]: New session 1 of user core. Jul 10 08:06:17.358671 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 08:06:17.365334 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 10 08:06:17.382410 (systemd)[1661]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 08:06:17.385843 systemd-logind[1499]: New session c1 of user core. Jul 10 08:06:17.595012 systemd[1661]: Queued start job for default target default.target. Jul 10 08:06:17.600182 systemd[1661]: Created slice app.slice - User Application Slice. Jul 10 08:06:17.600217 systemd[1661]: Reached target paths.target - Paths. Jul 10 08:06:17.600389 systemd[1661]: Reached target timers.target - Timers. Jul 10 08:06:17.604550 systemd[1661]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 08:06:17.631764 systemd[1661]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 08:06:17.631930 systemd[1661]: Reached target sockets.target - Sockets. 
Jul 10 08:06:17.632201 systemd[1661]: Reached target basic.target - Basic System. Jul 10 08:06:17.632421 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 08:06:17.634058 systemd[1661]: Reached target default.target - Main User Target. Jul 10 08:06:17.634152 systemd[1661]: Startup finished in 239ms. Jul 10 08:06:17.642426 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 08:06:18.071641 systemd[1]: Started sshd@1-172.24.4.5:22-172.24.4.1:50074.service - OpenSSH per-connection server daemon (172.24.4.1:50074). Jul 10 08:06:18.817052 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 10 08:06:18.822042 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 10 08:06:19.265542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 08:06:19.275608 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 08:06:19.526757 sshd[1672]: Accepted publickey for core from 172.24.4.1 port 50074 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM Jul 10 08:06:19.529860 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 08:06:19.546086 systemd-logind[1499]: New session 2 of user core. Jul 10 08:06:19.555447 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 10 08:06:20.192014 sshd[1683]: Connection closed by 172.24.4.1 port 50074 Jul 10 08:06:20.193183 sshd-session[1672]: pam_unix(sshd:session): session closed for user core Jul 10 08:06:20.210494 systemd[1]: sshd@1-172.24.4.5:22-172.24.4.1:50074.service: Deactivated successfully. Jul 10 08:06:20.214420 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 08:06:20.218541 systemd-logind[1499]: Session 2 logged out. Waiting for processes to exit. 
Jul 10 08:06:20.224828 systemd[1]: Started sshd@2-172.24.4.5:22-172.24.4.1:50078.service - OpenSSH per-connection server daemon (172.24.4.1:50078). Jul 10 08:06:20.227904 systemd-logind[1499]: Removed session 2. Jul 10 08:06:20.316183 login[1601]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 10 08:06:20.321631 login[1609]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 10 08:06:20.329534 systemd-logind[1499]: New session 3 of user core. Jul 10 08:06:20.336354 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 10 08:06:20.342541 systemd-logind[1499]: New session 4 of user core. Jul 10 08:06:20.343521 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 08:06:21.453698 kubelet[1682]: E0710 08:06:21.453527 1682 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 08:06:21.460457 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 08:06:21.460847 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 08:06:21.462518 systemd[1]: kubelet.service: Consumed 2.508s CPU time, 266.5M memory peak. Jul 10 08:06:21.669549 sshd[1694]: Accepted publickey for core from 172.24.4.1 port 50078 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM Jul 10 08:06:21.674682 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 08:06:21.691068 systemd-logind[1499]: New session 5 of user core. Jul 10 08:06:21.705622 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 10 08:06:22.294545 sshd[1727]: Connection closed by 172.24.4.1 port 50078 Jul 10 08:06:22.295758 sshd-session[1694]: pam_unix(sshd:session): session closed for user core Jul 10 08:06:22.304644 systemd[1]: sshd@2-172.24.4.5:22-172.24.4.1:50078.service: Deactivated successfully. Jul 10 08:06:22.309678 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 08:06:22.312898 systemd-logind[1499]: Session 5 logged out. Waiting for processes to exit. Jul 10 08:06:22.317088 systemd-logind[1499]: Removed session 5. Jul 10 08:06:22.851023 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 10 08:06:22.859002 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 10 08:06:22.868793 coreos-metadata[1479]: Jul 10 08:06:22.868 WARN failed to locate config-drive, using the metadata service API instead Jul 10 08:06:22.879004 coreos-metadata[1564]: Jul 10 08:06:22.878 WARN failed to locate config-drive, using the metadata service API instead Jul 10 08:06:22.922372 coreos-metadata[1479]: Jul 10 08:06:22.922 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jul 10 08:06:22.924568 coreos-metadata[1564]: Jul 10 08:06:22.924 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jul 10 08:06:23.216578 coreos-metadata[1479]: Jul 10 08:06:23.216 INFO Fetch successful Jul 10 08:06:23.216578 coreos-metadata[1479]: Jul 10 08:06:23.216 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 10 08:06:23.233540 coreos-metadata[1479]: Jul 10 08:06:23.233 INFO Fetch successful Jul 10 08:06:23.233899 coreos-metadata[1479]: Jul 10 08:06:23.233 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jul 10 08:06:23.248097 coreos-metadata[1564]: Jul 10 08:06:23.247 INFO Fetch successful Jul 10 08:06:23.248097 coreos-metadata[1564]: Jul 10 08:06:23.248 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 10 08:06:23.255275 
coreos-metadata[1479]: Jul 10 08:06:23.255 INFO Fetch successful Jul 10 08:06:23.255709 coreos-metadata[1479]: Jul 10 08:06:23.255 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jul 10 08:06:23.263812 coreos-metadata[1564]: Jul 10 08:06:23.263 INFO Fetch successful Jul 10 08:06:23.269759 coreos-metadata[1479]: Jul 10 08:06:23.269 INFO Fetch successful Jul 10 08:06:23.270248 coreos-metadata[1479]: Jul 10 08:06:23.270 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jul 10 08:06:23.270754 unknown[1564]: wrote ssh authorized keys file for user: core Jul 10 08:06:23.286086 coreos-metadata[1479]: Jul 10 08:06:23.285 INFO Fetch successful Jul 10 08:06:23.286351 coreos-metadata[1479]: Jul 10 08:06:23.286 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jul 10 08:06:23.301102 coreos-metadata[1479]: Jul 10 08:06:23.301 INFO Fetch successful Jul 10 08:06:23.333999 update-ssh-keys[1737]: Updated "/home/core/.ssh/authorized_keys" Jul 10 08:06:23.338419 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 10 08:06:23.346425 systemd[1]: Finished sshkeys.service. Jul 10 08:06:23.358688 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 10 08:06:23.360568 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 08:06:23.362514 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 08:06:23.363486 systemd[1]: Startup finished in 3.909s (kernel) + 22.039s (initrd) + 12.671s (userspace) = 38.620s. Jul 10 08:06:31.662344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 08:06:31.667095 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 08:06:32.156760 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 08:06:32.178565 (kubelet)[1753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 08:06:32.289608 kubelet[1753]: E0710 08:06:32.289495 1753 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 08:06:32.309619 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 08:06:32.311003 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 08:06:32.312561 systemd[1]: kubelet.service: Consumed 412ms CPU time, 108.3M memory peak. Jul 10 08:06:32.317730 systemd[1]: Started sshd@3-172.24.4.5:22-172.24.4.1:43202.service - OpenSSH per-connection server daemon (172.24.4.1:43202). Jul 10 08:06:33.421094 sshd[1761]: Accepted publickey for core from 172.24.4.1 port 43202 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM Jul 10 08:06:33.423189 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 08:06:33.437096 systemd-logind[1499]: New session 6 of user core. Jul 10 08:06:33.448411 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 10 08:06:34.051836 sshd[1764]: Connection closed by 172.24.4.1 port 43202 Jul 10 08:06:34.053120 sshd-session[1761]: pam_unix(sshd:session): session closed for user core Jul 10 08:06:34.071135 systemd[1]: sshd@3-172.24.4.5:22-172.24.4.1:43202.service: Deactivated successfully. Jul 10 08:06:34.076496 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 08:06:34.080452 systemd-logind[1499]: Session 6 logged out. Waiting for processes to exit. 
Jul 10 08:06:34.086496 systemd[1]: Started sshd@4-172.24.4.5:22-172.24.4.1:34028.service - OpenSSH per-connection server daemon (172.24.4.1:34028). Jul 10 08:06:34.089832 systemd-logind[1499]: Removed session 6. Jul 10 08:06:35.413787 sshd[1770]: Accepted publickey for core from 172.24.4.1 port 34028 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM Jul 10 08:06:35.418178 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 08:06:35.433075 systemd-logind[1499]: New session 7 of user core. Jul 10 08:06:35.441400 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 10 08:06:36.052025 sshd[1773]: Connection closed by 172.24.4.1 port 34028 Jul 10 08:06:36.054472 sshd-session[1770]: pam_unix(sshd:session): session closed for user core Jul 10 08:06:36.070427 systemd[1]: sshd@4-172.24.4.5:22-172.24.4.1:34028.service: Deactivated successfully. Jul 10 08:06:36.075254 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 08:06:36.077519 systemd-logind[1499]: Session 7 logged out. Waiting for processes to exit. Jul 10 08:06:36.084316 systemd[1]: Started sshd@5-172.24.4.5:22-172.24.4.1:34036.service - OpenSSH per-connection server daemon (172.24.4.1:34036). Jul 10 08:06:36.086862 systemd-logind[1499]: Removed session 7. Jul 10 08:06:37.435316 sshd[1779]: Accepted publickey for core from 172.24.4.1 port 34036 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM Jul 10 08:06:37.440844 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 08:06:37.461335 systemd-logind[1499]: New session 8 of user core. Jul 10 08:06:37.474417 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 10 08:06:38.052009 sshd[1782]: Connection closed by 172.24.4.1 port 34036 Jul 10 08:06:38.052816 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Jul 10 08:06:38.074266 systemd[1]: sshd@5-172.24.4.5:22-172.24.4.1:34036.service: Deactivated successfully. Jul 10 08:06:38.078639 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 08:06:38.081115 systemd-logind[1499]: Session 8 logged out. Waiting for processes to exit. Jul 10 08:06:38.088349 systemd[1]: Started sshd@6-172.24.4.5:22-172.24.4.1:34040.service - OpenSSH per-connection server daemon (172.24.4.1:34040). Jul 10 08:06:38.091168 systemd-logind[1499]: Removed session 8. Jul 10 08:06:39.582546 sshd[1788]: Accepted publickey for core from 172.24.4.1 port 34040 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM Jul 10 08:06:39.587672 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 08:06:39.606070 systemd-logind[1499]: New session 9 of user core. Jul 10 08:06:39.614335 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 10 08:06:40.095080 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 08:06:40.095730 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 08:06:40.119640 sudo[1792]: pam_unix(sudo:session): session closed for user root Jul 10 08:06:40.334008 sshd[1791]: Connection closed by 172.24.4.1 port 34040 Jul 10 08:06:40.334771 sshd-session[1788]: pam_unix(sshd:session): session closed for user core Jul 10 08:06:40.353653 systemd[1]: sshd@6-172.24.4.5:22-172.24.4.1:34040.service: Deactivated successfully. Jul 10 08:06:40.358164 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 08:06:40.360867 systemd-logind[1499]: Session 9 logged out. Waiting for processes to exit. 
Jul 10 08:06:40.367499 systemd[1]: Started sshd@7-172.24.4.5:22-172.24.4.1:34054.service - OpenSSH per-connection server daemon (172.24.4.1:34054). Jul 10 08:06:40.370197 systemd-logind[1499]: Removed session 9. Jul 10 08:06:41.550222 sshd[1798]: Accepted publickey for core from 172.24.4.1 port 34054 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM Jul 10 08:06:41.553599 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 08:06:41.566133 systemd-logind[1499]: New session 10 of user core. Jul 10 08:06:41.579377 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 10 08:06:42.046042 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 08:06:42.046726 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 08:06:42.080297 sudo[1803]: pam_unix(sudo:session): session closed for user root Jul 10 08:06:42.094262 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 10 08:06:42.094940 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 08:06:42.119914 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 08:06:42.214590 augenrules[1825]: No rules Jul 10 08:06:42.217445 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 08:06:42.218442 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 08:06:42.222201 sudo[1802]: pam_unix(sudo:session): session closed for user root Jul 10 08:06:42.375362 sshd[1801]: Connection closed by 172.24.4.1 port 34054 Jul 10 08:06:42.378275 sshd-session[1798]: pam_unix(sshd:session): session closed for user core Jul 10 08:06:42.394639 systemd[1]: sshd@7-172.24.4.5:22-172.24.4.1:34054.service: Deactivated successfully. Jul 10 08:06:42.398680 systemd[1]: session-10.scope: Deactivated successfully. 
Jul 10 08:06:42.402135 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 10 08:06:42.405488 systemd-logind[1499]: Session 10 logged out. Waiting for processes to exit. Jul 10 08:06:42.410220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 08:06:42.413488 systemd[1]: Started sshd@8-172.24.4.5:22-172.24.4.1:34068.service - OpenSSH per-connection server daemon (172.24.4.1:34068). Jul 10 08:06:42.419303 systemd-logind[1499]: Removed session 10. Jul 10 08:06:42.757744 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 08:06:42.768481 (kubelet)[1844]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 08:06:43.068720 kubelet[1844]: E0710 08:06:43.068436 1844 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 08:06:43.071653 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 08:06:43.071863 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 08:06:43.072506 systemd[1]: kubelet.service: Consumed 371ms CPU time, 108.1M memory peak. Jul 10 08:06:43.874864 sshd[1835]: Accepted publickey for core from 172.24.4.1 port 34068 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM Jul 10 08:06:43.879233 sshd-session[1835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 08:06:43.896074 systemd-logind[1499]: New session 11 of user core. Jul 10 08:06:43.910368 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 10 08:06:44.318731 sudo[1853]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 08:06:44.320622 sudo[1853]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 08:06:45.420722 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 10 08:06:45.448027 (dockerd)[1872]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 08:06:46.781168 dockerd[1872]: time="2025-07-10T08:06:46.780872632Z" level=info msg="Starting up" Jul 10 08:06:46.786487 dockerd[1872]: time="2025-07-10T08:06:46.786315600Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 10 08:06:46.838796 dockerd[1872]: time="2025-07-10T08:06:46.838757816Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 10 08:06:47.046353 systemd-timesyncd[1439]: Contacted time server 23.142.248.8:123 (2.flatcar.pool.ntp.org). Jul 10 08:06:47.047861 systemd-timesyncd[1439]: Initial clock synchronization to Thu 2025-07-10 08:06:47.227828 UTC. Jul 10 08:06:47.428337 dockerd[1872]: time="2025-07-10T08:06:47.428140042Z" level=info msg="Loading containers: start." Jul 10 08:06:47.655091 kernel: Initializing XFRM netlink socket Jul 10 08:06:48.501875 systemd-networkd[1456]: docker0: Link UP Jul 10 08:06:48.512783 dockerd[1872]: time="2025-07-10T08:06:48.512575047Z" level=info msg="Loading containers: done." Jul 10 08:06:48.537623 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1198540933-merged.mount: Deactivated successfully. 
Jul 10 08:06:48.548312 dockerd[1872]: time="2025-07-10T08:06:48.548230086Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 08:06:48.548474 dockerd[1872]: time="2025-07-10T08:06:48.548441381Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 10 08:06:48.548662 dockerd[1872]: time="2025-07-10T08:06:48.548625581Z" level=info msg="Initializing buildkit" Jul 10 08:06:48.609424 dockerd[1872]: time="2025-07-10T08:06:48.609361331Z" level=info msg="Completed buildkit initialization" Jul 10 08:06:48.625686 dockerd[1872]: time="2025-07-10T08:06:48.625576753Z" level=info msg="Daemon has completed initialization" Jul 10 08:06:48.626129 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 10 08:06:48.628212 dockerd[1872]: time="2025-07-10T08:06:48.626181863Z" level=info msg="API listen on /run/docker.sock" Jul 10 08:06:50.491607 containerd[1541]: time="2025-07-10T08:06:50.491133087Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 10 08:06:51.598724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount725237092.mount: Deactivated successfully. Jul 10 08:06:53.161763 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 10 08:06:53.166157 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 08:06:53.424196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 08:06:53.434237 (kubelet)[2143]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 08:06:53.688698 kubelet[2143]: E0710 08:06:53.688514 2143 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 08:06:53.694709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 08:06:53.694865 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 08:06:53.695625 systemd[1]: kubelet.service: Consumed 303ms CPU time, 108.3M memory peak.
Jul 10 08:06:54.194662 containerd[1541]: time="2025-07-10T08:06:54.194535393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:06:54.197527 containerd[1541]: time="2025-07-10T08:06:54.197414166Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799053"
Jul 10 08:06:54.199501 containerd[1541]: time="2025-07-10T08:06:54.199318354Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:06:54.210416 containerd[1541]: time="2025-07-10T08:06:54.210223323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:06:54.215345 containerd[1541]: time="2025-07-10T08:06:54.215173294Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 3.723535574s"
Jul 10 08:06:54.215694 containerd[1541]: time="2025-07-10T08:06:54.215434786Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\""
Jul 10 08:06:54.227519 containerd[1541]: time="2025-07-10T08:06:54.227378265Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jul 10 08:06:56.417358 containerd[1541]: time="2025-07-10T08:06:56.416746909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:06:56.423953 containerd[1541]: time="2025-07-10T08:06:56.423844227Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783920"
Jul 10 08:06:56.462440 containerd[1541]: time="2025-07-10T08:06:56.462196496Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:06:56.499829 containerd[1541]: time="2025-07-10T08:06:56.499646144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:06:56.504609 containerd[1541]: time="2025-07-10T08:06:56.504469537Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 2.276920716s"
Jul 10 08:06:56.505241 containerd[1541]: time="2025-07-10T08:06:56.505115880Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\""
Jul 10 08:06:56.510076 containerd[1541]: time="2025-07-10T08:06:56.509872601Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jul 10 08:06:58.439272 containerd[1541]: time="2025-07-10T08:06:58.439060036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:06:58.441040 containerd[1541]: time="2025-07-10T08:06:58.440707413Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176924"
Jul 10 08:06:58.442397 containerd[1541]: time="2025-07-10T08:06:58.442362691Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:06:58.446375 containerd[1541]: time="2025-07-10T08:06:58.446318716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:06:58.449285 containerd[1541]: time="2025-07-10T08:06:58.449225614Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.939268409s"
Jul 10 08:06:58.449285 containerd[1541]: time="2025-07-10T08:06:58.449270939Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\""
Jul 10 08:06:58.453484 containerd[1541]: time="2025-07-10T08:06:58.453281853Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jul 10 08:06:59.752122 update_engine[1500]: I20250710 08:06:59.751350 1500 update_attempter.cc:509] Updating boot flags...
Jul 10 08:07:00.098461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2977843353.mount: Deactivated successfully.
Jul 10 08:07:00.739986 containerd[1541]: time="2025-07-10T08:07:00.739740889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:07:00.742157 containerd[1541]: time="2025-07-10T08:07:00.741977380Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895371"
Jul 10 08:07:00.743275 containerd[1541]: time="2025-07-10T08:07:00.743223458Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:07:00.747303 containerd[1541]: time="2025-07-10T08:07:00.747216856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:07:00.747889 containerd[1541]: time="2025-07-10T08:07:00.747855565Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 2.294493519s"
Jul 10 08:07:00.747997 containerd[1541]: time="2025-07-10T08:07:00.747977721Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\""
Jul 10 08:07:00.750484 containerd[1541]: time="2025-07-10T08:07:00.750047460Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 10 08:07:01.524305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount832021569.mount: Deactivated successfully.
Jul 10 08:07:03.053866 containerd[1541]: time="2025-07-10T08:07:03.053774948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:07:03.055657 containerd[1541]: time="2025-07-10T08:07:03.055356663Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Jul 10 08:07:03.057083 containerd[1541]: time="2025-07-10T08:07:03.057037668Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:07:03.061447 containerd[1541]: time="2025-07-10T08:07:03.061375915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:07:03.065966 containerd[1541]: time="2025-07-10T08:07:03.065844064Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.315738506s"
Jul 10 08:07:03.065966 containerd[1541]: time="2025-07-10T08:07:03.065883760Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 10 08:07:03.069126 containerd[1541]: time="2025-07-10T08:07:03.069083690Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 10 08:07:03.697712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4173990690.mount: Deactivated successfully.
Jul 10 08:07:03.713045 containerd[1541]: time="2025-07-10T08:07:03.711743031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 08:07:03.715156 containerd[1541]: time="2025-07-10T08:07:03.715097313Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jul 10 08:07:03.716635 containerd[1541]: time="2025-07-10T08:07:03.716563377Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 08:07:03.724496 containerd[1541]: time="2025-07-10T08:07:03.724373647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 08:07:03.728009 containerd[1541]: time="2025-07-10T08:07:03.727883879Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 658.741319ms"
Jul 10 08:07:03.728365 containerd[1541]: time="2025-07-10T08:07:03.728316534Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 10 08:07:03.733727 containerd[1541]: time="2025-07-10T08:07:03.733421876Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 10 08:07:03.912132 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 10 08:07:03.920237 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 08:07:04.685712 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 08:07:04.701767 (kubelet)[2248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 08:07:04.791876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4214537342.mount: Deactivated successfully.
Jul 10 08:07:04.825249 kubelet[2248]: E0710 08:07:04.825174 2248 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 08:07:04.828220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 08:07:04.828710 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 08:07:04.829674 systemd[1]: kubelet.service: Consumed 615ms CPU time, 110.4M memory peak.
Jul 10 08:07:09.357020 containerd[1541]: time="2025-07-10T08:07:09.356597031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:07:09.363376 containerd[1541]: time="2025-07-10T08:07:09.360040394Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368"
Jul 10 08:07:09.367002 containerd[1541]: time="2025-07-10T08:07:09.364816591Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:07:09.374887 containerd[1541]: time="2025-07-10T08:07:09.374788828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:07:09.378261 containerd[1541]: time="2025-07-10T08:07:09.378179209Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.644667125s"
Jul 10 08:07:09.378432 containerd[1541]: time="2025-07-10T08:07:09.378325594Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jul 10 08:07:13.891898 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 08:07:13.894522 systemd[1]: kubelet.service: Consumed 615ms CPU time, 110.4M memory peak.
Jul 10 08:07:13.903258 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 08:07:13.978779 systemd[1]: Reload requested from client PID 2337 ('systemctl') (unit session-11.scope)...
Jul 10 08:07:13.979119 systemd[1]: Reloading...
Jul 10 08:07:14.123065 zram_generator::config[2394]: No configuration found.
Jul 10 08:07:14.253878 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 08:07:14.427249 systemd[1]: Reloading finished in 447 ms.
Jul 10 08:07:14.996466 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 10 08:07:14.996669 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 10 08:07:14.997337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 08:07:14.997444 systemd[1]: kubelet.service: Consumed 291ms CPU time, 92.7M memory peak.
Jul 10 08:07:15.005660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 08:07:15.858234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 08:07:15.880627 (kubelet)[2446]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 10 08:07:15.999532 kubelet[2446]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 08:07:16.002515 kubelet[2446]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 10 08:07:16.002515 kubelet[2446]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 08:07:16.002515 kubelet[2446]: I0710 08:07:16.001973 2446 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 10 08:07:16.636002 kubelet[2446]: I0710 08:07:16.634683 2446 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 10 08:07:16.636002 kubelet[2446]: I0710 08:07:16.635043 2446 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 10 08:07:16.637815 kubelet[2446]: I0710 08:07:16.637749 2446 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 10 08:07:16.755037 kubelet[2446]: E0710 08:07:16.754906 2446 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.5:6443: connect: connection refused" logger="UnhandledError"
Jul 10 08:07:16.764121 kubelet[2446]: I0710 08:07:16.764066 2446 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 10 08:07:16.807730 kubelet[2446]: I0710 08:07:16.807663 2446 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 10 08:07:16.818823 kubelet[2446]: I0710 08:07:16.818729 2446 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 10 08:07:16.821711 kubelet[2446]: I0710 08:07:16.821536 2446 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 10 08:07:16.822221 kubelet[2446]: I0710 08:07:16.821613 2446 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4391-0-0-n-29a01ddc69.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 10 08:07:16.823186 kubelet[2446]: I0710 08:07:16.822294 2446 topology_manager.go:138] "Creating topology manager with none policy"
Jul 10 08:07:16.823186 kubelet[2446]: I0710 08:07:16.822319 2446 container_manager_linux.go:304] "Creating device plugin manager"
Jul 10 08:07:16.823186 kubelet[2446]: I0710 08:07:16.822785 2446 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 08:07:16.828553 kubelet[2446]: I0710 08:07:16.828462 2446 kubelet.go:446] "Attempting to sync node with API server"
Jul 10 08:07:16.828553 kubelet[2446]: I0710 08:07:16.828543 2446 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 10 08:07:16.828813 kubelet[2446]: I0710 08:07:16.828574 2446 kubelet.go:352] "Adding apiserver pod source"
Jul 10 08:07:16.828813 kubelet[2446]: I0710 08:07:16.828636 2446 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 10 08:07:16.851071 kubelet[2446]: W0710 08:07:16.850862 2446 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4391-0-0-n-29a01ddc69.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.5:6443: connect: connection refused
Jul 10 08:07:16.851432 kubelet[2446]: E0710 08:07:16.851380 2446 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4391-0-0-n-29a01ddc69.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.5:6443: connect: connection refused" logger="UnhandledError"
Jul 10 08:07:16.851826 kubelet[2446]: W0710 08:07:16.851751 2446 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.5:6443: connect: connection refused
Jul 10 08:07:16.852151 kubelet[2446]: E0710 08:07:16.852095 2446 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.5:6443: connect: connection refused" logger="UnhandledError"
Jul 10 08:07:16.852741 kubelet[2446]: I0710 08:07:16.852698 2446 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Jul 10 08:07:16.854770 kubelet[2446]: I0710 08:07:16.854728 2446 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 10 08:07:16.858007 kubelet[2446]: W0710 08:07:16.857624 2446 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 10 08:07:16.865221 kubelet[2446]: I0710 08:07:16.865176 2446 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 10 08:07:16.865607 kubelet[2446]: I0710 08:07:16.865574 2446 server.go:1287] "Started kubelet"
Jul 10 08:07:16.872052 kubelet[2446]: I0710 08:07:16.872011 2446 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 10 08:07:16.878050 kubelet[2446]: E0710 08:07:16.873433 2446 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.5:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.5:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4391-0-0-n-29a01ddc69.novalocal.1850d559d5f360b0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4391-0-0-n-29a01ddc69.novalocal,UID:ci-4391-0-0-n-29a01ddc69.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4391-0-0-n-29a01ddc69.novalocal,},FirstTimestamp:2025-07-10 08:07:16.865425584 +0000 UTC m=+0.972004013,LastTimestamp:2025-07-10 08:07:16.865425584 +0000 UTC m=+0.972004013,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4391-0-0-n-29a01ddc69.novalocal,}"
Jul 10 08:07:16.878590 kubelet[2446]: I0710 08:07:16.878504 2446 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 10 08:07:16.879371 kubelet[2446]: I0710 08:07:16.879324 2446 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 10 08:07:16.879842 kubelet[2446]: E0710 08:07:16.879795 2446 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4391-0-0-n-29a01ddc69.novalocal\" not found"
Jul 10 08:07:16.881136 kubelet[2446]: I0710 08:07:16.881091 2446 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 10 08:07:16.882385 kubelet[2446]: I0710 08:07:16.882342 2446 reconciler.go:26] "Reconciler: start to sync state"
Jul 10 08:07:16.882884 kubelet[2446]: E0710 08:07:16.882818 2446 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4391-0-0-n-29a01ddc69.novalocal?timeout=10s\": dial tcp 172.24.4.5:6443: connect: connection refused" interval="200ms"
Jul 10 08:07:16.887587 kubelet[2446]: I0710 08:07:16.884840 2446 server.go:479] "Adding debug handlers to kubelet server"
Jul 10 08:07:16.900101 kubelet[2446]: W0710 08:07:16.899938 2446 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.5:6443: connect: connection refused
Jul 10 08:07:16.903395 kubelet[2446]: E0710 08:07:16.903300 2446 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.5:6443: connect: connection refused" logger="UnhandledError"
Jul 10 08:07:16.903537 kubelet[2446]: I0710 08:07:16.884993 2446 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 10 08:07:16.904582 kubelet[2446]: I0710 08:07:16.904536 2446 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 10 08:07:16.904778 kubelet[2446]: I0710 08:07:16.885969 2446 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 10 08:07:16.905261 kubelet[2446]: I0710 08:07:16.900614 2446 factory.go:221] Registration of the containerd container factory successfully
Jul 10 08:07:16.905408 kubelet[2446]: I0710 08:07:16.905309 2446 factory.go:221] Registration of the systemd container factory successfully
Jul 10 08:07:16.905868 kubelet[2446]: I0710 08:07:16.905827 2446 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 10 08:07:16.908434 kubelet[2446]: E0710 08:07:16.908372 2446 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 10 08:07:16.932379 kubelet[2446]: I0710 08:07:16.932224 2446 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 10 08:07:16.932379 kubelet[2446]: I0710 08:07:16.932247 2446 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 10 08:07:16.932379 kubelet[2446]: I0710 08:07:16.932289 2446 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 08:07:16.936491 kubelet[2446]: I0710 08:07:16.936365 2446 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 10 08:07:16.938129 kubelet[2446]: I0710 08:07:16.938111 2446 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 10 08:07:16.938329 kubelet[2446]: I0710 08:07:16.938316 2446 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 10 08:07:16.938464 kubelet[2446]: I0710 08:07:16.938449 2446 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 10 08:07:16.938605 kubelet[2446]: I0710 08:07:16.938591 2446 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 10 08:07:16.938970 kubelet[2446]: E0710 08:07:16.938749 2446 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 10 08:07:16.938970 kubelet[2446]: I0710 08:07:16.938504 2446 policy_none.go:49] "None policy: Start"
Jul 10 08:07:16.938970 kubelet[2446]: I0710 08:07:16.938848 2446 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 10 08:07:16.938970 kubelet[2446]: I0710 08:07:16.938906 2446 state_mem.go:35] "Initializing new in-memory state store"
Jul 10 08:07:16.945078 kubelet[2446]: W0710 08:07:16.945038 2446 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.5:6443: connect: connection refused
Jul 10 08:07:16.945258 kubelet[2446]: E0710 08:07:16.945237 2446 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.5:6443: connect: connection refused" logger="UnhandledError"
Jul 10 08:07:16.954839 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 10 08:07:16.968821 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 10 08:07:16.972765 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 10 08:07:16.980723 kubelet[2446]: E0710 08:07:16.980670 2446 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4391-0-0-n-29a01ddc69.novalocal\" not found"
Jul 10 08:07:16.981052 kubelet[2446]: I0710 08:07:16.980940 2446 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 10 08:07:16.981221 kubelet[2446]: I0710 08:07:16.981201 2446 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 10 08:07:16.981372 kubelet[2446]: I0710 08:07:16.981219 2446 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 10 08:07:16.982583 kubelet[2446]: I0710 08:07:16.982393 2446 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 10 08:07:16.984621 kubelet[2446]: E0710 08:07:16.984591 2446 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 10 08:07:16.984817 kubelet[2446]: E0710 08:07:16.984695 2446 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4391-0-0-n-29a01ddc69.novalocal\" not found"
Jul 10 08:07:17.085728 systemd[1]: Created slice kubepods-burstable-pod6bd47e81634a1fad90cea695d58949a9.slice - libcontainer container kubepods-burstable-pod6bd47e81634a1fad90cea695d58949a9.slice.
Jul 10 08:07:17.095365 kubelet[2446]: I0710 08:07:17.094268 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6bd47e81634a1fad90cea695d58949a9-ca-certs\") pod \"kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal\" (UID: \"6bd47e81634a1fad90cea695d58949a9\") " pod="kube-system/kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:17.095365 kubelet[2446]: I0710 08:07:17.094651 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6bd47e81634a1fad90cea695d58949a9-k8s-certs\") pod \"kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal\" (UID: \"6bd47e81634a1fad90cea695d58949a9\") " pod="kube-system/kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:17.095365 kubelet[2446]: I0710 08:07:17.094915 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6bd47e81634a1fad90cea695d58949a9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal\" (UID: \"6bd47e81634a1fad90cea695d58949a9\") " pod="kube-system/kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:17.099076 kubelet[2446]: I0710 08:07:17.095438 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38962031e0206f3ff0de22fa27483fe0-ca-certs\") pod \"kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal\" (UID: \"38962031e0206f3ff0de22fa27483fe0\") " pod="kube-system/kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:17.099076 kubelet[2446]: I0710 08:07:17.095619 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/38962031e0206f3ff0de22fa27483fe0-flexvolume-dir\") pod \"kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal\" (UID: \"38962031e0206f3ff0de22fa27483fe0\") " pod="kube-system/kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:17.099076 kubelet[2446]: I0710 08:07:17.095743 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38962031e0206f3ff0de22fa27483fe0-k8s-certs\") pod \"kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal\" (UID: \"38962031e0206f3ff0de22fa27483fe0\") " pod="kube-system/kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:17.099076 kubelet[2446]: E0710 08:07:17.096763 2446 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4391-0-0-n-29a01ddc69.novalocal?timeout=10s\": dial tcp 172.24.4.5:6443: connect: connection refused" interval="400ms"
Jul 10 08:07:17.101250 kubelet[2446]: I0710 08:07:17.101199 2446 kubelet_node_status.go:75] "Attempting to register node" node="ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:17.106384 kubelet[2446]: E0710 08:07:17.106226 2446 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.5:6443/api/v1/nodes\": dial tcp 172.24.4.5:6443: connect: connection refused" node="ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:17.114817 kubelet[2446]: E0710 08:07:17.114737 2446 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4391-0-0-n-29a01ddc69.novalocal\" not found" node="ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:17.122198 systemd[1]: Created slice kubepods-burstable-pod38962031e0206f3ff0de22fa27483fe0.slice - libcontainer container kubepods-burstable-pod38962031e0206f3ff0de22fa27483fe0.slice.
Jul 10 08:07:17.133034 kubelet[2446]: E0710 08:07:17.131384 2446 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4391-0-0-n-29a01ddc69.novalocal\" not found" node="ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:17.141311 systemd[1]: Created slice kubepods-burstable-pod8e6a146caca41331ef6aa6523967fb66.slice - libcontainer container kubepods-burstable-pod8e6a146caca41331ef6aa6523967fb66.slice.
Jul 10 08:07:17.149049 kubelet[2446]: E0710 08:07:17.148914 2446 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4391-0-0-n-29a01ddc69.novalocal\" not found" node="ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:17.196764 kubelet[2446]: I0710 08:07:17.196634 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8e6a146caca41331ef6aa6523967fb66-kubeconfig\") pod \"kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal\" (UID: \"8e6a146caca41331ef6aa6523967fb66\") " pod="kube-system/kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:17.197256 kubelet[2446]: I0710 08:07:17.196849 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/38962031e0206f3ff0de22fa27483fe0-kubeconfig\") pod \"kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal\" (UID: \"38962031e0206f3ff0de22fa27483fe0\") " pod="kube-system/kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:17.197256 kubelet[2446]: I0710 08:07:17.196913 2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38962031e0206f3ff0de22fa27483fe0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal\" (UID: \"38962031e0206f3ff0de22fa27483fe0\") " pod="kube-system/kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:17.283834 kubelet[2446]: E0710 08:07:17.243457 2446 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.5:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.5:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4391-0-0-n-29a01ddc69.novalocal.1850d559d5f360b0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4391-0-0-n-29a01ddc69.novalocal,UID:ci-4391-0-0-n-29a01ddc69.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4391-0-0-n-29a01ddc69.novalocal,},FirstTimestamp:2025-07-10 08:07:16.865425584 +0000 UTC m=+0.972004013,LastTimestamp:2025-07-10 08:07:16.865425584 +0000 UTC m=+0.972004013,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4391-0-0-n-29a01ddc69.novalocal,}"
Jul 10 08:07:17.313028 kubelet[2446]: I0710 08:07:17.312472 2446 kubelet_node_status.go:75] "Attempting to register node" node="ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:17.313721 kubelet[2446]: E0710 08:07:17.313661 2446 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.5:6443/api/v1/nodes\": dial tcp 172.24.4.5:6443: connect: connection refused" node="ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:17.419780 containerd[1541]: time="2025-07-10T08:07:17.419349414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal,Uid:6bd47e81634a1fad90cea695d58949a9,Namespace:kube-system,Attempt:0,}"
Jul 10 08:07:17.434863 containerd[1541]: time="2025-07-10T08:07:17.434732521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal,Uid:38962031e0206f3ff0de22fa27483fe0,Namespace:kube-system,Attempt:0,}"
Jul 10 08:07:17.451985 containerd[1541]: time="2025-07-10T08:07:17.451851504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal,Uid:8e6a146caca41331ef6aa6523967fb66,Namespace:kube-system,Attempt:0,}"
Jul 10 08:07:17.500213 kubelet[2446]: E0710 08:07:17.500115 2446 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4391-0-0-n-29a01ddc69.novalocal?timeout=10s\": dial tcp 172.24.4.5:6443: connect: connection refused" interval="800ms"
Jul 10 08:07:17.569985 containerd[1541]: time="2025-07-10T08:07:17.569869825Z" level=info msg="connecting to shim 7175bb657a9649d7d1c07815126ea11f4435398e104f7237830d3a64321d9003" address="unix:///run/containerd/s/f322bb0bc972f8b303df9afbe86503c33323b5ab93c82cbf4b1822450b40898b" namespace=k8s.io protocol=ttrpc version=3
Jul 10 08:07:17.579871 containerd[1541]: time="2025-07-10T08:07:17.579807584Z" level=info msg="connecting to shim b3f94042d4bdb0254aafe8abfac01c5b5c963cbb44a5244334eb9404284dd8a2" address="unix:///run/containerd/s/30c72d7507561355487f6ee5d36c7fe4d7d1edc1dc1abfe41203881c95e15e70" namespace=k8s.io protocol=ttrpc version=3
Jul 10 08:07:17.580972 containerd[1541]: time="2025-07-10T08:07:17.580545447Z" level=info msg="connecting to shim f42b79f77704a99f978aaf4a6f08c28ca92ac2d532a0ba009eb686dcd899def2" address="unix:///run/containerd/s/43474bc45c8b9396187ac29754ef5b498c52f78fc73669c2a63aa40d005548c4" namespace=k8s.io protocol=ttrpc version=3
Jul 10 08:07:17.713144 systemd[1]: Started cri-containerd-b3f94042d4bdb0254aafe8abfac01c5b5c963cbb44a5244334eb9404284dd8a2.scope - libcontainer container b3f94042d4bdb0254aafe8abfac01c5b5c963cbb44a5244334eb9404284dd8a2.
Jul 10 08:07:17.721605 systemd[1]: Started cri-containerd-7175bb657a9649d7d1c07815126ea11f4435398e104f7237830d3a64321d9003.scope - libcontainer container 7175bb657a9649d7d1c07815126ea11f4435398e104f7237830d3a64321d9003.
Jul 10 08:07:17.726726 kubelet[2446]: I0710 08:07:17.726643 2446 kubelet_node_status.go:75] "Attempting to register node" node="ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:17.727224 kubelet[2446]: E0710 08:07:17.727172 2446 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.5:6443/api/v1/nodes\": dial tcp 172.24.4.5:6443: connect: connection refused" node="ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:17.750297 systemd[1]: Started cri-containerd-f42b79f77704a99f978aaf4a6f08c28ca92ac2d532a0ba009eb686dcd899def2.scope - libcontainer container f42b79f77704a99f978aaf4a6f08c28ca92ac2d532a0ba009eb686dcd899def2.
Jul 10 08:07:17.833507 containerd[1541]: time="2025-07-10T08:07:17.833408686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal,Uid:6bd47e81634a1fad90cea695d58949a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"7175bb657a9649d7d1c07815126ea11f4435398e104f7237830d3a64321d9003\""
Jul 10 08:07:17.842010 containerd[1541]: time="2025-07-10T08:07:17.840851726Z" level=info msg="CreateContainer within sandbox \"7175bb657a9649d7d1c07815126ea11f4435398e104f7237830d3a64321d9003\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 10 08:07:17.842432 containerd[1541]: time="2025-07-10T08:07:17.842392254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal,Uid:8e6a146caca41331ef6aa6523967fb66,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3f94042d4bdb0254aafe8abfac01c5b5c963cbb44a5244334eb9404284dd8a2\""
Jul 10 08:07:17.846264 containerd[1541]: time="2025-07-10T08:07:17.846156805Z" level=info msg="CreateContainer within sandbox \"b3f94042d4bdb0254aafe8abfac01c5b5c963cbb44a5244334eb9404284dd8a2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 10 08:07:17.859349 containerd[1541]: time="2025-07-10T08:07:17.859281588Z" level=info msg="Container 212c04093bf77fd10374a61fe14da2678d3192e48d7242d1c30fe4f9483256d2: CDI devices from CRI Config.CDIDevices: []"
Jul 10 08:07:17.859961 containerd[1541]: time="2025-07-10T08:07:17.859853963Z" level=info msg="Container 898485fa8ca3aad154e7c61a92cbbee54884545bcd87d8bf1cb66cb3790f6c31: CDI devices from CRI Config.CDIDevices: []"
Jul 10 08:07:17.884057 containerd[1541]: time="2025-07-10T08:07:17.883972223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal,Uid:38962031e0206f3ff0de22fa27483fe0,Namespace:kube-system,Attempt:0,} returns sandbox id \"f42b79f77704a99f978aaf4a6f08c28ca92ac2d532a0ba009eb686dcd899def2\""
Jul 10 08:07:17.888551 containerd[1541]: time="2025-07-10T08:07:17.887665847Z" level=info msg="CreateContainer within sandbox \"f42b79f77704a99f978aaf4a6f08c28ca92ac2d532a0ba009eb686dcd899def2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 10 08:07:17.890938 containerd[1541]: time="2025-07-10T08:07:17.890896425Z" level=info msg="CreateContainer within sandbox \"7175bb657a9649d7d1c07815126ea11f4435398e104f7237830d3a64321d9003\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"212c04093bf77fd10374a61fe14da2678d3192e48d7242d1c30fe4f9483256d2\""
Jul 10 08:07:17.891863 containerd[1541]: time="2025-07-10T08:07:17.891807094Z" level=info msg="StartContainer for \"212c04093bf77fd10374a61fe14da2678d3192e48d7242d1c30fe4f9483256d2\""
Jul 10 08:07:17.892281 containerd[1541]: time="2025-07-10T08:07:17.892036371Z" level=info msg="CreateContainer within sandbox \"b3f94042d4bdb0254aafe8abfac01c5b5c963cbb44a5244334eb9404284dd8a2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"898485fa8ca3aad154e7c61a92cbbee54884545bcd87d8bf1cb66cb3790f6c31\""
Jul 10 08:07:17.892445 containerd[1541]: time="2025-07-10T08:07:17.892403934Z" level=info msg="StartContainer for \"898485fa8ca3aad154e7c61a92cbbee54884545bcd87d8bf1cb66cb3790f6c31\""
Jul 10 08:07:17.895082 containerd[1541]: time="2025-07-10T08:07:17.895044336Z" level=info msg="connecting to shim 212c04093bf77fd10374a61fe14da2678d3192e48d7242d1c30fe4f9483256d2" address="unix:///run/containerd/s/f322bb0bc972f8b303df9afbe86503c33323b5ab93c82cbf4b1822450b40898b" protocol=ttrpc version=3
Jul 10 08:07:17.895281 containerd[1541]: time="2025-07-10T08:07:17.895242602Z" level=info msg="connecting to shim 898485fa8ca3aad154e7c61a92cbbee54884545bcd87d8bf1cb66cb3790f6c31" address="unix:///run/containerd/s/30c72d7507561355487f6ee5d36c7fe4d7d1edc1dc1abfe41203881c95e15e70" protocol=ttrpc version=3
Jul 10 08:07:17.910851 containerd[1541]: time="2025-07-10T08:07:17.910762820Z" level=info msg="Container 06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c: CDI devices from CRI Config.CDIDevices: []"
Jul 10 08:07:17.922266 systemd[1]: Started cri-containerd-898485fa8ca3aad154e7c61a92cbbee54884545bcd87d8bf1cb66cb3790f6c31.scope - libcontainer container 898485fa8ca3aad154e7c61a92cbbee54884545bcd87d8bf1cb66cb3790f6c31.
Jul 10 08:07:17.931022 containerd[1541]: time="2025-07-10T08:07:17.930982575Z" level=info msg="CreateContainer within sandbox \"f42b79f77704a99f978aaf4a6f08c28ca92ac2d532a0ba009eb686dcd899def2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c\""
Jul 10 08:07:17.932502 containerd[1541]: time="2025-07-10T08:07:17.932463213Z" level=info msg="StartContainer for \"06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c\""
Jul 10 08:07:17.933158 systemd[1]: Started cri-containerd-212c04093bf77fd10374a61fe14da2678d3192e48d7242d1c30fe4f9483256d2.scope - libcontainer container 212c04093bf77fd10374a61fe14da2678d3192e48d7242d1c30fe4f9483256d2.
Jul 10 08:07:17.939629 containerd[1541]: time="2025-07-10T08:07:17.939269106Z" level=info msg="connecting to shim 06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c" address="unix:///run/containerd/s/43474bc45c8b9396187ac29754ef5b498c52f78fc73669c2a63aa40d005548c4" protocol=ttrpc version=3
Jul 10 08:07:17.990094 systemd[1]: Started cri-containerd-06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c.scope - libcontainer container 06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c.
Jul 10 08:07:18.052770 kubelet[2446]: W0710 08:07:18.052635 2446 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.5:6443: connect: connection refused
Jul 10 08:07:18.052770 kubelet[2446]: E0710 08:07:18.052724 2446 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.5:6443: connect: connection refused" logger="UnhandledError"
Jul 10 08:07:18.056750 containerd[1541]: time="2025-07-10T08:07:18.056701226Z" level=info msg="StartContainer for \"898485fa8ca3aad154e7c61a92cbbee54884545bcd87d8bf1cb66cb3790f6c31\" returns successfully"
Jul 10 08:07:18.074187 containerd[1541]: time="2025-07-10T08:07:18.074128123Z" level=info msg="StartContainer for \"212c04093bf77fd10374a61fe14da2678d3192e48d7242d1c30fe4f9483256d2\" returns successfully"
Jul 10 08:07:18.096019 kubelet[2446]: W0710 08:07:18.095161 2446 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.5:6443: connect: connection refused
Jul 10 08:07:18.096019 kubelet[2446]: E0710 08:07:18.095247 2446 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.5:6443: connect: connection refused" logger="UnhandledError"
Jul 10 08:07:18.139895 containerd[1541]: time="2025-07-10T08:07:18.139672598Z" level=info msg="StartContainer for \"06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c\" returns successfully"
Jul 10 08:07:18.530988 kubelet[2446]: I0710 08:07:18.530247 2446 kubelet_node_status.go:75] "Attempting to register node" node="ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:18.981126 kubelet[2446]: E0710 08:07:18.981075 2446 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4391-0-0-n-29a01ddc69.novalocal\" not found" node="ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:18.986249 kubelet[2446]: E0710 08:07:18.986225 2446 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4391-0-0-n-29a01ddc69.novalocal\" not found" node="ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:18.989151 kubelet[2446]: E0710 08:07:18.989117 2446 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4391-0-0-n-29a01ddc69.novalocal\" not found" node="ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:19.998021 kubelet[2446]: E0710 08:07:19.996842 2446 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4391-0-0-n-29a01ddc69.novalocal\" not found" node="ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:19.998021 kubelet[2446]: E0710 08:07:19.997109 2446 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4391-0-0-n-29a01ddc69.novalocal\" not found" node="ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:19.998021 kubelet[2446]: E0710 08:07:19.996846 2446 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4391-0-0-n-29a01ddc69.novalocal\" not found" node="ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:21.003872 kubelet[2446]: E0710 08:07:21.003797 2446 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4391-0-0-n-29a01ddc69.novalocal\" not found" node="ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:21.005380 kubelet[2446]: E0710 08:07:21.005359 2446 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4391-0-0-n-29a01ddc69.novalocal\" not found" node="ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:21.055317 kubelet[2446]: E0710 08:07:21.055257 2446 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4391-0-0-n-29a01ddc69.novalocal\" not found" node="ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:21.157173 kubelet[2446]: I0710 08:07:21.157121 2446 kubelet_node_status.go:78] "Successfully registered node" node="ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:21.157550 kubelet[2446]: E0710 08:07:21.157412 2446 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4391-0-0-n-29a01ddc69.novalocal\": node \"ci-4391-0-0-n-29a01ddc69.novalocal\" not found"
Jul 10 08:07:21.181397 kubelet[2446]: I0710 08:07:21.181351 2446 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:21.194900 kubelet[2446]: E0710 08:07:21.194547 2446 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:21.194900 kubelet[2446]: I0710 08:07:21.194607 2446 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:21.201103 kubelet[2446]: E0710 08:07:21.201060 2446 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:21.201103 kubelet[2446]: I0710 08:07:21.201096 2446 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:21.203460 kubelet[2446]: E0710 08:07:21.203378 2446 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:21.840222 kubelet[2446]: I0710 08:07:21.840144 2446 apiserver.go:52] "Watching apiserver"
Jul 10 08:07:21.882119 kubelet[2446]: I0710 08:07:21.881932 2446 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 10 08:07:22.005405 kubelet[2446]: I0710 08:07:22.005348 2446 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:22.022415 kubelet[2446]: W0710 08:07:22.022060 2446 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 10 08:07:23.990572 kubelet[2446]: I0710 08:07:23.990472 2446 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:24.000042 systemd[1]: Reload requested from client PID 2715 ('systemctl') (unit session-11.scope)...
Jul 10 08:07:24.001158 systemd[1]: Reloading...
Jul 10 08:07:24.008322 kubelet[2446]: W0710 08:07:24.006582 2446 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 10 08:07:24.179559 zram_generator::config[2757]: No configuration found.
Jul 10 08:07:24.407227 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 08:07:24.598895 systemd[1]: Reloading finished in 595 ms.
Jul 10 08:07:24.633390 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 08:07:24.653702 systemd[1]: kubelet.service: Deactivated successfully.
Jul 10 08:07:24.654278 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 08:07:24.654560 systemd[1]: kubelet.service: Consumed 2.000s CPU time, 131M memory peak.
Jul 10 08:07:24.657888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 08:07:25.124875 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 08:07:25.137438 (kubelet)[2824]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 10 08:07:25.278995 kubelet[2824]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 08:07:25.278995 kubelet[2824]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 10 08:07:25.278995 kubelet[2824]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 08:07:25.278995 kubelet[2824]: I0710 08:07:25.278541 2824 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 10 08:07:25.304383 kubelet[2824]: I0710 08:07:25.304205 2824 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 10 08:07:25.304383 kubelet[2824]: I0710 08:07:25.304334 2824 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 10 08:07:25.306346 kubelet[2824]: I0710 08:07:25.306217 2824 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 10 08:07:25.310205 kubelet[2824]: I0710 08:07:25.310170 2824 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 10 08:07:25.318982 kubelet[2824]: I0710 08:07:25.318026 2824 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 10 08:07:25.340260 kubelet[2824]: I0710 08:07:25.340228 2824 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 10 08:07:25.348542 kubelet[2824]: I0710 08:07:25.348509 2824 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 10 08:07:25.349033 kubelet[2824]: I0710 08:07:25.348988 2824 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 10 08:07:25.349462 kubelet[2824]: I0710 08:07:25.349133 2824 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4391-0-0-n-29a01ddc69.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 10 08:07:25.349839 kubelet[2824]: I0710 08:07:25.349825 2824 topology_manager.go:138] "Creating topology manager with none policy"
Jul 10 08:07:25.349907 kubelet[2824]: I0710 08:07:25.349897 2824 container_manager_linux.go:304] "Creating device plugin manager"
Jul 10 08:07:25.350112 kubelet[2824]: I0710 08:07:25.350098 2824 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 08:07:25.350408 kubelet[2824]: I0710 08:07:25.350396 2824 kubelet.go:446] "Attempting to sync node with API server"
Jul 10 08:07:25.351211 kubelet[2824]: I0710 08:07:25.351193 2824 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 10 08:07:25.351317 kubelet[2824]: I0710 08:07:25.351305 2824 kubelet.go:352] "Adding apiserver pod source"
Jul 10 08:07:25.351918 kubelet[2824]: I0710 08:07:25.351413 2824 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 10 08:07:25.355226 kubelet[2824]: I0710 08:07:25.355200 2824 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Jul 10 08:07:25.355722 kubelet[2824]: I0710 08:07:25.355695 2824 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 10 08:07:25.356288 kubelet[2824]: I0710 08:07:25.356262 2824 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 10 08:07:25.356334 kubelet[2824]: I0710 08:07:25.356306 2824 server.go:1287] "Started kubelet"
Jul 10 08:07:25.360743 kubelet[2824]: I0710 08:07:25.360720 2824 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 10 08:07:25.375643 kubelet[2824]: I0710 08:07:25.375453 2824 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 10 08:07:25.386593 kubelet[2824]: I0710 08:07:25.386531 2824 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 10 08:07:25.387397 kubelet[2824]: E0710 08:07:25.387331 2824 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4391-0-0-n-29a01ddc69.novalocal\" not found"
Jul 10 08:07:25.387397 kubelet[2824]: I0710 08:07:25.387362 2824 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 10 08:07:25.392803 kubelet[2824]: I0710 08:07:25.392723 2824 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 10 08:07:25.392803 kubelet[2824]: I0710 08:07:25.381757 2824 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 10 08:07:25.394921 kubelet[2824]: I0710 08:07:25.394842 2824 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 10 08:07:25.396891 kubelet[2824]: I0710 08:07:25.396398 2824 reconciler.go:26] "Reconciler: start to sync state"
Jul 10 08:07:25.404629 kubelet[2824]: I0710 08:07:25.404548 2824 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 10 08:07:25.409160 kubelet[2824]: I0710 08:07:25.408767 2824 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 10 08:07:25.409160 kubelet[2824]: I0710 08:07:25.408850 2824 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 10 08:07:25.409160 kubelet[2824]: I0710 08:07:25.408880 2824 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 10 08:07:25.409160 kubelet[2824]: I0710 08:07:25.408888 2824 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 10 08:07:25.411058 kubelet[2824]: E0710 08:07:25.408941 2824 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 10 08:07:25.411334 kubelet[2824]: I0710 08:07:25.411254 2824 server.go:479] "Adding debug handlers to kubelet server"
Jul 10 08:07:25.420228 kubelet[2824]: I0710 08:07:25.420130 2824 factory.go:221] Registration of the systemd container factory successfully
Jul 10 08:07:25.424395 kubelet[2824]: I0710 08:07:25.423668 2824 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 10 08:07:25.424395 kubelet[2824]: E0710 08:07:25.421059 2824 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 10 08:07:25.430356 kubelet[2824]: I0710 08:07:25.427857 2824 factory.go:221] Registration of the containerd container factory successfully
Jul 10 08:07:25.509940 kubelet[2824]: I0710 08:07:25.509881 2824 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 10 08:07:25.510209 kubelet[2824]: I0710 08:07:25.510186 2824 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 10 08:07:25.510321 kubelet[2824]: I0710 08:07:25.510309 2824 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 08:07:25.511674 kubelet[2824]: I0710 08:07:25.511645 2824 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 10 08:07:25.511809 kubelet[2824]: I0710 08:07:25.511781 2824 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 10 08:07:25.511981 kubelet[2824]: E0710 08:07:25.510942 2824 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 10 08:07:25.512886 kubelet[2824]: I0710 08:07:25.512848 2824 policy_none.go:49] "None policy: Start"
Jul 10 08:07:25.513061 kubelet[2824]: I0710 08:07:25.513045 2824 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 10 08:07:25.513193 kubelet[2824]: I0710 08:07:25.513182 2824 state_mem.go:35] "Initializing new in-memory state store"
Jul 10 08:07:25.514200 kubelet[2824]: I0710 08:07:25.514185 2824 state_mem.go:75] "Updated machine memory state"
Jul 10 08:07:25.522576 kubelet[2824]: I0710 08:07:25.522536 2824 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 10 08:07:25.522791 kubelet[2824]: I0710 08:07:25.522765 2824 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 10 08:07:25.522827 kubelet[2824]: I0710 08:07:25.522783 2824 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 10 08:07:25.525478 kubelet[2824]: I0710 08:07:25.524482 2824 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 10 08:07:25.526197 kubelet[2824]: E0710 08:07:25.526165 2824 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 10 08:07:25.650241 kubelet[2824]: I0710 08:07:25.649507 2824 kubelet_node_status.go:75] "Attempting to register node" node="ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:25.687370 kubelet[2824]: I0710 08:07:25.687297 2824 kubelet_node_status.go:124] "Node was previously registered" node="ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:25.687983 kubelet[2824]: I0710 08:07:25.687935 2824 kubelet_node_status.go:78] "Successfully registered node" node="ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:25.713897 kubelet[2824]: I0710 08:07:25.713844 2824 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:25.715049 kubelet[2824]: I0710 08:07:25.714242 2824 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:25.717162 kubelet[2824]: I0710 08:07:25.716048 2824 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:25.725211 kubelet[2824]: W0710 08:07:25.724258 2824 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 10 08:07:25.725211 kubelet[2824]: E0710 08:07:25.724392 2824 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal"
Jul 10 08:07:25.731654 kubelet[2824]: W0710 08:07:25.731621 2824 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 10 08:07:25.731813 kubelet[2824]: E0710 08:07:25.731690 2824 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:07:25.731813 kubelet[2824]: W0710 08:07:25.731770 2824 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 10 08:07:25.798348 kubelet[2824]: I0710 08:07:25.798278 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6bd47e81634a1fad90cea695d58949a9-ca-certs\") pod \"kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal\" (UID: \"6bd47e81634a1fad90cea695d58949a9\") " pod="kube-system/kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:07:25.798348 kubelet[2824]: I0710 08:07:25.798333 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/38962031e0206f3ff0de22fa27483fe0-flexvolume-dir\") pod \"kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal\" (UID: \"38962031e0206f3ff0de22fa27483fe0\") " pod="kube-system/kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:07:25.798348 kubelet[2824]: I0710 08:07:25.798360 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/38962031e0206f3ff0de22fa27483fe0-kubeconfig\") pod \"kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal\" (UID: \"38962031e0206f3ff0de22fa27483fe0\") " pod="kube-system/kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:07:25.798619 kubelet[2824]: I0710 08:07:25.798380 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38962031e0206f3ff0de22fa27483fe0-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal\" (UID: \"38962031e0206f3ff0de22fa27483fe0\") " pod="kube-system/kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:07:25.798619 kubelet[2824]: I0710 08:07:25.798410 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8e6a146caca41331ef6aa6523967fb66-kubeconfig\") pod \"kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal\" (UID: \"8e6a146caca41331ef6aa6523967fb66\") " pod="kube-system/kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:07:25.798619 kubelet[2824]: I0710 08:07:25.798434 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38962031e0206f3ff0de22fa27483fe0-ca-certs\") pod \"kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal\" (UID: \"38962031e0206f3ff0de22fa27483fe0\") " pod="kube-system/kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:07:25.798619 kubelet[2824]: I0710 08:07:25.798454 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38962031e0206f3ff0de22fa27483fe0-k8s-certs\") pod \"kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal\" (UID: \"38962031e0206f3ff0de22fa27483fe0\") " pod="kube-system/kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:07:25.798747 kubelet[2824]: I0710 08:07:25.798472 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6bd47e81634a1fad90cea695d58949a9-k8s-certs\") pod \"kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal\" (UID: \"6bd47e81634a1fad90cea695d58949a9\") " pod="kube-system/kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:07:25.798747 kubelet[2824]: I0710 
08:07:25.798502 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6bd47e81634a1fad90cea695d58949a9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal\" (UID: \"6bd47e81634a1fad90cea695d58949a9\") " pod="kube-system/kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:07:26.375626 kubelet[2824]: I0710 08:07:26.374990 2824 apiserver.go:52] "Watching apiserver" Jul 10 08:07:26.398182 kubelet[2824]: I0710 08:07:26.398020 2824 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 08:07:26.468524 kubelet[2824]: I0710 08:07:26.468091 2824 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:07:26.525137 kubelet[2824]: I0710 08:07:26.525026 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal" podStartSLOduration=1.5249855129999998 podStartE2EDuration="1.524985513s" podCreationTimestamp="2025-07-10 08:07:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 08:07:26.524713684 +0000 UTC m=+1.375999098" watchObservedRunningTime="2025-07-10 08:07:26.524985513 +0000 UTC m=+1.376270917" Jul 10 08:07:26.528372 kubelet[2824]: W0710 08:07:26.528070 2824 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 10 08:07:26.528372 kubelet[2824]: E0710 08:07:26.528153 2824 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:07:26.564467 kubelet[2824]: I0710 
08:07:26.562202 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal" podStartSLOduration=3.562182328 podStartE2EDuration="3.562182328s" podCreationTimestamp="2025-07-10 08:07:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 08:07:26.544232167 +0000 UTC m=+1.395517571" watchObservedRunningTime="2025-07-10 08:07:26.562182328 +0000 UTC m=+1.413467742" Jul 10 08:07:26.577923 kubelet[2824]: I0710 08:07:26.577561 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal" podStartSLOduration=4.577543389 podStartE2EDuration="4.577543389s" podCreationTimestamp="2025-07-10 08:07:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 08:07:26.564835052 +0000 UTC m=+1.416120456" watchObservedRunningTime="2025-07-10 08:07:26.577543389 +0000 UTC m=+1.428828793" Jul 10 08:07:28.874334 kubelet[2824]: I0710 08:07:28.874277 2824 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 08:07:28.875453 containerd[1541]: time="2025-07-10T08:07:28.875376526Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 08:07:28.877413 kubelet[2824]: I0710 08:07:28.877102 2824 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 08:07:29.765939 systemd[1]: Created slice kubepods-besteffort-pod4c179f34_42ef_4ce7_80a5_4da0bd5bda90.slice - libcontainer container kubepods-besteffort-pod4c179f34_42ef_4ce7_80a5_4da0bd5bda90.slice. 
Jul 10 08:07:29.848594 kubelet[2824]: I0710 08:07:29.848540 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c179f34-42ef-4ce7-80a5-4da0bd5bda90-xtables-lock\") pod \"kube-proxy-rvkdk\" (UID: \"4c179f34-42ef-4ce7-80a5-4da0bd5bda90\") " pod="kube-system/kube-proxy-rvkdk"
Jul 10 08:07:29.848769 kubelet[2824]: I0710 08:07:29.848643 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c179f34-42ef-4ce7-80a5-4da0bd5bda90-lib-modules\") pod \"kube-proxy-rvkdk\" (UID: \"4c179f34-42ef-4ce7-80a5-4da0bd5bda90\") " pod="kube-system/kube-proxy-rvkdk"
Jul 10 08:07:29.848769 kubelet[2824]: I0710 08:07:29.848717 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njr45\" (UniqueName: \"kubernetes.io/projected/4c179f34-42ef-4ce7-80a5-4da0bd5bda90-kube-api-access-njr45\") pod \"kube-proxy-rvkdk\" (UID: \"4c179f34-42ef-4ce7-80a5-4da0bd5bda90\") " pod="kube-system/kube-proxy-rvkdk"
Jul 10 08:07:29.848844 kubelet[2824]: I0710 08:07:29.848801 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4c179f34-42ef-4ce7-80a5-4da0bd5bda90-kube-proxy\") pod \"kube-proxy-rvkdk\" (UID: \"4c179f34-42ef-4ce7-80a5-4da0bd5bda90\") " pod="kube-system/kube-proxy-rvkdk"
Jul 10 08:07:30.047691 systemd[1]: Created slice kubepods-besteffort-pod4732e9a2_026f_4c58_a99c_7c0b52405800.slice - libcontainer container kubepods-besteffort-pod4732e9a2_026f_4c58_a99c_7c0b52405800.slice.
Jul 10 08:07:30.051132 kubelet[2824]: I0710 08:07:30.051094 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhqrd\" (UniqueName: \"kubernetes.io/projected/4732e9a2-026f-4c58-a99c-7c0b52405800-kube-api-access-hhqrd\") pod \"tigera-operator-747864d56d-wxpk8\" (UID: \"4732e9a2-026f-4c58-a99c-7c0b52405800\") " pod="tigera-operator/tigera-operator-747864d56d-wxpk8"
Jul 10 08:07:30.051448 kubelet[2824]: I0710 08:07:30.051190 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4732e9a2-026f-4c58-a99c-7c0b52405800-var-lib-calico\") pod \"tigera-operator-747864d56d-wxpk8\" (UID: \"4732e9a2-026f-4c58-a99c-7c0b52405800\") " pod="tigera-operator/tigera-operator-747864d56d-wxpk8"
Jul 10 08:07:30.074941 containerd[1541]: time="2025-07-10T08:07:30.074890753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rvkdk,Uid:4c179f34-42ef-4ce7-80a5-4da0bd5bda90,Namespace:kube-system,Attempt:0,}"
Jul 10 08:07:30.323256 containerd[1541]: time="2025-07-10T08:07:30.319551000Z" level=info msg="connecting to shim 6becf9c2c2fb15f87f7fa26bccb586f0b2a7fc355d7dcd7487f2a78509b3c83c" address="unix:///run/containerd/s/a6cf510c767314b7e079b95d848c09595c56bac79bf8a767a46270d39bd4d23c" namespace=k8s.io protocol=ttrpc version=3
Jul 10 08:07:30.351443 containerd[1541]: time="2025-07-10T08:07:30.351342731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-wxpk8,Uid:4732e9a2-026f-4c58-a99c-7c0b52405800,Namespace:tigera-operator,Attempt:0,}"
Jul 10 08:07:30.389630 containerd[1541]: time="2025-07-10T08:07:30.389261840Z" level=info msg="connecting to shim 83e1964542d4294c46b7b8320377930353bf359abd94ba77da28dbe8cce1e7e6" address="unix:///run/containerd/s/4a621fafdac6b908a1bd19fb006eb1f6a38bed52ae649271397457c076b82963" namespace=k8s.io protocol=ttrpc version=3
Jul 10 08:07:30.413449 systemd[1]: Started cri-containerd-6becf9c2c2fb15f87f7fa26bccb586f0b2a7fc355d7dcd7487f2a78509b3c83c.scope - libcontainer container 6becf9c2c2fb15f87f7fa26bccb586f0b2a7fc355d7dcd7487f2a78509b3c83c.
Jul 10 08:07:30.440097 systemd[1]: Started cri-containerd-83e1964542d4294c46b7b8320377930353bf359abd94ba77da28dbe8cce1e7e6.scope - libcontainer container 83e1964542d4294c46b7b8320377930353bf359abd94ba77da28dbe8cce1e7e6.
Jul 10 08:07:30.487287 containerd[1541]: time="2025-07-10T08:07:30.486995501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rvkdk,Uid:4c179f34-42ef-4ce7-80a5-4da0bd5bda90,Namespace:kube-system,Attempt:0,} returns sandbox id \"6becf9c2c2fb15f87f7fa26bccb586f0b2a7fc355d7dcd7487f2a78509b3c83c\""
Jul 10 08:07:30.495798 containerd[1541]: time="2025-07-10T08:07:30.495532287Z" level=info msg="CreateContainer within sandbox \"6becf9c2c2fb15f87f7fa26bccb586f0b2a7fc355d7dcd7487f2a78509b3c83c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 10 08:07:30.516240 containerd[1541]: time="2025-07-10T08:07:30.516160051Z" level=info msg="Container f4c8cd94cdb0b4b12048ada4c34f9dd3cffc227ab49c00e72f9e6ced04f1d0fe: CDI devices from CRI Config.CDIDevices: []"
Jul 10 08:07:30.529481 containerd[1541]: time="2025-07-10T08:07:30.529424291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-wxpk8,Uid:4732e9a2-026f-4c58-a99c-7c0b52405800,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"83e1964542d4294c46b7b8320377930353bf359abd94ba77da28dbe8cce1e7e6\""
Jul 10 08:07:30.531621 containerd[1541]: time="2025-07-10T08:07:30.531579479Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 10 08:07:30.532105 containerd[1541]: time="2025-07-10T08:07:30.532061262Z" level=info msg="CreateContainer within sandbox \"6becf9c2c2fb15f87f7fa26bccb586f0b2a7fc355d7dcd7487f2a78509b3c83c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f4c8cd94cdb0b4b12048ada4c34f9dd3cffc227ab49c00e72f9e6ced04f1d0fe\""
Jul 10 08:07:30.532846 containerd[1541]: time="2025-07-10T08:07:30.532800672Z" level=info msg="StartContainer for \"f4c8cd94cdb0b4b12048ada4c34f9dd3cffc227ab49c00e72f9e6ced04f1d0fe\""
Jul 10 08:07:30.535380 containerd[1541]: time="2025-07-10T08:07:30.535350471Z" level=info msg="connecting to shim f4c8cd94cdb0b4b12048ada4c34f9dd3cffc227ab49c00e72f9e6ced04f1d0fe" address="unix:///run/containerd/s/a6cf510c767314b7e079b95d848c09595c56bac79bf8a767a46270d39bd4d23c" protocol=ttrpc version=3
Jul 10 08:07:30.561240 systemd[1]: Started cri-containerd-f4c8cd94cdb0b4b12048ada4c34f9dd3cffc227ab49c00e72f9e6ced04f1d0fe.scope - libcontainer container f4c8cd94cdb0b4b12048ada4c34f9dd3cffc227ab49c00e72f9e6ced04f1d0fe.
Jul 10 08:07:30.617833 containerd[1541]: time="2025-07-10T08:07:30.617703595Z" level=info msg="StartContainer for \"f4c8cd94cdb0b4b12048ada4c34f9dd3cffc227ab49c00e72f9e6ced04f1d0fe\" returns successfully"
Jul 10 08:07:32.064192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1976454579.mount: Deactivated successfully.
Jul 10 08:07:33.141631 kubelet[2824]: I0710 08:07:33.141428 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rvkdk" podStartSLOduration=4.141407588 podStartE2EDuration="4.141407588s" podCreationTimestamp="2025-07-10 08:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 08:07:31.536393182 +0000 UTC m=+6.387678646" watchObservedRunningTime="2025-07-10 08:07:33.141407588 +0000 UTC m=+7.992693002"
Jul 10 08:07:33.407684 containerd[1541]: time="2025-07-10T08:07:33.407535897Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:07:33.409380 containerd[1541]: time="2025-07-10T08:07:33.409347666Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 10 08:07:33.411211 containerd[1541]: time="2025-07-10T08:07:33.410596941Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:07:33.420755 containerd[1541]: time="2025-07-10T08:07:33.420668216Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:07:33.421379 containerd[1541]: time="2025-07-10T08:07:33.421326618Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.889688037s"
Jul 10 08:07:33.421522 containerd[1541]: time="2025-07-10T08:07:33.421501106Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 10 08:07:33.427230 containerd[1541]: time="2025-07-10T08:07:33.426166012Z" level=info msg="CreateContainer within sandbox \"83e1964542d4294c46b7b8320377930353bf359abd94ba77da28dbe8cce1e7e6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 10 08:07:33.445736 containerd[1541]: time="2025-07-10T08:07:33.445001921Z" level=info msg="Container d6e4d88eaa32d5e419f4bcf2f7ef79066681308df794e6dd66e3319b794ead02: CDI devices from CRI Config.CDIDevices: []"
Jul 10 08:07:33.459896 containerd[1541]: time="2025-07-10T08:07:33.459822536Z" level=info msg="CreateContainer within sandbox \"83e1964542d4294c46b7b8320377930353bf359abd94ba77da28dbe8cce1e7e6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d6e4d88eaa32d5e419f4bcf2f7ef79066681308df794e6dd66e3319b794ead02\""
Jul 10 08:07:33.460682 containerd[1541]: time="2025-07-10T08:07:33.460641538Z" level=info msg="StartContainer for \"d6e4d88eaa32d5e419f4bcf2f7ef79066681308df794e6dd66e3319b794ead02\""
Jul 10 08:07:33.462263 containerd[1541]: time="2025-07-10T08:07:33.462221231Z" level=info msg="connecting to shim d6e4d88eaa32d5e419f4bcf2f7ef79066681308df794e6dd66e3319b794ead02" address="unix:///run/containerd/s/4a621fafdac6b908a1bd19fb006eb1f6a38bed52ae649271397457c076b82963" protocol=ttrpc version=3
Jul 10 08:07:33.608226 systemd[1]: Started cri-containerd-d6e4d88eaa32d5e419f4bcf2f7ef79066681308df794e6dd66e3319b794ead02.scope - libcontainer container d6e4d88eaa32d5e419f4bcf2f7ef79066681308df794e6dd66e3319b794ead02.
Jul 10 08:07:33.660729 containerd[1541]: time="2025-07-10T08:07:33.660689138Z" level=info msg="StartContainer for \"d6e4d88eaa32d5e419f4bcf2f7ef79066681308df794e6dd66e3319b794ead02\" returns successfully"
Jul 10 08:07:37.793743 kubelet[2824]: I0710 08:07:37.793648 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-wxpk8" podStartSLOduration=5.901367617 podStartE2EDuration="8.793629189s" podCreationTimestamp="2025-07-10 08:07:29 +0000 UTC" firstStartedPulling="2025-07-10 08:07:30.530880353 +0000 UTC m=+5.382165757" lastFinishedPulling="2025-07-10 08:07:33.423141925 +0000 UTC m=+8.274427329" observedRunningTime="2025-07-10 08:07:34.609349761 +0000 UTC m=+9.460635235" watchObservedRunningTime="2025-07-10 08:07:37.793629189 +0000 UTC m=+12.644914603"
Jul 10 08:07:37.880662 systemd[1]: cri-containerd-d6e4d88eaa32d5e419f4bcf2f7ef79066681308df794e6dd66e3319b794ead02.scope: Deactivated successfully.
Jul 10 08:07:37.889008 containerd[1541]: time="2025-07-10T08:07:37.888643061Z" level=info msg="received exit event container_id:\"d6e4d88eaa32d5e419f4bcf2f7ef79066681308df794e6dd66e3319b794ead02\" id:\"d6e4d88eaa32d5e419f4bcf2f7ef79066681308df794e6dd66e3319b794ead02\" pid:3152 exit_status:1 exited_at:{seconds:1752134857 nanos:887992956}"
Jul 10 08:07:37.890131 containerd[1541]: time="2025-07-10T08:07:37.890108793Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6e4d88eaa32d5e419f4bcf2f7ef79066681308df794e6dd66e3319b794ead02\" id:\"d6e4d88eaa32d5e419f4bcf2f7ef79066681308df794e6dd66e3319b794ead02\" pid:3152 exit_status:1 exited_at:{seconds:1752134857 nanos:887992956}"
Jul 10 08:07:37.927047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6e4d88eaa32d5e419f4bcf2f7ef79066681308df794e6dd66e3319b794ead02-rootfs.mount: Deactivated successfully.
Jul 10 08:07:38.606636 kubelet[2824]: I0710 08:07:38.606586 2824 scope.go:117] "RemoveContainer" containerID="d6e4d88eaa32d5e419f4bcf2f7ef79066681308df794e6dd66e3319b794ead02"
Jul 10 08:07:38.611280 containerd[1541]: time="2025-07-10T08:07:38.611224083Z" level=info msg="CreateContainer within sandbox \"83e1964542d4294c46b7b8320377930353bf359abd94ba77da28dbe8cce1e7e6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 10 08:07:38.637995 containerd[1541]: time="2025-07-10T08:07:38.636518540Z" level=info msg="Container 493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36: CDI devices from CRI Config.CDIDevices: []"
Jul 10 08:07:38.652879 containerd[1541]: time="2025-07-10T08:07:38.652136931Z" level=info msg="CreateContainer within sandbox \"83e1964542d4294c46b7b8320377930353bf359abd94ba77da28dbe8cce1e7e6\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36\""
Jul 10 08:07:38.654911 containerd[1541]: time="2025-07-10T08:07:38.654874678Z" level=info msg="StartContainer for \"493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36\""
Jul 10 08:07:38.656968 containerd[1541]: time="2025-07-10T08:07:38.656856150Z" level=info msg="connecting to shim 493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36" address="unix:///run/containerd/s/4a621fafdac6b908a1bd19fb006eb1f6a38bed52ae649271397457c076b82963" protocol=ttrpc version=3
Jul 10 08:07:38.686171 systemd[1]: Started cri-containerd-493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36.scope - libcontainer container 493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36.
Jul 10 08:07:38.764075 containerd[1541]: time="2025-07-10T08:07:38.764023699Z" level=info msg="StartContainer for \"493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36\" returns successfully"
Jul 10 08:07:41.822458 sudo[1853]: pam_unix(sudo:session): session closed for user root
Jul 10 08:07:42.017033 sshd[1852]: Connection closed by 172.24.4.1 port 34068
Jul 10 08:07:42.021046 sshd-session[1835]: pam_unix(sshd:session): session closed for user core
Jul 10 08:07:42.033277 systemd[1]: sshd@8-172.24.4.5:22-172.24.4.1:34068.service: Deactivated successfully.
Jul 10 08:07:42.039368 systemd[1]: session-11.scope: Deactivated successfully.
Jul 10 08:07:42.041168 systemd[1]: session-11.scope: Consumed 8.877s CPU time, 230.5M memory peak.
Jul 10 08:07:42.050390 systemd-logind[1499]: Session 11 logged out. Waiting for processes to exit.
Jul 10 08:07:42.055831 systemd-logind[1499]: Removed session 11.
Jul 10 08:07:47.986123 systemd[1]: Created slice kubepods-besteffort-pod6f69ca0c_234a_45db_9722_86ec3884fda0.slice - libcontainer container kubepods-besteffort-pod6f69ca0c_234a_45db_9722_86ec3884fda0.slice.
Jul 10 08:07:47.997664 kubelet[2824]: I0710 08:07:47.997610 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr6wp\" (UniqueName: \"kubernetes.io/projected/6f69ca0c-234a-45db-9722-86ec3884fda0-kube-api-access-vr6wp\") pod \"calico-typha-777fb957cb-mm5jq\" (UID: \"6f69ca0c-234a-45db-9722-86ec3884fda0\") " pod="calico-system/calico-typha-777fb957cb-mm5jq"
Jul 10 08:07:47.998254 kubelet[2824]: I0710 08:07:47.998210 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f69ca0c-234a-45db-9722-86ec3884fda0-tigera-ca-bundle\") pod \"calico-typha-777fb957cb-mm5jq\" (UID: \"6f69ca0c-234a-45db-9722-86ec3884fda0\") " pod="calico-system/calico-typha-777fb957cb-mm5jq"
Jul 10 08:07:47.998866 kubelet[2824]: I0710 08:07:47.998804 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6f69ca0c-234a-45db-9722-86ec3884fda0-typha-certs\") pod \"calico-typha-777fb957cb-mm5jq\" (UID: \"6f69ca0c-234a-45db-9722-86ec3884fda0\") " pod="calico-system/calico-typha-777fb957cb-mm5jq"
Jul 10 08:07:48.297363 containerd[1541]: time="2025-07-10T08:07:48.297160864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-777fb957cb-mm5jq,Uid:6f69ca0c-234a-45db-9722-86ec3884fda0,Namespace:calico-system,Attempt:0,}"
Jul 10 08:07:48.355320 systemd[1]: Created slice kubepods-besteffort-pod6802d619_9eb7_46a1_89bc_057f447431f5.slice - libcontainer container kubepods-besteffort-pod6802d619_9eb7_46a1_89bc_057f447431f5.slice.
Jul 10 08:07:48.374847 containerd[1541]: time="2025-07-10T08:07:48.374257508Z" level=info msg="connecting to shim 2a22272d74760cbc68cd179fc508e6793ccff39ef4df2648d8c546bfa9838025" address="unix:///run/containerd/s/35b647dc3894623b25f1ac6493a5b0fa152e8ccd9877c947a3028536e1e9df7c" namespace=k8s.io protocol=ttrpc version=3
Jul 10 08:07:48.403046 kubelet[2824]: I0710 08:07:48.401977 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6802d619-9eb7-46a1-89bc-057f447431f5-cni-bin-dir\") pod \"calico-node-jr5b8\" (UID: \"6802d619-9eb7-46a1-89bc-057f447431f5\") " pod="calico-system/calico-node-jr5b8"
Jul 10 08:07:48.403046 kubelet[2824]: I0710 08:07:48.402055 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6802d619-9eb7-46a1-89bc-057f447431f5-var-run-calico\") pod \"calico-node-jr5b8\" (UID: \"6802d619-9eb7-46a1-89bc-057f447431f5\") " pod="calico-system/calico-node-jr5b8"
Jul 10 08:07:48.403046 kubelet[2824]: I0710 08:07:48.402081 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6802d619-9eb7-46a1-89bc-057f447431f5-flexvol-driver-host\") pod \"calico-node-jr5b8\" (UID: \"6802d619-9eb7-46a1-89bc-057f447431f5\") " pod="calico-system/calico-node-jr5b8"
Jul 10 08:07:48.403046 kubelet[2824]: I0710 08:07:48.402110 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6802d619-9eb7-46a1-89bc-057f447431f5-policysync\") pod \"calico-node-jr5b8\" (UID: \"6802d619-9eb7-46a1-89bc-057f447431f5\") " pod="calico-system/calico-node-jr5b8"
Jul 10 08:07:48.403046 kubelet[2824]: I0710 08:07:48.402133 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4czj6\" (UniqueName: \"kubernetes.io/projected/6802d619-9eb7-46a1-89bc-057f447431f5-kube-api-access-4czj6\") pod \"calico-node-jr5b8\" (UID: \"6802d619-9eb7-46a1-89bc-057f447431f5\") " pod="calico-system/calico-node-jr5b8"
Jul 10 08:07:48.403334 kubelet[2824]: I0710 08:07:48.402156 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6802d619-9eb7-46a1-89bc-057f447431f5-cni-net-dir\") pod \"calico-node-jr5b8\" (UID: \"6802d619-9eb7-46a1-89bc-057f447431f5\") " pod="calico-system/calico-node-jr5b8"
Jul 10 08:07:48.403334 kubelet[2824]: I0710 08:07:48.402175 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6802d619-9eb7-46a1-89bc-057f447431f5-var-lib-calico\") pod \"calico-node-jr5b8\" (UID: \"6802d619-9eb7-46a1-89bc-057f447431f5\") " pod="calico-system/calico-node-jr5b8"
Jul 10 08:07:48.403334 kubelet[2824]: I0710 08:07:48.402202 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6802d619-9eb7-46a1-89bc-057f447431f5-node-certs\") pod \"calico-node-jr5b8\" (UID: \"6802d619-9eb7-46a1-89bc-057f447431f5\") " pod="calico-system/calico-node-jr5b8"
Jul 10 08:07:48.403334 kubelet[2824]: I0710 08:07:48.402220 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6802d619-9eb7-46a1-89bc-057f447431f5-tigera-ca-bundle\") pod \"calico-node-jr5b8\" (UID: \"6802d619-9eb7-46a1-89bc-057f447431f5\") " pod="calico-system/calico-node-jr5b8"
Jul 10 08:07:48.403334 kubelet[2824]: I0710 08:07:48.402251 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6802d619-9eb7-46a1-89bc-057f447431f5-lib-modules\") pod \"calico-node-jr5b8\" (UID: \"6802d619-9eb7-46a1-89bc-057f447431f5\") " pod="calico-system/calico-node-jr5b8"
Jul 10 08:07:48.403488 kubelet[2824]: I0710 08:07:48.402270 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6802d619-9eb7-46a1-89bc-057f447431f5-xtables-lock\") pod \"calico-node-jr5b8\" (UID: \"6802d619-9eb7-46a1-89bc-057f447431f5\") " pod="calico-system/calico-node-jr5b8"
Jul 10 08:07:48.403488 kubelet[2824]: I0710 08:07:48.402301 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6802d619-9eb7-46a1-89bc-057f447431f5-cni-log-dir\") pod \"calico-node-jr5b8\" (UID: \"6802d619-9eb7-46a1-89bc-057f447431f5\") " pod="calico-system/calico-node-jr5b8"
Jul 10 08:07:48.444451 systemd[1]: Started cri-containerd-2a22272d74760cbc68cd179fc508e6793ccff39ef4df2648d8c546bfa9838025.scope - libcontainer container 2a22272d74760cbc68cd179fc508e6793ccff39ef4df2648d8c546bfa9838025.
Jul 10 08:07:48.507865 kubelet[2824]: E0710 08:07:48.507822 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 08:07:48.507865 kubelet[2824]: W0710 08:07:48.507848 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 08:07:48.509037 kubelet[2824]: E0710 08:07:48.507916 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 08:07:48.515685 kubelet[2824]: E0710 08:07:48.515630 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 08:07:48.515685 kubelet[2824]: W0710 08:07:48.515657 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 08:07:48.515685 kubelet[2824]: E0710 08:07:48.515679 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 08:07:48.552111 kubelet[2824]: E0710 08:07:48.551925 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 08:07:48.553357 kubelet[2824]: W0710 08:07:48.552425 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 08:07:48.553357 kubelet[2824]: E0710 08:07:48.552478 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 08:07:48.661611 kubelet[2824]: E0710 08:07:48.651597 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-986vz" podUID="673eda05-b391-4262-883e-c41d9f384dbd"
Jul 10 08:07:48.664159 containerd[1541]: time="2025-07-10T08:07:48.663485857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jr5b8,Uid:6802d619-9eb7-46a1-89bc-057f447431f5,Namespace:calico-system,Attempt:0,}"
Jul 10 08:07:48.683111 kubelet[2824]: E0710 08:07:48.683068 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 08:07:48.683111 kubelet[2824]: W0710 08:07:48.683096 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 08:07:48.683111 kubelet[2824]: E0710 08:07:48.683128 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 08:07:48.683600 kubelet[2824]: E0710 08:07:48.683577 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 08:07:48.683600 kubelet[2824]: W0710 08:07:48.683594 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 08:07:48.683911 kubelet[2824]: E0710 08:07:48.683609 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 10 08:07:48.684790 kubelet[2824]: E0710 08:07:48.684760 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.684790 kubelet[2824]: W0710 08:07:48.684785 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.684790 kubelet[2824]: E0710 08:07:48.684797 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.685718 kubelet[2824]: E0710 08:07:48.685678 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.685718 kubelet[2824]: W0710 08:07:48.685696 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.685718 kubelet[2824]: E0710 08:07:48.685709 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.686926 kubelet[2824]: E0710 08:07:48.686441 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.686926 kubelet[2824]: W0710 08:07:48.686458 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.686926 kubelet[2824]: E0710 08:07:48.686470 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.687509 kubelet[2824]: E0710 08:07:48.687481 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.687509 kubelet[2824]: W0710 08:07:48.687499 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.687509 kubelet[2824]: E0710 08:07:48.687510 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.688383 kubelet[2824]: E0710 08:07:48.688359 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.688383 kubelet[2824]: W0710 08:07:48.688375 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.688383 kubelet[2824]: E0710 08:07:48.688387 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.690054 kubelet[2824]: E0710 08:07:48.690019 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.690054 kubelet[2824]: W0710 08:07:48.690039 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.690054 kubelet[2824]: E0710 08:07:48.690052 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.691638 kubelet[2824]: E0710 08:07:48.691115 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.691638 kubelet[2824]: W0710 08:07:48.691142 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.691638 kubelet[2824]: E0710 08:07:48.691178 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.692090 kubelet[2824]: E0710 08:07:48.692015 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.692387 kubelet[2824]: W0710 08:07:48.692306 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.692387 kubelet[2824]: E0710 08:07:48.692330 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.693063 kubelet[2824]: E0710 08:07:48.693006 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.693063 kubelet[2824]: W0710 08:07:48.693020 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.693063 kubelet[2824]: E0710 08:07:48.693033 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.694253 kubelet[2824]: E0710 08:07:48.694213 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.694253 kubelet[2824]: W0710 08:07:48.694229 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.694480 kubelet[2824]: E0710 08:07:48.694405 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.695996 kubelet[2824]: E0710 08:07:48.695304 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.695996 kubelet[2824]: W0710 08:07:48.695320 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.695996 kubelet[2824]: E0710 08:07:48.695333 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.696302 kubelet[2824]: E0710 08:07:48.696285 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.696462 kubelet[2824]: W0710 08:07:48.696385 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.696462 kubelet[2824]: E0710 08:07:48.696404 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.697043 kubelet[2824]: E0710 08:07:48.696760 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.697323 kubelet[2824]: W0710 08:07:48.697161 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.697323 kubelet[2824]: E0710 08:07:48.697180 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.698744 kubelet[2824]: E0710 08:07:48.698318 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.698744 kubelet[2824]: W0710 08:07:48.698334 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.699104 kubelet[2824]: E0710 08:07:48.698860 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.700277 kubelet[2824]: E0710 08:07:48.700247 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.700277 kubelet[2824]: W0710 08:07:48.700270 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.700363 containerd[1541]: time="2025-07-10T08:07:48.700053307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-777fb957cb-mm5jq,Uid:6f69ca0c-234a-45db-9722-86ec3884fda0,Namespace:calico-system,Attempt:0,} returns sandbox id \"2a22272d74760cbc68cd179fc508e6793ccff39ef4df2648d8c546bfa9838025\"" Jul 10 08:07:48.700590 kubelet[2824]: E0710 08:07:48.700293 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.700590 kubelet[2824]: E0710 08:07:48.700540 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.700590 kubelet[2824]: W0710 08:07:48.700552 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.700590 kubelet[2824]: E0710 08:07:48.700563 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.702199 kubelet[2824]: E0710 08:07:48.702152 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.702199 kubelet[2824]: W0710 08:07:48.702170 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.702199 kubelet[2824]: E0710 08:07:48.702184 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.702428 kubelet[2824]: E0710 08:07:48.702381 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.702428 kubelet[2824]: W0710 08:07:48.702391 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.702428 kubelet[2824]: E0710 08:07:48.702401 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.706137 kubelet[2824]: E0710 08:07:48.706043 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.706137 kubelet[2824]: W0710 08:07:48.706068 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.706137 kubelet[2824]: E0710 08:07:48.706087 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.707629 containerd[1541]: time="2025-07-10T08:07:48.706260600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 10 08:07:48.707756 kubelet[2824]: I0710 08:07:48.706585 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/673eda05-b391-4262-883e-c41d9f384dbd-kubelet-dir\") pod \"csi-node-driver-986vz\" (UID: \"673eda05-b391-4262-883e-c41d9f384dbd\") " pod="calico-system/csi-node-driver-986vz" Jul 10 08:07:48.707756 kubelet[2824]: E0710 08:07:48.707491 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.707756 kubelet[2824]: W0710 08:07:48.707510 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.707756 kubelet[2824]: E0710 08:07:48.707593 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.707904 kubelet[2824]: I0710 08:07:48.707815 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/673eda05-b391-4262-883e-c41d9f384dbd-socket-dir\") pod \"csi-node-driver-986vz\" (UID: \"673eda05-b391-4262-883e-c41d9f384dbd\") " pod="calico-system/csi-node-driver-986vz" Jul 10 08:07:48.708911 kubelet[2824]: E0710 08:07:48.708877 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.710014 kubelet[2824]: W0710 08:07:48.708983 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.710101 kubelet[2824]: E0710 08:07:48.710014 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.710101 kubelet[2824]: I0710 08:07:48.710043 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/673eda05-b391-4262-883e-c41d9f384dbd-varrun\") pod \"csi-node-driver-986vz\" (UID: \"673eda05-b391-4262-883e-c41d9f384dbd\") " pod="calico-system/csi-node-driver-986vz" Jul 10 08:07:48.712045 kubelet[2824]: E0710 08:07:48.712012 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.712045 kubelet[2824]: W0710 08:07:48.712037 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.712232 kubelet[2824]: E0710 08:07:48.712054 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.712232 kubelet[2824]: I0710 08:07:48.712081 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whbtr\" (UniqueName: \"kubernetes.io/projected/673eda05-b391-4262-883e-c41d9f384dbd-kube-api-access-whbtr\") pod \"csi-node-driver-986vz\" (UID: \"673eda05-b391-4262-883e-c41d9f384dbd\") " pod="calico-system/csi-node-driver-986vz" Jul 10 08:07:48.713344 kubelet[2824]: E0710 08:07:48.713274 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.713344 kubelet[2824]: W0710 08:07:48.713292 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.713344 kubelet[2824]: E0710 08:07:48.713310 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.713344 kubelet[2824]: I0710 08:07:48.713329 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/673eda05-b391-4262-883e-c41d9f384dbd-registration-dir\") pod \"csi-node-driver-986vz\" (UID: \"673eda05-b391-4262-883e-c41d9f384dbd\") " pod="calico-system/csi-node-driver-986vz" Jul 10 08:07:48.714234 kubelet[2824]: E0710 08:07:48.714209 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.714234 kubelet[2824]: W0710 08:07:48.714231 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.714646 kubelet[2824]: E0710 08:07:48.714245 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.714906 kubelet[2824]: E0710 08:07:48.714657 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.714906 kubelet[2824]: W0710 08:07:48.714887 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.716356 kubelet[2824]: E0710 08:07:48.715410 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.716356 kubelet[2824]: E0710 08:07:48.715997 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.716356 kubelet[2824]: W0710 08:07:48.716009 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.716356 kubelet[2824]: E0710 08:07:48.716218 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.716714 kubelet[2824]: E0710 08:07:48.716466 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.716714 kubelet[2824]: W0710 08:07:48.716477 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.716905 kubelet[2824]: E0710 08:07:48.716773 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.717258 kubelet[2824]: E0710 08:07:48.717033 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.717258 kubelet[2824]: W0710 08:07:48.717044 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.717258 kubelet[2824]: E0710 08:07:48.717125 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.718328 kubelet[2824]: E0710 08:07:48.717426 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.718328 kubelet[2824]: W0710 08:07:48.717441 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.718328 kubelet[2824]: E0710 08:07:48.717754 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.718328 kubelet[2824]: E0710 08:07:48.718108 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.718328 kubelet[2824]: W0710 08:07:48.718119 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.718328 kubelet[2824]: E0710 08:07:48.718260 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.723783 kubelet[2824]: E0710 08:07:48.718637 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.723783 kubelet[2824]: W0710 08:07:48.718648 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.723783 kubelet[2824]: E0710 08:07:48.718659 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.726867 kubelet[2824]: E0710 08:07:48.725164 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.726867 kubelet[2824]: W0710 08:07:48.725197 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.726867 kubelet[2824]: E0710 08:07:48.725218 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.728112 kubelet[2824]: E0710 08:07:48.728066 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.728112 kubelet[2824]: W0710 08:07:48.728089 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.728112 kubelet[2824]: E0710 08:07:48.728106 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.739659 containerd[1541]: time="2025-07-10T08:07:48.739582944Z" level=info msg="connecting to shim 02f7c23f03074de767b4724d1ca7768567ce018164f4656508a181860f280c8b" address="unix:///run/containerd/s/78d0384cded1a545200ed3101d1b6a2bec36d3f994155fc1707ed029633dfd6c" namespace=k8s.io protocol=ttrpc version=3 Jul 10 08:07:48.787430 systemd[1]: Started cri-containerd-02f7c23f03074de767b4724d1ca7768567ce018164f4656508a181860f280c8b.scope - libcontainer container 02f7c23f03074de767b4724d1ca7768567ce018164f4656508a181860f280c8b. 
Jul 10 08:07:48.816096 kubelet[2824]: E0710 08:07:48.815083 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.816096 kubelet[2824]: W0710 08:07:48.815223 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.816096 kubelet[2824]: E0710 08:07:48.815250 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.816335 kubelet[2824]: E0710 08:07:48.816124 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.816335 kubelet[2824]: W0710 08:07:48.816136 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.816335 kubelet[2824]: E0710 08:07:48.816154 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.817044 kubelet[2824]: E0710 08:07:48.816937 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.817630 kubelet[2824]: W0710 08:07:48.817208 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.817630 kubelet[2824]: E0710 08:07:48.817757 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.817630 kubelet[2824]: W0710 08:07:48.817770 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.818590 kubelet[2824]: E0710 08:07:48.818121 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.819123 kubelet[2824]: E0710 08:07:48.819091 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.819497 kubelet[2824]: E0710 08:07:48.819355 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.819497 kubelet[2824]: W0710 08:07:48.819371 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.819497 kubelet[2824]: E0710 08:07:48.819388 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.820052 kubelet[2824]: E0710 08:07:48.819889 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.820052 kubelet[2824]: W0710 08:07:48.819906 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.820052 kubelet[2824]: E0710 08:07:48.819923 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.820772 kubelet[2824]: E0710 08:07:48.820689 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.820772 kubelet[2824]: W0710 08:07:48.820703 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.821139 kubelet[2824]: E0710 08:07:48.821010 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.822174 kubelet[2824]: E0710 08:07:48.822132 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.822174 kubelet[2824]: W0710 08:07:48.822151 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.822917 kubelet[2824]: E0710 08:07:48.822298 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.822917 kubelet[2824]: W0710 08:07:48.822314 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.822917 kubelet[2824]: E0710 08:07:48.822429 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.822917 kubelet[2824]: W0710 08:07:48.822438 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.822917 kubelet[2824]: E0710 08:07:48.822484 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.822917 kubelet[2824]: E0710 08:07:48.822549 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.822917 kubelet[2824]: E0710 08:07:48.822616 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.823257 kubelet[2824]: E0710 08:07:48.823038 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.823257 kubelet[2824]: W0710 08:07:48.823049 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.824075 kubelet[2824]: E0710 08:07:48.823625 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.824075 kubelet[2824]: E0710 08:07:48.823737 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.824075 kubelet[2824]: W0710 08:07:48.823747 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.824075 kubelet[2824]: E0710 08:07:48.823892 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.824583 kubelet[2824]: E0710 08:07:48.824352 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.824583 kubelet[2824]: W0710 08:07:48.824363 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.825014 kubelet[2824]: E0710 08:07:48.824988 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.825796 kubelet[2824]: E0710 08:07:48.825378 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.825796 kubelet[2824]: W0710 08:07:48.825393 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.825796 kubelet[2824]: E0710 08:07:48.825599 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.827255 kubelet[2824]: E0710 08:07:48.825845 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.827255 kubelet[2824]: W0710 08:07:48.825867 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.827255 kubelet[2824]: E0710 08:07:48.826066 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.827865 kubelet[2824]: E0710 08:07:48.827730 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.827865 kubelet[2824]: W0710 08:07:48.827751 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.827865 kubelet[2824]: E0710 08:07:48.827817 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.829474 kubelet[2824]: E0710 08:07:48.829445 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.829474 kubelet[2824]: W0710 08:07:48.829467 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.829807 kubelet[2824]: E0710 08:07:48.829747 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.829871 kubelet[2824]: E0710 08:07:48.829801 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.829871 kubelet[2824]: W0710 08:07:48.829826 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.830065 kubelet[2824]: E0710 08:07:48.830004 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.830531 kubelet[2824]: E0710 08:07:48.830498 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.830531 kubelet[2824]: W0710 08:07:48.830513 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.831042 kubelet[2824]: E0710 08:07:48.831014 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.832082 kubelet[2824]: E0710 08:07:48.832058 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.832082 kubelet[2824]: W0710 08:07:48.832076 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.832287 kubelet[2824]: E0710 08:07:48.832249 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.832287 kubelet[2824]: W0710 08:07:48.832260 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.832449 kubelet[2824]: E0710 08:07:48.832378 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.832449 kubelet[2824]: E0710 08:07:48.832413 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.832449 kubelet[2824]: E0710 08:07:48.832423 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.832449 kubelet[2824]: W0710 08:07:48.832433 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.832760 kubelet[2824]: E0710 08:07:48.832461 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.832760 kubelet[2824]: E0710 08:07:48.832692 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.832760 kubelet[2824]: W0710 08:07:48.832702 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.834066 kubelet[2824]: E0710 08:07:48.833990 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.834341 kubelet[2824]: E0710 08:07:48.834243 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.834341 kubelet[2824]: W0710 08:07:48.834260 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.834341 kubelet[2824]: E0710 08:07:48.834271 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:48.834758 kubelet[2824]: E0710 08:07:48.834670 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.834758 kubelet[2824]: W0710 08:07:48.834686 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.834758 kubelet[2824]: E0710 08:07:48.834709 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:48.865654 containerd[1541]: time="2025-07-10T08:07:48.865417314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jr5b8,Uid:6802d619-9eb7-46a1-89bc-057f447431f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"02f7c23f03074de767b4724d1ca7768567ce018164f4656508a181860f280c8b\"" Jul 10 08:07:48.871737 kubelet[2824]: E0710 08:07:48.871697 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:48.872115 kubelet[2824]: W0710 08:07:48.871835 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:48.872115 kubelet[2824]: E0710 08:07:48.871861 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:50.413017 kubelet[2824]: E0710 08:07:50.409810 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-986vz" podUID="673eda05-b391-4262-883e-c41d9f384dbd" Jul 10 08:07:50.931402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3122321529.mount: Deactivated successfully. 
Jul 10 08:07:52.414280 kubelet[2824]: E0710 08:07:52.412255 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-986vz" podUID="673eda05-b391-4262-883e-c41d9f384dbd" Jul 10 08:07:52.871429 containerd[1541]: time="2025-07-10T08:07:52.871035910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:07:52.874438 containerd[1541]: time="2025-07-10T08:07:52.874403907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 10 08:07:52.875668 containerd[1541]: time="2025-07-10T08:07:52.875625105Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:07:52.880501 containerd[1541]: time="2025-07-10T08:07:52.880456684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:07:52.882001 containerd[1541]: time="2025-07-10T08:07:52.881919475Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 4.175477739s" Jul 10 08:07:52.882093 containerd[1541]: time="2025-07-10T08:07:52.882008790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference 
\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 10 08:07:52.884666 containerd[1541]: time="2025-07-10T08:07:52.884044502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 10 08:07:52.906532 containerd[1541]: time="2025-07-10T08:07:52.906366115Z" level=info msg="CreateContainer within sandbox \"2a22272d74760cbc68cd179fc508e6793ccff39ef4df2648d8c546bfa9838025\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 10 08:07:52.924531 containerd[1541]: time="2025-07-10T08:07:52.924475211Z" level=info msg="Container d843e8521fabdd899e62655568333302a8bfc6366acc44ea108f45e513048de3: CDI devices from CRI Config.CDIDevices: []" Jul 10 08:07:52.941314 containerd[1541]: time="2025-07-10T08:07:52.941256350Z" level=info msg="CreateContainer within sandbox \"2a22272d74760cbc68cd179fc508e6793ccff39ef4df2648d8c546bfa9838025\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d843e8521fabdd899e62655568333302a8bfc6366acc44ea108f45e513048de3\"" Jul 10 08:07:52.942016 containerd[1541]: time="2025-07-10T08:07:52.941984614Z" level=info msg="StartContainer for \"d843e8521fabdd899e62655568333302a8bfc6366acc44ea108f45e513048de3\"" Jul 10 08:07:52.943881 containerd[1541]: time="2025-07-10T08:07:52.943837778Z" level=info msg="connecting to shim d843e8521fabdd899e62655568333302a8bfc6366acc44ea108f45e513048de3" address="unix:///run/containerd/s/35b647dc3894623b25f1ac6493a5b0fa152e8ccd9877c947a3028536e1e9df7c" protocol=ttrpc version=3 Jul 10 08:07:53.002331 systemd[1]: Started cri-containerd-d843e8521fabdd899e62655568333302a8bfc6366acc44ea108f45e513048de3.scope - libcontainer container d843e8521fabdd899e62655568333302a8bfc6366acc44ea108f45e513048de3. 
Jul 10 08:07:53.088254 containerd[1541]: time="2025-07-10T08:07:53.088200957Z" level=info msg="StartContainer for \"d843e8521fabdd899e62655568333302a8bfc6366acc44ea108f45e513048de3\" returns successfully" Jul 10 08:07:53.730232 kubelet[2824]: I0710 08:07:53.728623 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-777fb957cb-mm5jq" podStartSLOduration=2.549068286 podStartE2EDuration="6.72835849s" podCreationTimestamp="2025-07-10 08:07:47 +0000 UTC" firstStartedPulling="2025-07-10 08:07:48.704008098 +0000 UTC m=+23.555293512" lastFinishedPulling="2025-07-10 08:07:52.883298312 +0000 UTC m=+27.734583716" observedRunningTime="2025-07-10 08:07:53.725872704 +0000 UTC m=+28.577158158" watchObservedRunningTime="2025-07-10 08:07:53.72835849 +0000 UTC m=+28.579643934" Jul 10 08:07:53.740455 kubelet[2824]: E0710 08:07:53.740365 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.740740 kubelet[2824]: W0710 08:07:53.740487 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.740740 kubelet[2824]: E0710 08:07:53.740658 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:53.741454 kubelet[2824]: E0710 08:07:53.741103 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.741454 kubelet[2824]: W0710 08:07:53.741140 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.741454 kubelet[2824]: E0710 08:07:53.741173 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:53.742228 kubelet[2824]: E0710 08:07:53.741599 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.742228 kubelet[2824]: W0710 08:07:53.741623 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.742228 kubelet[2824]: E0710 08:07:53.741647 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:53.742884 kubelet[2824]: E0710 08:07:53.742479 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.742884 kubelet[2824]: W0710 08:07:53.742508 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.742884 kubelet[2824]: E0710 08:07:53.742530 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:53.743860 kubelet[2824]: E0710 08:07:53.743256 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.743860 kubelet[2824]: W0710 08:07:53.743284 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.743860 kubelet[2824]: E0710 08:07:53.743363 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:53.744733 kubelet[2824]: E0710 08:07:53.744157 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.744733 kubelet[2824]: W0710 08:07:53.744183 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.744733 kubelet[2824]: E0710 08:07:53.744207 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:53.744733 kubelet[2824]: E0710 08:07:53.744646 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.744733 kubelet[2824]: W0710 08:07:53.744668 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.744733 kubelet[2824]: E0710 08:07:53.744690 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:53.745285 kubelet[2824]: E0710 08:07:53.744998 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.745285 kubelet[2824]: W0710 08:07:53.745020 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.745285 kubelet[2824]: E0710 08:07:53.745040 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:53.745726 kubelet[2824]: E0710 08:07:53.745307 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.745726 kubelet[2824]: W0710 08:07:53.745328 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.745726 kubelet[2824]: E0710 08:07:53.745351 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:53.745726 kubelet[2824]: E0710 08:07:53.745588 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.745726 kubelet[2824]: W0710 08:07:53.745609 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.745726 kubelet[2824]: E0710 08:07:53.745627 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:53.746536 kubelet[2824]: E0710 08:07:53.745900 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.746536 kubelet[2824]: W0710 08:07:53.745921 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.746536 kubelet[2824]: E0710 08:07:53.745941 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:53.746536 kubelet[2824]: E0710 08:07:53.746258 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.746536 kubelet[2824]: W0710 08:07:53.746278 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.746536 kubelet[2824]: E0710 08:07:53.746297 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:53.747238 kubelet[2824]: E0710 08:07:53.746620 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.747238 kubelet[2824]: W0710 08:07:53.746642 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.747238 kubelet[2824]: E0710 08:07:53.746663 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:53.747238 kubelet[2824]: E0710 08:07:53.746937 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.747238 kubelet[2824]: W0710 08:07:53.747003 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.747238 kubelet[2824]: E0710 08:07:53.747023 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:53.747944 kubelet[2824]: E0710 08:07:53.747294 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.747944 kubelet[2824]: W0710 08:07:53.747315 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.747944 kubelet[2824]: E0710 08:07:53.747334 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:53.769616 kubelet[2824]: E0710 08:07:53.769408 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.769616 kubelet[2824]: W0710 08:07:53.769455 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.769616 kubelet[2824]: E0710 08:07:53.769497 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:53.771693 kubelet[2824]: E0710 08:07:53.771642 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.772375 kubelet[2824]: W0710 08:07:53.772076 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.772375 kubelet[2824]: E0710 08:07:53.772122 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:53.774219 kubelet[2824]: E0710 08:07:53.774165 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.774219 kubelet[2824]: W0710 08:07:53.774210 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.774512 kubelet[2824]: E0710 08:07:53.774249 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:53.775106 kubelet[2824]: E0710 08:07:53.775068 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.775106 kubelet[2824]: W0710 08:07:53.775106 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.775293 kubelet[2824]: E0710 08:07:53.775130 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:53.776671 kubelet[2824]: E0710 08:07:53.776542 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.776671 kubelet[2824]: W0710 08:07:53.776572 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.777409 kubelet[2824]: E0710 08:07:53.777193 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.777409 kubelet[2824]: E0710 08:07:53.777248 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:53.777409 kubelet[2824]: W0710 08:07:53.777274 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.777409 kubelet[2824]: E0710 08:07:53.777344 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:53.778152 kubelet[2824]: E0710 08:07:53.777580 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.778152 kubelet[2824]: W0710 08:07:53.777603 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.778152 kubelet[2824]: E0710 08:07:53.777642 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:53.778152 kubelet[2824]: E0710 08:07:53.778047 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.778152 kubelet[2824]: W0710 08:07:53.778069 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.778152 kubelet[2824]: E0710 08:07:53.778094 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:53.778655 kubelet[2824]: E0710 08:07:53.778365 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.778655 kubelet[2824]: W0710 08:07:53.778386 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.778655 kubelet[2824]: E0710 08:07:53.778419 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:53.779882 kubelet[2824]: E0710 08:07:53.779805 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.779882 kubelet[2824]: W0710 08:07:53.779845 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.779882 kubelet[2824]: E0710 08:07:53.779871 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:53.781197 kubelet[2824]: E0710 08:07:53.781128 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.781197 kubelet[2824]: W0710 08:07:53.781166 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.781416 kubelet[2824]: E0710 08:07:53.781304 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:53.781831 kubelet[2824]: E0710 08:07:53.781764 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.781831 kubelet[2824]: W0710 08:07:53.781808 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.782166 kubelet[2824]: E0710 08:07:53.781946 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:53.782435 kubelet[2824]: E0710 08:07:53.782395 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.782435 kubelet[2824]: W0710 08:07:53.782430 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.782641 kubelet[2824]: E0710 08:07:53.782465 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:53.783057 kubelet[2824]: E0710 08:07:53.783013 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.783057 kubelet[2824]: W0710 08:07:53.783047 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.783342 kubelet[2824]: E0710 08:07:53.783105 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:53.783728 kubelet[2824]: E0710 08:07:53.783660 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.783728 kubelet[2824]: W0710 08:07:53.783703 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.783904 kubelet[2824]: E0710 08:07:53.783730 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:53.784884 kubelet[2824]: E0710 08:07:53.784823 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.784884 kubelet[2824]: W0710 08:07:53.784861 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.784884 kubelet[2824]: E0710 08:07:53.784887 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:53.786041 kubelet[2824]: E0710 08:07:53.785923 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.786041 kubelet[2824]: W0710 08:07:53.785997 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.786356 kubelet[2824]: E0710 08:07:53.786051 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:53.786488 kubelet[2824]: E0710 08:07:53.786418 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:53.786488 kubelet[2824]: W0710 08:07:53.786440 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:53.786488 kubelet[2824]: E0710 08:07:53.786460 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:54.411339 kubelet[2824]: E0710 08:07:54.410758 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-986vz" podUID="673eda05-b391-4262-883e-c41d9f384dbd" Jul 10 08:07:54.696480 kubelet[2824]: I0710 08:07:54.696244 2824 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 08:07:54.755138 kubelet[2824]: E0710 08:07:54.755072 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.755138 kubelet[2824]: W0710 08:07:54.755124 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.755138 kubelet[2824]: E0710 08:07:54.755166 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:54.756222 kubelet[2824]: E0710 08:07:54.755551 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.756222 kubelet[2824]: W0710 08:07:54.755575 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.756222 kubelet[2824]: E0710 08:07:54.755599 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:54.756222 kubelet[2824]: E0710 08:07:54.755927 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.756222 kubelet[2824]: W0710 08:07:54.755991 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.756222 kubelet[2824]: E0710 08:07:54.756018 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:54.756679 kubelet[2824]: E0710 08:07:54.756617 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.756679 kubelet[2824]: W0710 08:07:54.756642 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.756679 kubelet[2824]: E0710 08:07:54.756672 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:54.757244 kubelet[2824]: E0710 08:07:54.757110 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.757244 kubelet[2824]: W0710 08:07:54.757135 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.757244 kubelet[2824]: E0710 08:07:54.757158 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:54.757794 kubelet[2824]: E0710 08:07:54.757479 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.757794 kubelet[2824]: W0710 08:07:54.757503 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.757794 kubelet[2824]: E0710 08:07:54.757650 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:54.759150 kubelet[2824]: E0710 08:07:54.759105 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.759150 kubelet[2824]: W0710 08:07:54.759147 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.759150 kubelet[2824]: E0710 08:07:54.759174 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:54.760157 kubelet[2824]: E0710 08:07:54.760072 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.760157 kubelet[2824]: W0710 08:07:54.760110 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.760157 kubelet[2824]: E0710 08:07:54.760136 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:54.761229 kubelet[2824]: E0710 08:07:54.761184 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.761229 kubelet[2824]: W0710 08:07:54.761222 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.761444 kubelet[2824]: E0710 08:07:54.761248 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:54.763181 kubelet[2824]: E0710 08:07:54.763138 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.763181 kubelet[2824]: W0710 08:07:54.763177 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.763181 kubelet[2824]: E0710 08:07:54.763203 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:54.764057 kubelet[2824]: E0710 08:07:54.763570 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.764057 kubelet[2824]: W0710 08:07:54.763597 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.764057 kubelet[2824]: E0710 08:07:54.763619 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:54.764597 kubelet[2824]: E0710 08:07:54.764172 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.764597 kubelet[2824]: W0710 08:07:54.764197 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.764597 kubelet[2824]: E0710 08:07:54.764220 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:54.767069 kubelet[2824]: E0710 08:07:54.767004 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.767069 kubelet[2824]: W0710 08:07:54.767045 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.767069 kubelet[2824]: E0710 08:07:54.767070 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:54.767599 kubelet[2824]: E0710 08:07:54.767561 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.767763 kubelet[2824]: W0710 08:07:54.767633 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.767763 kubelet[2824]: E0710 08:07:54.767662 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:54.768412 kubelet[2824]: E0710 08:07:54.768343 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.768412 kubelet[2824]: W0710 08:07:54.768387 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.768412 kubelet[2824]: E0710 08:07:54.768412 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:54.782472 kubelet[2824]: E0710 08:07:54.782416 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.782472 kubelet[2824]: W0710 08:07:54.782466 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.782738 kubelet[2824]: E0710 08:07:54.782500 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:54.783913 kubelet[2824]: E0710 08:07:54.783047 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.783913 kubelet[2824]: W0710 08:07:54.783083 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.783913 kubelet[2824]: E0710 08:07:54.783107 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:54.783913 kubelet[2824]: E0710 08:07:54.783523 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.783913 kubelet[2824]: W0710 08:07:54.783547 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.783913 kubelet[2824]: E0710 08:07:54.783609 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:54.784450 kubelet[2824]: E0710 08:07:54.784038 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.784450 kubelet[2824]: W0710 08:07:54.784063 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.784450 kubelet[2824]: E0710 08:07:54.784118 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:54.784659 kubelet[2824]: E0710 08:07:54.784585 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.784659 kubelet[2824]: W0710 08:07:54.784611 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.784866 kubelet[2824]: E0710 08:07:54.784670 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:54.785491 kubelet[2824]: E0710 08:07:54.785057 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.785491 kubelet[2824]: W0710 08:07:54.785095 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.785491 kubelet[2824]: E0710 08:07:54.785118 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:54.786310 kubelet[2824]: E0710 08:07:54.786264 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.786310 kubelet[2824]: W0710 08:07:54.786299 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.786600 kubelet[2824]: E0710 08:07:54.786326 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:54.787981 kubelet[2824]: E0710 08:07:54.786686 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.787981 kubelet[2824]: W0710 08:07:54.786724 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.787981 kubelet[2824]: E0710 08:07:54.786747 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:54.787981 kubelet[2824]: E0710 08:07:54.787198 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.787981 kubelet[2824]: W0710 08:07:54.787222 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.787981 kubelet[2824]: E0710 08:07:54.787284 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:54.787981 kubelet[2824]: E0710 08:07:54.787677 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.787981 kubelet[2824]: W0710 08:07:54.787703 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.787981 kubelet[2824]: E0710 08:07:54.787759 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:54.788753 kubelet[2824]: E0710 08:07:54.788244 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.788753 kubelet[2824]: W0710 08:07:54.788269 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.788753 kubelet[2824]: E0710 08:07:54.788326 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:54.788753 kubelet[2824]: E0710 08:07:54.788685 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.788753 kubelet[2824]: W0710 08:07:54.788708 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.788753 kubelet[2824]: E0710 08:07:54.788736 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:54.789521 kubelet[2824]: E0710 08:07:54.789475 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.789521 kubelet[2824]: W0710 08:07:54.789510 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.789718 kubelet[2824]: E0710 08:07:54.789534 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 08:07:54.790627 kubelet[2824]: E0710 08:07:54.789902 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.790627 kubelet[2824]: W0710 08:07:54.789927 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.790627 kubelet[2824]: E0710 08:07:54.790120 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 08:07:54.790627 kubelet[2824]: E0710 08:07:54.790438 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 08:07:54.790627 kubelet[2824]: W0710 08:07:54.790461 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 08:07:54.790627 kubelet[2824]: E0710 08:07:54.790493 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 10 08:07:54.792256 kubelet[2824]: E0710 08:07:54.790842 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 08:07:54.792256 kubelet[2824]: W0710 08:07:54.790865 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 08:07:54.792256 kubelet[2824]: E0710 08:07:54.790907 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 08:07:54.792256 kubelet[2824]: E0710 08:07:54.791364 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 08:07:54.792256 kubelet[2824]: W0710 08:07:54.791388 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 08:07:54.792256 kubelet[2824]: E0710 08:07:54.791414 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 08:07:54.792256 kubelet[2824]: E0710 08:07:54.792165 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 08:07:54.792256 kubelet[2824]: W0710 08:07:54.792189 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 08:07:54.792256 kubelet[2824]: E0710 08:07:54.792212 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 08:07:55.124024 containerd[1541]: time="2025-07-10T08:07:55.122371271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:07:55.125325 containerd[1541]: time="2025-07-10T08:07:55.125270941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956"
Jul 10 08:07:55.127143 containerd[1541]: time="2025-07-10T08:07:55.127108091Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:07:55.131623 containerd[1541]: time="2025-07-10T08:07:55.131552602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:07:55.132519 containerd[1541]: time="2025-07-10T08:07:55.132181717Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 2.248083953s"
Jul 10 08:07:55.132519 containerd[1541]: time="2025-07-10T08:07:55.132250171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\""
Jul 10 08:07:55.135807 containerd[1541]: time="2025-07-10T08:07:55.135763216Z" level=info msg="CreateContainer within sandbox \"02f7c23f03074de767b4724d1ca7768567ce018164f4656508a181860f280c8b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 10 08:07:55.170328 containerd[1541]: time="2025-07-10T08:07:55.170096117Z" level=info msg="Container bcb6f93767ed07459bf8da028a9f6b9002bc10b19faffae3d9974727a2a8d7ba: CDI devices from CRI Config.CDIDevices: []"
Jul 10 08:07:55.176477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3067188338.mount: Deactivated successfully.
Jul 10 08:07:55.191246 containerd[1541]: time="2025-07-10T08:07:55.191194308Z" level=info msg="CreateContainer within sandbox \"02f7c23f03074de767b4724d1ca7768567ce018164f4656508a181860f280c8b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bcb6f93767ed07459bf8da028a9f6b9002bc10b19faffae3d9974727a2a8d7ba\""
Jul 10 08:07:55.193104 containerd[1541]: time="2025-07-10T08:07:55.192260496Z" level=info msg="StartContainer for \"bcb6f93767ed07459bf8da028a9f6b9002bc10b19faffae3d9974727a2a8d7ba\""
Jul 10 08:07:55.197110 containerd[1541]: time="2025-07-10T08:07:55.197042163Z" level=info msg="connecting to shim bcb6f93767ed07459bf8da028a9f6b9002bc10b19faffae3d9974727a2a8d7ba" address="unix:///run/containerd/s/78d0384cded1a545200ed3101d1b6a2bec36d3f994155fc1707ed029633dfd6c" protocol=ttrpc version=3
Jul 10 08:07:55.254262 systemd[1]: Started cri-containerd-bcb6f93767ed07459bf8da028a9f6b9002bc10b19faffae3d9974727a2a8d7ba.scope - libcontainer container bcb6f93767ed07459bf8da028a9f6b9002bc10b19faffae3d9974727a2a8d7ba.
Jul 10 08:07:55.321438 containerd[1541]: time="2025-07-10T08:07:55.321393362Z" level=info msg="StartContainer for \"bcb6f93767ed07459bf8da028a9f6b9002bc10b19faffae3d9974727a2a8d7ba\" returns successfully"
Jul 10 08:07:55.332889 systemd[1]: cri-containerd-bcb6f93767ed07459bf8da028a9f6b9002bc10b19faffae3d9974727a2a8d7ba.scope: Deactivated successfully.
Jul 10 08:07:55.341124 containerd[1541]: time="2025-07-10T08:07:55.340765540Z" level=info msg="received exit event container_id:\"bcb6f93767ed07459bf8da028a9f6b9002bc10b19faffae3d9974727a2a8d7ba\" id:\"bcb6f93767ed07459bf8da028a9f6b9002bc10b19faffae3d9974727a2a8d7ba\" pid:3573 exited_at:{seconds:1752134875 nanos:339402053}"
Jul 10 08:07:55.341124 containerd[1541]: time="2025-07-10T08:07:55.341045676Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcb6f93767ed07459bf8da028a9f6b9002bc10b19faffae3d9974727a2a8d7ba\" id:\"bcb6f93767ed07459bf8da028a9f6b9002bc10b19faffae3d9974727a2a8d7ba\" pid:3573 exited_at:{seconds:1752134875 nanos:339402053}"
Jul 10 08:07:55.377520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcb6f93767ed07459bf8da028a9f6b9002bc10b19faffae3d9974727a2a8d7ba-rootfs.mount: Deactivated successfully.
Jul 10 08:07:56.410934 kubelet[2824]: E0710 08:07:56.410845 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-986vz" podUID="673eda05-b391-4262-883e-c41d9f384dbd"
Jul 10 08:07:56.721252 containerd[1541]: time="2025-07-10T08:07:56.720772036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 10 08:07:58.409848 kubelet[2824]: E0710 08:07:58.409643 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-986vz" podUID="673eda05-b391-4262-883e-c41d9f384dbd"
Jul 10 08:08:00.413143 kubelet[2824]: E0710 08:08:00.413024 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-986vz" podUID="673eda05-b391-4262-883e-c41d9f384dbd"
Jul 10 08:08:02.409621 kubelet[2824]: E0710 08:08:02.409578 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-986vz" podUID="673eda05-b391-4262-883e-c41d9f384dbd"
Jul 10 08:08:02.710082 containerd[1541]: time="2025-07-10T08:08:02.710022870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:08:02.712251 containerd[1541]: time="2025-07-10T08:08:02.712217269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Jul 10 08:08:02.713327 containerd[1541]: time="2025-07-10T08:08:02.713291841Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:08:02.718002 containerd[1541]: time="2025-07-10T08:08:02.717285484Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 08:08:02.718283 containerd[1541]: time="2025-07-10T08:08:02.718239271Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 5.996329049s"
Jul 10 08:08:02.718344 containerd[1541]: time="2025-07-10T08:08:02.718286643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Jul 10 08:08:02.723022 containerd[1541]: time="2025-07-10T08:08:02.722609183Z" level=info msg="CreateContainer within sandbox \"02f7c23f03074de767b4724d1ca7768567ce018164f4656508a181860f280c8b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 10 08:08:02.737257 containerd[1541]: time="2025-07-10T08:08:02.737204156Z" level=info msg="Container 31c6c29fc51f2a5e40281820148e5f027eb341aa7f5ee6610cb7ea8f9872fc25: CDI devices from CRI Config.CDIDevices: []"
Jul 10 08:08:02.745388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4128632752.mount: Deactivated successfully.
Jul 10 08:08:02.757099 containerd[1541]: time="2025-07-10T08:08:02.757034288Z" level=info msg="CreateContainer within sandbox \"02f7c23f03074de767b4724d1ca7768567ce018164f4656508a181860f280c8b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"31c6c29fc51f2a5e40281820148e5f027eb341aa7f5ee6610cb7ea8f9872fc25\""
Jul 10 08:08:02.760108 containerd[1541]: time="2025-07-10T08:08:02.759193930Z" level=info msg="StartContainer for \"31c6c29fc51f2a5e40281820148e5f027eb341aa7f5ee6610cb7ea8f9872fc25\""
Jul 10 08:08:02.761624 containerd[1541]: time="2025-07-10T08:08:02.761595771Z" level=info msg="connecting to shim 31c6c29fc51f2a5e40281820148e5f027eb341aa7f5ee6610cb7ea8f9872fc25" address="unix:///run/containerd/s/78d0384cded1a545200ed3101d1b6a2bec36d3f994155fc1707ed029633dfd6c" protocol=ttrpc version=3
Jul 10 08:08:02.798156 systemd[1]: Started cri-containerd-31c6c29fc51f2a5e40281820148e5f027eb341aa7f5ee6610cb7ea8f9872fc25.scope - libcontainer container 31c6c29fc51f2a5e40281820148e5f027eb341aa7f5ee6610cb7ea8f9872fc25.
Jul 10 08:08:02.877381 containerd[1541]: time="2025-07-10T08:08:02.877126104Z" level=info msg="StartContainer for \"31c6c29fc51f2a5e40281820148e5f027eb341aa7f5ee6610cb7ea8f9872fc25\" returns successfully"
Jul 10 08:08:04.410591 kubelet[2824]: E0710 08:08:04.410474 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-986vz" podUID="673eda05-b391-4262-883e-c41d9f384dbd"
Jul 10 08:08:04.896262 systemd[1]: cri-containerd-31c6c29fc51f2a5e40281820148e5f027eb341aa7f5ee6610cb7ea8f9872fc25.scope: Deactivated successfully.
Jul 10 08:08:04.896760 systemd[1]: cri-containerd-31c6c29fc51f2a5e40281820148e5f027eb341aa7f5ee6610cb7ea8f9872fc25.scope: Consumed 1.490s CPU time, 191.5M memory peak, 171.2M written to disk.
Jul 10 08:08:04.902504 containerd[1541]: time="2025-07-10T08:08:04.902370337Z" level=info msg="TaskExit event in podsandbox handler container_id:\"31c6c29fc51f2a5e40281820148e5f027eb341aa7f5ee6610cb7ea8f9872fc25\" id:\"31c6c29fc51f2a5e40281820148e5f027eb341aa7f5ee6610cb7ea8f9872fc25\" pid:3633 exited_at:{seconds:1752134884 nanos:901608494}"
Jul 10 08:08:04.903482 containerd[1541]: time="2025-07-10T08:08:04.902505839Z" level=info msg="received exit event container_id:\"31c6c29fc51f2a5e40281820148e5f027eb341aa7f5ee6610cb7ea8f9872fc25\" id:\"31c6c29fc51f2a5e40281820148e5f027eb341aa7f5ee6610cb7ea8f9872fc25\" pid:3633 exited_at:{seconds:1752134884 nanos:901608494}"
Jul 10 08:08:04.924117 kubelet[2824]: I0710 08:08:04.923448 2824 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 10 08:08:04.986501 systemd[1]: Created slice kubepods-burstable-pod64f4becf_45f9_4ea1_b810_64e0105909a1.slice - libcontainer container kubepods-burstable-pod64f4becf_45f9_4ea1_b810_64e0105909a1.slice.
Jul 10 08:08:04.989208 kubelet[2824]: W0710 08:08:04.988566 2824 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4391-0-0-n-29a01ddc69.novalocal" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4391-0-0-n-29a01ddc69.novalocal' and this object
Jul 10 08:08:04.989208 kubelet[2824]: E0710 08:08:04.988641 2824 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4391-0-0-n-29a01ddc69.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4391-0-0-n-29a01ddc69.novalocal' and this object" logger="UnhandledError"
Jul 10 08:08:04.989208 kubelet[2824]: W0710 08:08:04.988701 2824 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4391-0-0-n-29a01ddc69.novalocal" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4391-0-0-n-29a01ddc69.novalocal' and this object
Jul 10 08:08:04.989208 kubelet[2824]: E0710 08:08:04.988716 2824 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4391-0-0-n-29a01ddc69.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4391-0-0-n-29a01ddc69.novalocal' and this object" logger="UnhandledError"
Jul 10 08:08:04.989416 kubelet[2824]: W0710 08:08:04.988770 2824 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-4391-0-0-n-29a01ddc69.novalocal" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4391-0-0-n-29a01ddc69.novalocal' and this object
Jul 10 08:08:04.989416 kubelet[2824]: E0710 08:08:04.988783 2824 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4391-0-0-n-29a01ddc69.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4391-0-0-n-29a01ddc69.novalocal' and this object" logger="UnhandledError"
Jul 10 08:08:04.992706 kubelet[2824]: W0710 08:08:04.991838 2824 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4391-0-0-n-29a01ddc69.novalocal" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4391-0-0-n-29a01ddc69.novalocal' and this object
Jul 10 08:08:04.992706 kubelet[2824]: E0710 08:08:04.991892 2824 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4391-0-0-n-29a01ddc69.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4391-0-0-n-29a01ddc69.novalocal' and this object" logger="UnhandledError"
Jul 10 08:08:04.998904 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31c6c29fc51f2a5e40281820148e5f027eb341aa7f5ee6610cb7ea8f9872fc25-rootfs.mount: Deactivated successfully.
Jul 10 08:08:05.019880 systemd[1]: Created slice kubepods-burstable-pod01999a15_b0d2_4afb_bee2_2fe0206967d2.slice - libcontainer container kubepods-burstable-pod01999a15_b0d2_4afb_bee2_2fe0206967d2.slice.
Jul 10 08:08:05.028030 systemd[1]: Created slice kubepods-besteffort-podebfcfa0b_3df6_4671_b7ec_2f40d76fc497.slice - libcontainer container kubepods-besteffort-podebfcfa0b_3df6_4671_b7ec_2f40d76fc497.slice.
Jul 10 08:08:05.406816 kubelet[2824]: I0710 08:08:05.037687 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64f4becf-45f9-4ea1-b810-64e0105909a1-config-volume\") pod \"coredns-668d6bf9bc-dzmhz\" (UID: \"64f4becf-45f9-4ea1-b810-64e0105909a1\") " pod="kube-system/coredns-668d6bf9bc-dzmhz"
Jul 10 08:08:05.406816 kubelet[2824]: I0710 08:08:05.037731 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/484dc2e9-1fdd-49a0-8de6-26a6311505ad-whisker-ca-bundle\") pod \"whisker-7bb6c98c9-b2xgw\" (UID: \"484dc2e9-1fdd-49a0-8de6-26a6311505ad\") " pod="calico-system/whisker-7bb6c98c9-b2xgw"
Jul 10 08:08:05.406816 kubelet[2824]: I0710 08:08:05.037774 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01999a15-b0d2-4afb-bee2-2fe0206967d2-config-volume\") pod \"coredns-668d6bf9bc-jz74t\" (UID: \"01999a15-b0d2-4afb-bee2-2fe0206967d2\") " pod="kube-system/coredns-668d6bf9bc-jz74t"
Jul 10 08:08:05.406816 kubelet[2824]: I0710 08:08:05.037808 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8af92dae-48c2-42f6-af64-9a1c2fb06ebb-calico-apiserver-certs\") pod \"calico-apiserver-75494f88d7-nbhkp\" (UID: \"8af92dae-48c2-42f6-af64-9a1c2fb06ebb\") " pod="calico-apiserver/calico-apiserver-75494f88d7-nbhkp"
Jul 10 08:08:05.406816 kubelet[2824]: I0710 08:08:05.037884 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm4ld\" (UniqueName: \"kubernetes.io/projected/64f4becf-45f9-4ea1-b810-64e0105909a1-kube-api-access-fm4ld\") pod \"coredns-668d6bf9bc-dzmhz\" (UID: \"64f4becf-45f9-4ea1-b810-64e0105909a1\") " pod="kube-system/coredns-668d6bf9bc-dzmhz"
Jul 10 08:08:05.034535 systemd[1]: Created slice kubepods-besteffort-pod7cfbfed2_71d2_4845_87fb_586f7e82aee0.slice - libcontainer container kubepods-besteffort-pod7cfbfed2_71d2_4845_87fb_586f7e82aee0.slice.
Jul 10 08:08:05.407597 kubelet[2824]: I0710 08:08:05.037910 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/30537349-9698-4e4c-a82b-357050dfe52b-goldmane-key-pair\") pod \"goldmane-768f4c5c69-gxfms\" (UID: \"30537349-9698-4e4c-a82b-357050dfe52b\") " pod="calico-system/goldmane-768f4c5c69-gxfms"
Jul 10 08:08:05.407597 kubelet[2824]: I0710 08:08:05.038269 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2e114b1b-2c96-4efe-a1be-ea79fce4d83b-calico-apiserver-certs\") pod \"calico-apiserver-698b6b4cc7-2glfs\" (UID: \"2e114b1b-2c96-4efe-a1be-ea79fce4d83b\") " pod="calico-apiserver/calico-apiserver-698b6b4cc7-2glfs"
Jul 10 08:08:05.407597 kubelet[2824]: I0710 08:08:05.038330 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30537349-9698-4e4c-a82b-357050dfe52b-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-gxfms\" (UID: \"30537349-9698-4e4c-a82b-357050dfe52b\") " pod="calico-system/goldmane-768f4c5c69-gxfms"
Jul 10 08:08:05.407597 kubelet[2824]: I0710 08:08:05.038355 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtkwc\" (UniqueName: \"kubernetes.io/projected/7cfbfed2-71d2-4845-87fb-586f7e82aee0-kube-api-access-xtkwc\") pod \"calico-apiserver-698b6b4cc7-6lgxs\" (UID: \"7cfbfed2-71d2-4845-87fb-586f7e82aee0\") " pod="calico-apiserver/calico-apiserver-698b6b4cc7-6lgxs"
Jul 10 08:08:05.407597 kubelet[2824]: I0710 08:08:05.038409 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nxpk\" (UniqueName: \"kubernetes.io/projected/30537349-9698-4e4c-a82b-357050dfe52b-kube-api-access-4nxpk\") pod \"goldmane-768f4c5c69-gxfms\" (UID: \"30537349-9698-4e4c-a82b-357050dfe52b\") " pod="calico-system/goldmane-768f4c5c69-gxfms"
Jul 10 08:08:05.041360 systemd[1]: Created slice kubepods-besteffort-pod484dc2e9_1fdd_49a0_8de6_26a6311505ad.slice - libcontainer container kubepods-besteffort-pod484dc2e9_1fdd_49a0_8de6_26a6311505ad.slice.
Jul 10 08:08:05.410729 kubelet[2824]: I0710 08:08:05.038444 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzfk4\" (UniqueName: \"kubernetes.io/projected/01999a15-b0d2-4afb-bee2-2fe0206967d2-kube-api-access-zzfk4\") pod \"coredns-668d6bf9bc-jz74t\" (UID: \"01999a15-b0d2-4afb-bee2-2fe0206967d2\") " pod="kube-system/coredns-668d6bf9bc-jz74t"
Jul 10 08:08:05.410729 kubelet[2824]: I0710 08:08:05.038478 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30537349-9698-4e4c-a82b-357050dfe52b-config\") pod \"goldmane-768f4c5c69-gxfms\" (UID: \"30537349-9698-4e4c-a82b-357050dfe52b\") " pod="calico-system/goldmane-768f4c5c69-gxfms"
Jul 10 08:08:05.410729 kubelet[2824]: I0710 08:08:05.038511 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xnnx\" (UniqueName: \"kubernetes.io/projected/8af92dae-48c2-42f6-af64-9a1c2fb06ebb-kube-api-access-9xnnx\") pod \"calico-apiserver-75494f88d7-nbhkp\" (UID: \"8af92dae-48c2-42f6-af64-9a1c2fb06ebb\") " pod="calico-apiserver/calico-apiserver-75494f88d7-nbhkp"
Jul 10 08:08:05.410729 kubelet[2824]: I0710 08:08:05.038537 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/484dc2e9-1fdd-49a0-8de6-26a6311505ad-whisker-backend-key-pair\") pod \"whisker-7bb6c98c9-b2xgw\" (UID: \"484dc2e9-1fdd-49a0-8de6-26a6311505ad\") " pod="calico-system/whisker-7bb6c98c9-b2xgw"
Jul 10 08:08:05.410729 kubelet[2824]: I0710 08:08:05.039089 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz5ww\" (UniqueName: \"kubernetes.io/projected/484dc2e9-1fdd-49a0-8de6-26a6311505ad-kube-api-access-kz5ww\") pod \"whisker-7bb6c98c9-b2xgw\" (UID: \"484dc2e9-1fdd-49a0-8de6-26a6311505ad\") " pod="calico-system/whisker-7bb6c98c9-b2xgw"
Jul 10 08:08:05.047939 systemd[1]: Created slice kubepods-besteffort-pod2e114b1b_2c96_4efe_a1be_ea79fce4d83b.slice - libcontainer container kubepods-besteffort-pod2e114b1b_2c96_4efe_a1be_ea79fce4d83b.slice.
Jul 10 08:08:05.417455 kubelet[2824]: I0710 08:08:05.039497 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebfcfa0b-3df6-4671-b7ec-2f40d76fc497-tigera-ca-bundle\") pod \"calico-kube-controllers-6cd68b8fff-mshq4\" (UID: \"ebfcfa0b-3df6-4671-b7ec-2f40d76fc497\") " pod="calico-system/calico-kube-controllers-6cd68b8fff-mshq4"
Jul 10 08:08:05.417455 kubelet[2824]: I0710 08:08:05.039536 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjt9p\" (UniqueName: \"kubernetes.io/projected/ebfcfa0b-3df6-4671-b7ec-2f40d76fc497-kube-api-access-qjt9p\") pod \"calico-kube-controllers-6cd68b8fff-mshq4\" (UID: \"ebfcfa0b-3df6-4671-b7ec-2f40d76fc497\") " pod="calico-system/calico-kube-controllers-6cd68b8fff-mshq4"
Jul 10 08:08:05.417455 kubelet[2824]: I0710 08:08:05.039590 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7cfbfed2-71d2-4845-87fb-586f7e82aee0-calico-apiserver-certs\") pod \"calico-apiserver-698b6b4cc7-6lgxs\" (UID: \"7cfbfed2-71d2-4845-87fb-586f7e82aee0\") " pod="calico-apiserver/calico-apiserver-698b6b4cc7-6lgxs"
Jul 10 08:08:05.417455 kubelet[2824]: I0710 08:08:05.039612 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28qpd\" (UniqueName: \"kubernetes.io/projected/2e114b1b-2c96-4efe-a1be-ea79fce4d83b-kube-api-access-28qpd\") pod \"calico-apiserver-698b6b4cc7-2glfs\" (UID: \"2e114b1b-2c96-4efe-a1be-ea79fce4d83b\") " pod="calico-apiserver/calico-apiserver-698b6b4cc7-2glfs"
Jul 10 08:08:05.055482 systemd[1]: Created slice kubepods-besteffort-pod8af92dae_48c2_42f6_af64_9a1c2fb06ebb.slice - libcontainer container kubepods-besteffort-pod8af92dae_48c2_42f6_af64_9a1c2fb06ebb.slice.
Jul 10 08:08:05.064264 systemd[1]: Created slice kubepods-besteffort-pod30537349_9698_4e4c_a82b_357050dfe52b.slice - libcontainer container kubepods-besteffort-pod30537349_9698_4e4c_a82b_357050dfe52b.slice.
Jul 10 08:08:05.730010 containerd[1541]: time="2025-07-10T08:08:05.728847362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-gxfms,Uid:30537349-9698-4e4c-a82b-357050dfe52b,Namespace:calico-system,Attempt:0,}"
Jul 10 08:08:05.740117 containerd[1541]: time="2025-07-10T08:08:05.739939391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dzmhz,Uid:64f4becf-45f9-4ea1-b810-64e0105909a1,Namespace:kube-system,Attempt:0,}"
Jul 10 08:08:05.749257 containerd[1541]: time="2025-07-10T08:08:05.749184209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cd68b8fff-mshq4,Uid:ebfcfa0b-3df6-4671-b7ec-2f40d76fc497,Namespace:calico-system,Attempt:0,}"
Jul 10 08:08:05.752677 containerd[1541]: time="2025-07-10T08:08:05.752569311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jz74t,Uid:01999a15-b0d2-4afb-bee2-2fe0206967d2,Namespace:kube-system,Attempt:0,}"
Jul 10 08:08:05.964523 containerd[1541]: time="2025-07-10T08:08:05.964289966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Jul 10 08:08:06.075012 containerd[1541]: time="2025-07-10T08:08:06.074313476Z" level=error msg="Failed to destroy network for sandbox \"0eca5dc0668f2ed1a917b1b215306f666d5983a22aa06daf6941150134e67f8e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 10 08:08:06.081006 systemd[1]: run-netns-cni\x2d08bf7dae\x2dd6a7\x2d4663\x2d6b26\x2d1e63347cb559.mount: Deactivated successfully.
Jul 10 08:08:06.092113 containerd[1541]: time="2025-07-10T08:08:06.092028164Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-gxfms,Uid:30537349-9698-4e4c-a82b-357050dfe52b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0eca5dc0668f2ed1a917b1b215306f666d5983a22aa06daf6941150134e67f8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 10 08:08:06.092692 kubelet[2824]: E0710 08:08:06.092592 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0eca5dc0668f2ed1a917b1b215306f666d5983a22aa06daf6941150134e67f8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 10 08:08:06.092820 kubelet[2824]: E0710 08:08:06.092761 2824 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0eca5dc0668f2ed1a917b1b215306f666d5983a22aa06daf6941150134e67f8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-gxfms"
Jul 10 08:08:06.092892 kubelet[2824]: E0710 08:08:06.092829 2824 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0eca5dc0668f2ed1a917b1b215306f666d5983a22aa06daf6941150134e67f8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-gxfms"
Jul 10 08:08:06.092925 kubelet[2824]: E0710 08:08:06.092904 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-gxfms_calico-system(30537349-9698-4e4c-a82b-357050dfe52b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-gxfms_calico-system(30537349-9698-4e4c-a82b-357050dfe52b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0eca5dc0668f2ed1a917b1b215306f666d5983a22aa06daf6941150134e67f8e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-gxfms" podUID="30537349-9698-4e4c-a82b-357050dfe52b"
Jul 10 08:08:06.106257 containerd[1541]: time="2025-07-10T08:08:06.106144993Z" level=error msg="Failed to destroy network for sandbox \"761f7ae2e4ad4533d77747ec8a603aaec46d7b3226d11c4ac7f1ed097d2c368a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 10 08:08:06.110313 systemd[1]: run-netns-cni\x2dba699caf\x2de0bc\x2df9b8\x2d9c94\x2dbc96e4ebadc8.mount: Deactivated successfully.
Jul 10 08:08:06.113564 containerd[1541]: time="2025-07-10T08:08:06.113430066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jz74t,Uid:01999a15-b0d2-4afb-bee2-2fe0206967d2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"761f7ae2e4ad4533d77747ec8a603aaec46d7b3226d11c4ac7f1ed097d2c368a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 10 08:08:06.114269 kubelet[2824]: E0710 08:08:06.113752 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"761f7ae2e4ad4533d77747ec8a603aaec46d7b3226d11c4ac7f1ed097d2c368a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 10 08:08:06.114269 kubelet[2824]: E0710 08:08:06.113867 2824 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"761f7ae2e4ad4533d77747ec8a603aaec46d7b3226d11c4ac7f1ed097d2c368a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jz74t"
Jul 10 08:08:06.114269 kubelet[2824]: E0710 08:08:06.113895 2824 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"761f7ae2e4ad4533d77747ec8a603aaec46d7b3226d11c4ac7f1ed097d2c368a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jz74t"
Jul 10 08:08:06.114788 kubelet[2824]: E0710 08:08:06.114010 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jz74t_kube-system(01999a15-b0d2-4afb-bee2-2fe0206967d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jz74t_kube-system(01999a15-b0d2-4afb-bee2-2fe0206967d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"761f7ae2e4ad4533d77747ec8a603aaec46d7b3226d11c4ac7f1ed097d2c368a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jz74t" podUID="01999a15-b0d2-4afb-bee2-2fe0206967d2"
Jul 10 08:08:06.133248 containerd[1541]: time="2025-07-10T08:08:06.133003114Z" level=error msg="Failed to destroy network for sandbox \"30288c9322249602e5532bed8329379976f6650c280638eea08d9feed9149621\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 10 08:08:06.137279 systemd[1]: run-netns-cni\x2dc7a2ae0e\x2d7886\x2d6125\x2d8b36\x2d2d687443a352.mount: Deactivated successfully.
Jul 10 08:08:06.139078 containerd[1541]: time="2025-07-10T08:08:06.138440146Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cd68b8fff-mshq4,Uid:ebfcfa0b-3df6-4671-b7ec-2f40d76fc497,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"30288c9322249602e5532bed8329379976f6650c280638eea08d9feed9149621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:06.139863 kubelet[2824]: E0710 08:08:06.138676 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30288c9322249602e5532bed8329379976f6650c280638eea08d9feed9149621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:06.139863 kubelet[2824]: E0710 08:08:06.138738 2824 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30288c9322249602e5532bed8329379976f6650c280638eea08d9feed9149621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cd68b8fff-mshq4" Jul 10 08:08:06.139863 kubelet[2824]: E0710 08:08:06.138762 2824 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30288c9322249602e5532bed8329379976f6650c280638eea08d9feed9149621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-6cd68b8fff-mshq4" Jul 10 08:08:06.140434 kubelet[2824]: E0710 08:08:06.138842 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cd68b8fff-mshq4_calico-system(ebfcfa0b-3df6-4671-b7ec-2f40d76fc497)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cd68b8fff-mshq4_calico-system(ebfcfa0b-3df6-4671-b7ec-2f40d76fc497)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30288c9322249602e5532bed8329379976f6650c280638eea08d9feed9149621\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cd68b8fff-mshq4" podUID="ebfcfa0b-3df6-4671-b7ec-2f40d76fc497" Jul 10 08:08:06.152595 containerd[1541]: time="2025-07-10T08:08:06.152456872Z" level=error msg="Failed to destroy network for sandbox \"f14a232ab8613b2c5c3f4e4701ebf45ff50c186f839e5bf89155474ad8368642\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:06.154518 containerd[1541]: time="2025-07-10T08:08:06.154484138Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dzmhz,Uid:64f4becf-45f9-4ea1-b810-64e0105909a1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f14a232ab8613b2c5c3f4e4701ebf45ff50c186f839e5bf89155474ad8368642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:06.156176 kubelet[2824]: E0710 08:08:06.156081 2824 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f14a232ab8613b2c5c3f4e4701ebf45ff50c186f839e5bf89155474ad8368642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:06.156176 kubelet[2824]: E0710 08:08:06.156145 2824 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f14a232ab8613b2c5c3f4e4701ebf45ff50c186f839e5bf89155474ad8368642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dzmhz" Jul 10 08:08:06.156297 kubelet[2824]: E0710 08:08:06.156184 2824 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f14a232ab8613b2c5c3f4e4701ebf45ff50c186f839e5bf89155474ad8368642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dzmhz" Jul 10 08:08:06.156297 kubelet[2824]: E0710 08:08:06.156253 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dzmhz_kube-system(64f4becf-45f9-4ea1-b810-64e0105909a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dzmhz_kube-system(64f4becf-45f9-4ea1-b810-64e0105909a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f14a232ab8613b2c5c3f4e4701ebf45ff50c186f839e5bf89155474ad8368642\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dzmhz" podUID="64f4becf-45f9-4ea1-b810-64e0105909a1" Jul 10 08:08:06.157552 systemd[1]: run-netns-cni\x2dabe47fb1\x2ded7f\x2db625\x2d9e6e\x2dd9f27e71ba38.mount: Deactivated successfully. Jul 10 08:08:06.327867 containerd[1541]: time="2025-07-10T08:08:06.327177135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698b6b4cc7-2glfs,Uid:2e114b1b-2c96-4efe-a1be-ea79fce4d83b,Namespace:calico-apiserver,Attempt:0,}" Jul 10 08:08:06.328566 containerd[1541]: time="2025-07-10T08:08:06.328508255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698b6b4cc7-6lgxs,Uid:7cfbfed2-71d2-4845-87fb-586f7e82aee0,Namespace:calico-apiserver,Attempt:0,}" Jul 10 08:08:06.342539 containerd[1541]: time="2025-07-10T08:08:06.342346657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bb6c98c9-b2xgw,Uid:484dc2e9-1fdd-49a0-8de6-26a6311505ad,Namespace:calico-system,Attempt:0,}" Jul 10 08:08:06.343774 containerd[1541]: time="2025-07-10T08:08:06.343667578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75494f88d7-nbhkp,Uid:8af92dae-48c2-42f6-af64-9a1c2fb06ebb,Namespace:calico-apiserver,Attempt:0,}" Jul 10 08:08:06.418839 systemd[1]: Created slice kubepods-besteffort-pod673eda05_b391_4262_883e_c41d9f384dbd.slice - libcontainer container kubepods-besteffort-pod673eda05_b391_4262_883e_c41d9f384dbd.slice. 
Jul 10 08:08:06.423458 containerd[1541]: time="2025-07-10T08:08:06.423408812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-986vz,Uid:673eda05-b391-4262-883e-c41d9f384dbd,Namespace:calico-system,Attempt:0,}" Jul 10 08:08:06.548320 containerd[1541]: time="2025-07-10T08:08:06.548255895Z" level=error msg="Failed to destroy network for sandbox \"50d2c5f82dba0de19a9820497ad46a6e1ff1f808e25a6d95264deb470f4d81d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:06.550072 containerd[1541]: time="2025-07-10T08:08:06.550027327Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75494f88d7-nbhkp,Uid:8af92dae-48c2-42f6-af64-9a1c2fb06ebb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"50d2c5f82dba0de19a9820497ad46a6e1ff1f808e25a6d95264deb470f4d81d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:06.550440 kubelet[2824]: E0710 08:08:06.550276 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50d2c5f82dba0de19a9820497ad46a6e1ff1f808e25a6d95264deb470f4d81d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:06.550440 kubelet[2824]: E0710 08:08:06.550359 2824 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50d2c5f82dba0de19a9820497ad46a6e1ff1f808e25a6d95264deb470f4d81d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75494f88d7-nbhkp" Jul 10 08:08:06.550440 kubelet[2824]: E0710 08:08:06.550389 2824 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50d2c5f82dba0de19a9820497ad46a6e1ff1f808e25a6d95264deb470f4d81d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75494f88d7-nbhkp" Jul 10 08:08:06.552288 kubelet[2824]: E0710 08:08:06.550442 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75494f88d7-nbhkp_calico-apiserver(8af92dae-48c2-42f6-af64-9a1c2fb06ebb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75494f88d7-nbhkp_calico-apiserver(8af92dae-48c2-42f6-af64-9a1c2fb06ebb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50d2c5f82dba0de19a9820497ad46a6e1ff1f808e25a6d95264deb470f4d81d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75494f88d7-nbhkp" podUID="8af92dae-48c2-42f6-af64-9a1c2fb06ebb" Jul 10 08:08:06.559575 containerd[1541]: time="2025-07-10T08:08:06.559424299Z" level=error msg="Failed to destroy network for sandbox \"f1be3ab8628d929566ef9b9018939764884f8e9b09fbaa557c7d4ea542492e6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:06.561462 containerd[1541]: time="2025-07-10T08:08:06.561414704Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698b6b4cc7-2glfs,Uid:2e114b1b-2c96-4efe-a1be-ea79fce4d83b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1be3ab8628d929566ef9b9018939764884f8e9b09fbaa557c7d4ea542492e6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:06.561856 containerd[1541]: time="2025-07-10T08:08:06.561778366Z" level=error msg="Failed to destroy network for sandbox \"60f6a090363cd6459f30616270fe168773538c03ba675bd15bdf9edf9f904541\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:06.562263 kubelet[2824]: E0710 08:08:06.562157 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1be3ab8628d929566ef9b9018939764884f8e9b09fbaa557c7d4ea542492e6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:06.562263 kubelet[2824]: E0710 08:08:06.562224 2824 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1be3ab8628d929566ef9b9018939764884f8e9b09fbaa557c7d4ea542492e6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-698b6b4cc7-2glfs" Jul 10 08:08:06.562263 kubelet[2824]: E0710 08:08:06.562248 2824 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"f1be3ab8628d929566ef9b9018939764884f8e9b09fbaa557c7d4ea542492e6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-698b6b4cc7-2glfs" Jul 10 08:08:06.562456 kubelet[2824]: E0710 08:08:06.562302 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-698b6b4cc7-2glfs_calico-apiserver(2e114b1b-2c96-4efe-a1be-ea79fce4d83b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-698b6b4cc7-2glfs_calico-apiserver(2e114b1b-2c96-4efe-a1be-ea79fce4d83b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1be3ab8628d929566ef9b9018939764884f8e9b09fbaa557c7d4ea542492e6d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-698b6b4cc7-2glfs" podUID="2e114b1b-2c96-4efe-a1be-ea79fce4d83b" Jul 10 08:08:06.564839 containerd[1541]: time="2025-07-10T08:08:06.564528157Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698b6b4cc7-6lgxs,Uid:7cfbfed2-71d2-4845-87fb-586f7e82aee0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"60f6a090363cd6459f30616270fe168773538c03ba675bd15bdf9edf9f904541\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:06.566346 kubelet[2824]: E0710 08:08:06.565402 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"60f6a090363cd6459f30616270fe168773538c03ba675bd15bdf9edf9f904541\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:06.566346 kubelet[2824]: E0710 08:08:06.565574 2824 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60f6a090363cd6459f30616270fe168773538c03ba675bd15bdf9edf9f904541\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-698b6b4cc7-6lgxs" Jul 10 08:08:06.566346 kubelet[2824]: E0710 08:08:06.565619 2824 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60f6a090363cd6459f30616270fe168773538c03ba675bd15bdf9edf9f904541\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-698b6b4cc7-6lgxs" Jul 10 08:08:06.566527 kubelet[2824]: E0710 08:08:06.565670 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-698b6b4cc7-6lgxs_calico-apiserver(7cfbfed2-71d2-4845-87fb-586f7e82aee0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-698b6b4cc7-6lgxs_calico-apiserver(7cfbfed2-71d2-4845-87fb-586f7e82aee0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60f6a090363cd6459f30616270fe168773538c03ba675bd15bdf9edf9f904541\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-698b6b4cc7-6lgxs" podUID="7cfbfed2-71d2-4845-87fb-586f7e82aee0" Jul 10 08:08:06.574360 containerd[1541]: time="2025-07-10T08:08:06.574307788Z" level=error msg="Failed to destroy network for sandbox \"ab2e0c1e4bc5a7d812dfa3e57f7fb9a578f6a6c65a334c20139eb87f0a37754a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:06.578151 containerd[1541]: time="2025-07-10T08:08:06.577912531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bb6c98c9-b2xgw,Uid:484dc2e9-1fdd-49a0-8de6-26a6311505ad,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab2e0c1e4bc5a7d812dfa3e57f7fb9a578f6a6c65a334c20139eb87f0a37754a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:06.580355 kubelet[2824]: E0710 08:08:06.580125 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab2e0c1e4bc5a7d812dfa3e57f7fb9a578f6a6c65a334c20139eb87f0a37754a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:06.580355 kubelet[2824]: E0710 08:08:06.580212 2824 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab2e0c1e4bc5a7d812dfa3e57f7fb9a578f6a6c65a334c20139eb87f0a37754a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-7bb6c98c9-b2xgw" Jul 10 08:08:06.580355 kubelet[2824]: E0710 08:08:06.580237 2824 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab2e0c1e4bc5a7d812dfa3e57f7fb9a578f6a6c65a334c20139eb87f0a37754a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bb6c98c9-b2xgw" Jul 10 08:08:06.581129 kubelet[2824]: E0710 08:08:06.580289 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7bb6c98c9-b2xgw_calico-system(484dc2e9-1fdd-49a0-8de6-26a6311505ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7bb6c98c9-b2xgw_calico-system(484dc2e9-1fdd-49a0-8de6-26a6311505ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab2e0c1e4bc5a7d812dfa3e57f7fb9a578f6a6c65a334c20139eb87f0a37754a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bb6c98c9-b2xgw" podUID="484dc2e9-1fdd-49a0-8de6-26a6311505ad" Jul 10 08:08:06.601745 containerd[1541]: time="2025-07-10T08:08:06.601600499Z" level=error msg="Failed to destroy network for sandbox \"6d89e4796ef33d5ebe196022ea64be54436a1c1abde210026f96528b6b719cc3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:06.603787 containerd[1541]: time="2025-07-10T08:08:06.603728329Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-986vz,Uid:673eda05-b391-4262-883e-c41d9f384dbd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"6d89e4796ef33d5ebe196022ea64be54436a1c1abde210026f96528b6b719cc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:06.604209 kubelet[2824]: E0710 08:08:06.604145 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d89e4796ef33d5ebe196022ea64be54436a1c1abde210026f96528b6b719cc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:06.604308 kubelet[2824]: E0710 08:08:06.604241 2824 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d89e4796ef33d5ebe196022ea64be54436a1c1abde210026f96528b6b719cc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-986vz" Jul 10 08:08:06.604308 kubelet[2824]: E0710 08:08:06.604265 2824 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d89e4796ef33d5ebe196022ea64be54436a1c1abde210026f96528b6b719cc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-986vz" Jul 10 08:08:06.604380 kubelet[2824]: E0710 08:08:06.604314 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-986vz_calico-system(673eda05-b391-4262-883e-c41d9f384dbd)\" with CreatePodSandboxError: \"Failed 
to create sandbox for pod \\\"csi-node-driver-986vz_calico-system(673eda05-b391-4262-883e-c41d9f384dbd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d89e4796ef33d5ebe196022ea64be54436a1c1abde210026f96528b6b719cc3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-986vz" podUID="673eda05-b391-4262-883e-c41d9f384dbd" Jul 10 08:08:07.778938 kubelet[2824]: I0710 08:08:07.778416 2824 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 08:08:17.411906 containerd[1541]: time="2025-07-10T08:08:17.411292701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cd68b8fff-mshq4,Uid:ebfcfa0b-3df6-4671-b7ec-2f40d76fc497,Namespace:calico-system,Attempt:0,}" Jul 10 08:08:17.419112 containerd[1541]: time="2025-07-10T08:08:17.418913268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dzmhz,Uid:64f4becf-45f9-4ea1-b810-64e0105909a1,Namespace:kube-system,Attempt:0,}" Jul 10 08:08:17.754118 containerd[1541]: time="2025-07-10T08:08:17.752127918Z" level=error msg="Failed to destroy network for sandbox \"24f6e7e8e75cce92358ddfcbc36211091ffa0c99fe9c4d84104f4bce9e5b0195\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:17.757197 containerd[1541]: time="2025-07-10T08:08:17.757154756Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dzmhz,Uid:64f4becf-45f9-4ea1-b810-64e0105909a1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"24f6e7e8e75cce92358ddfcbc36211091ffa0c99fe9c4d84104f4bce9e5b0195\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:17.757408 systemd[1]: run-netns-cni\x2d22a03d3c\x2d5780\x2d94f7\x2dc038\x2d2ba8d1b31af6.mount: Deactivated successfully. Jul 10 08:08:17.758408 kubelet[2824]: E0710 08:08:17.758017 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24f6e7e8e75cce92358ddfcbc36211091ffa0c99fe9c4d84104f4bce9e5b0195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:17.758408 kubelet[2824]: E0710 08:08:17.758125 2824 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24f6e7e8e75cce92358ddfcbc36211091ffa0c99fe9c4d84104f4bce9e5b0195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dzmhz" Jul 10 08:08:17.758408 kubelet[2824]: E0710 08:08:17.758161 2824 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24f6e7e8e75cce92358ddfcbc36211091ffa0c99fe9c4d84104f4bce9e5b0195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dzmhz" Jul 10 08:08:17.758809 kubelet[2824]: E0710 08:08:17.758234 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dzmhz_kube-system(64f4becf-45f9-4ea1-b810-64e0105909a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-dzmhz_kube-system(64f4becf-45f9-4ea1-b810-64e0105909a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24f6e7e8e75cce92358ddfcbc36211091ffa0c99fe9c4d84104f4bce9e5b0195\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dzmhz" podUID="64f4becf-45f9-4ea1-b810-64e0105909a1" Jul 10 08:08:17.767521 containerd[1541]: time="2025-07-10T08:08:17.767245495Z" level=error msg="Failed to destroy network for sandbox \"2674f077012a2d5dae15638800ce619aae54329115d7bcf968d60463200f7562\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:17.771566 systemd[1]: run-netns-cni\x2d12ed842a\x2db918\x2d7c75\x2dad0a\x2d388cb36da31b.mount: Deactivated successfully. 
Jul 10 08:08:17.772592 containerd[1541]: time="2025-07-10T08:08:17.771647039Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cd68b8fff-mshq4,Uid:ebfcfa0b-3df6-4671-b7ec-2f40d76fc497,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2674f077012a2d5dae15638800ce619aae54329115d7bcf968d60463200f7562\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:17.772882 kubelet[2824]: E0710 08:08:17.772811 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2674f077012a2d5dae15638800ce619aae54329115d7bcf968d60463200f7562\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:17.773444 kubelet[2824]: E0710 08:08:17.772877 2824 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2674f077012a2d5dae15638800ce619aae54329115d7bcf968d60463200f7562\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cd68b8fff-mshq4" Jul 10 08:08:17.773444 kubelet[2824]: E0710 08:08:17.772915 2824 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2674f077012a2d5dae15638800ce619aae54329115d7bcf968d60463200f7562\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-6cd68b8fff-mshq4" Jul 10 08:08:17.774406 kubelet[2824]: E0710 08:08:17.774073 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cd68b8fff-mshq4_calico-system(ebfcfa0b-3df6-4671-b7ec-2f40d76fc497)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cd68b8fff-mshq4_calico-system(ebfcfa0b-3df6-4671-b7ec-2f40d76fc497)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2674f077012a2d5dae15638800ce619aae54329115d7bcf968d60463200f7562\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cd68b8fff-mshq4" podUID="ebfcfa0b-3df6-4671-b7ec-2f40d76fc497" Jul 10 08:08:18.407510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3048659503.mount: Deactivated successfully. 
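Every sandbox failure above is the same underlying condition: the Calico CNI plugin cannot `stat /var/lib/calico/nodename`, which the calico/node container writes only once it is running with that host path mounted (the entries that follow show the `calico/node` image pull completing and the container starting, after which this clears). A minimal diagnostic sketch, assuming shell access on the node; the function name and the optional path argument are illustrative, not from the log:

```shell
#!/bin/sh
# Check for the file the Calico CNI plugin stats before wiring up a pod sandbox.
# Absence means calico/node has not yet written its node name, so every
# CNI add/delete fails exactly as in the log above.
check_calico_nodename() {
    # Default to the real path; accept an override for testing.
    f="${1:-/var/lib/calico/nodename}"
    if [ -f "$f" ]; then
        echo "present: $(cat "$f")"
    else
        echo "missing: $f (is calico/node running with /var/lib/calico mounted?)"
    fi
}
```

On an affected node one might run `check_calico_nodename` directly; a `missing:` result points at the calico-node DaemonSet pod (not yet scheduled, still pulling its image, or missing the `/var/lib/calico` hostPath mount) rather than at the failing workload pods.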
Jul 10 08:08:18.411298 containerd[1541]: time="2025-07-10T08:08:18.411218365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jz74t,Uid:01999a15-b0d2-4afb-bee2-2fe0206967d2,Namespace:kube-system,Attempt:0,}" Jul 10 08:08:18.411507 containerd[1541]: time="2025-07-10T08:08:18.411470380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-gxfms,Uid:30537349-9698-4e4c-a82b-357050dfe52b,Namespace:calico-system,Attempt:0,}" Jul 10 08:08:18.411622 containerd[1541]: time="2025-07-10T08:08:18.411587085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bb6c98c9-b2xgw,Uid:484dc2e9-1fdd-49a0-8de6-26a6311505ad,Namespace:calico-system,Attempt:0,}" Jul 10 08:08:18.465092 containerd[1541]: time="2025-07-10T08:08:18.464182057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:08:18.476986 containerd[1541]: time="2025-07-10T08:08:18.476663056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 10 08:08:18.492230 containerd[1541]: time="2025-07-10T08:08:18.487938416Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:08:18.502027 containerd[1541]: time="2025-07-10T08:08:18.501657566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:08:18.506212 containerd[1541]: time="2025-07-10T08:08:18.506169221Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 12.541836242s" Jul 10 08:08:18.506212 containerd[1541]: time="2025-07-10T08:08:18.506204800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 10 08:08:18.560390 containerd[1541]: time="2025-07-10T08:08:18.560317570Z" level=info msg="CreateContainer within sandbox \"02f7c23f03074de767b4724d1ca7768567ce018164f4656508a181860f280c8b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 10 08:08:18.624655 containerd[1541]: time="2025-07-10T08:08:18.624597902Z" level=info msg="Container 88f1ac5ff1454222c08bd22caa0f4fc8adf44e93c2488aa119f523dcbab21310: CDI devices from CRI Config.CDIDevices: []" Jul 10 08:08:18.632065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2000862835.mount: Deactivated successfully. Jul 10 08:08:18.659136 containerd[1541]: time="2025-07-10T08:08:18.658984382Z" level=info msg="CreateContainer within sandbox \"02f7c23f03074de767b4724d1ca7768567ce018164f4656508a181860f280c8b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"88f1ac5ff1454222c08bd22caa0f4fc8adf44e93c2488aa119f523dcbab21310\"" Jul 10 08:08:18.661887 containerd[1541]: time="2025-07-10T08:08:18.661842709Z" level=info msg="StartContainer for \"88f1ac5ff1454222c08bd22caa0f4fc8adf44e93c2488aa119f523dcbab21310\"" Jul 10 08:08:18.668881 containerd[1541]: time="2025-07-10T08:08:18.668599210Z" level=info msg="connecting to shim 88f1ac5ff1454222c08bd22caa0f4fc8adf44e93c2488aa119f523dcbab21310" address="unix:///run/containerd/s/78d0384cded1a545200ed3101d1b6a2bec36d3f994155fc1707ed029633dfd6c" protocol=ttrpc version=3 Jul 10 08:08:18.683163 containerd[1541]: time="2025-07-10T08:08:18.683077610Z" level=error msg="Failed to destroy network for sandbox 
\"0febad2645faacba074af557851f3f445118a78871ada02a8617a79a69ee027d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:18.690525 containerd[1541]: time="2025-07-10T08:08:18.690465145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-gxfms,Uid:30537349-9698-4e4c-a82b-357050dfe52b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0febad2645faacba074af557851f3f445118a78871ada02a8617a79a69ee027d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:18.690753 kubelet[2824]: E0710 08:08:18.690716 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0febad2645faacba074af557851f3f445118a78871ada02a8617a79a69ee027d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:18.690817 kubelet[2824]: E0710 08:08:18.690786 2824 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0febad2645faacba074af557851f3f445118a78871ada02a8617a79a69ee027d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-gxfms" Jul 10 08:08:18.690862 kubelet[2824]: E0710 08:08:18.690813 2824 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0febad2645faacba074af557851f3f445118a78871ada02a8617a79a69ee027d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-gxfms" Jul 10 08:08:18.690901 kubelet[2824]: E0710 08:08:18.690865 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-gxfms_calico-system(30537349-9698-4e4c-a82b-357050dfe52b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-gxfms_calico-system(30537349-9698-4e4c-a82b-357050dfe52b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0febad2645faacba074af557851f3f445118a78871ada02a8617a79a69ee027d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-gxfms" podUID="30537349-9698-4e4c-a82b-357050dfe52b" Jul 10 08:08:18.714135 containerd[1541]: time="2025-07-10T08:08:18.714058842Z" level=error msg="Failed to destroy network for sandbox \"5265910a65e0709f88631cc87a51ac2956173c01307a237838e99a9603434d45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:18.717467 containerd[1541]: time="2025-07-10T08:08:18.717415196Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jz74t,Uid:01999a15-b0d2-4afb-bee2-2fe0206967d2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5265910a65e0709f88631cc87a51ac2956173c01307a237838e99a9603434d45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:18.718263 kubelet[2824]: E0710 08:08:18.718087 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5265910a65e0709f88631cc87a51ac2956173c01307a237838e99a9603434d45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:18.718616 kubelet[2824]: E0710 08:08:18.718577 2824 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5265910a65e0709f88631cc87a51ac2956173c01307a237838e99a9603434d45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jz74t" Jul 10 08:08:18.718686 kubelet[2824]: E0710 08:08:18.718615 2824 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5265910a65e0709f88631cc87a51ac2956173c01307a237838e99a9603434d45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jz74t" Jul 10 08:08:18.718727 kubelet[2824]: E0710 08:08:18.718690 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jz74t_kube-system(01999a15-b0d2-4afb-bee2-2fe0206967d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jz74t_kube-system(01999a15-b0d2-4afb-bee2-2fe0206967d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"5265910a65e0709f88631cc87a51ac2956173c01307a237838e99a9603434d45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jz74t" podUID="01999a15-b0d2-4afb-bee2-2fe0206967d2" Jul 10 08:08:18.719342 containerd[1541]: time="2025-07-10T08:08:18.719176372Z" level=error msg="Failed to destroy network for sandbox \"4cbb1aaae93c06002dbc97797f4e309514f5cacd526155419417551a2ddebebb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:18.724101 containerd[1541]: time="2025-07-10T08:08:18.722924860Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bb6c98c9-b2xgw,Uid:484dc2e9-1fdd-49a0-8de6-26a6311505ad,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cbb1aaae93c06002dbc97797f4e309514f5cacd526155419417551a2ddebebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:18.724434 kubelet[2824]: E0710 08:08:18.723335 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cbb1aaae93c06002dbc97797f4e309514f5cacd526155419417551a2ddebebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 08:08:18.724434 kubelet[2824]: E0710 08:08:18.723398 2824 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4cbb1aaae93c06002dbc97797f4e309514f5cacd526155419417551a2ddebebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bb6c98c9-b2xgw" Jul 10 08:08:18.724434 kubelet[2824]: E0710 08:08:18.723433 2824 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cbb1aaae93c06002dbc97797f4e309514f5cacd526155419417551a2ddebebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bb6c98c9-b2xgw" Jul 10 08:08:18.725778 kubelet[2824]: E0710 08:08:18.725734 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7bb6c98c9-b2xgw_calico-system(484dc2e9-1fdd-49a0-8de6-26a6311505ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7bb6c98c9-b2xgw_calico-system(484dc2e9-1fdd-49a0-8de6-26a6311505ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4cbb1aaae93c06002dbc97797f4e309514f5cacd526155419417551a2ddebebb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bb6c98c9-b2xgw" podUID="484dc2e9-1fdd-49a0-8de6-26a6311505ad" Jul 10 08:08:18.760166 systemd[1]: Started cri-containerd-88f1ac5ff1454222c08bd22caa0f4fc8adf44e93c2488aa119f523dcbab21310.scope - libcontainer container 88f1ac5ff1454222c08bd22caa0f4fc8adf44e93c2488aa119f523dcbab21310. 
Jul 10 08:08:18.836126 containerd[1541]: time="2025-07-10T08:08:18.835926014Z" level=info msg="StartContainer for \"88f1ac5ff1454222c08bd22caa0f4fc8adf44e93c2488aa119f523dcbab21310\" returns successfully" Jul 10 08:08:18.990322 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 10 08:08:18.990492 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 10 08:08:19.092003 kubelet[2824]: I0710 08:08:19.091891 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jr5b8" podStartSLOduration=1.453357792 podStartE2EDuration="31.09185755s" podCreationTimestamp="2025-07-10 08:07:48 +0000 UTC" firstStartedPulling="2025-07-10 08:07:48.869208011 +0000 UTC m=+23.720493415" lastFinishedPulling="2025-07-10 08:08:18.507707769 +0000 UTC m=+53.358993173" observedRunningTime="2025-07-10 08:08:19.088575681 +0000 UTC m=+53.939861106" watchObservedRunningTime="2025-07-10 08:08:19.09185755 +0000 UTC m=+53.943142954" Jul 10 08:08:19.256429 kubelet[2824]: I0710 08:08:19.254821 2824 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/484dc2e9-1fdd-49a0-8de6-26a6311505ad-whisker-backend-key-pair\") pod \"484dc2e9-1fdd-49a0-8de6-26a6311505ad\" (UID: \"484dc2e9-1fdd-49a0-8de6-26a6311505ad\") " Jul 10 08:08:19.256658 kubelet[2824]: I0710 08:08:19.256637 2824 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz5ww\" (UniqueName: \"kubernetes.io/projected/484dc2e9-1fdd-49a0-8de6-26a6311505ad-kube-api-access-kz5ww\") pod \"484dc2e9-1fdd-49a0-8de6-26a6311505ad\" (UID: \"484dc2e9-1fdd-49a0-8de6-26a6311505ad\") " Jul 10 08:08:19.256769 kubelet[2824]: I0710 08:08:19.256754 2824 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/484dc2e9-1fdd-49a0-8de6-26a6311505ad-whisker-ca-bundle\") pod \"484dc2e9-1fdd-49a0-8de6-26a6311505ad\" (UID: \"484dc2e9-1fdd-49a0-8de6-26a6311505ad\") " Jul 10 08:08:19.257559 kubelet[2824]: I0710 08:08:19.257527 2824 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/484dc2e9-1fdd-49a0-8de6-26a6311505ad-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "484dc2e9-1fdd-49a0-8de6-26a6311505ad" (UID: "484dc2e9-1fdd-49a0-8de6-26a6311505ad"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 08:08:19.268012 kubelet[2824]: I0710 08:08:19.267822 2824 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/484dc2e9-1fdd-49a0-8de6-26a6311505ad-kube-api-access-kz5ww" (OuterVolumeSpecName: "kube-api-access-kz5ww") pod "484dc2e9-1fdd-49a0-8de6-26a6311505ad" (UID: "484dc2e9-1fdd-49a0-8de6-26a6311505ad"). InnerVolumeSpecName "kube-api-access-kz5ww". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 08:08:19.269678 kubelet[2824]: I0710 08:08:19.269626 2824 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/484dc2e9-1fdd-49a0-8de6-26a6311505ad-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "484dc2e9-1fdd-49a0-8de6-26a6311505ad" (UID: "484dc2e9-1fdd-49a0-8de6-26a6311505ad"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 08:08:19.358044 kubelet[2824]: I0710 08:08:19.357726 2824 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/484dc2e9-1fdd-49a0-8de6-26a6311505ad-whisker-ca-bundle\") on node \"ci-4391-0-0-n-29a01ddc69.novalocal\" DevicePath \"\"" Jul 10 08:08:19.358044 kubelet[2824]: I0710 08:08:19.358000 2824 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/484dc2e9-1fdd-49a0-8de6-26a6311505ad-whisker-backend-key-pair\") on node \"ci-4391-0-0-n-29a01ddc69.novalocal\" DevicePath \"\"" Jul 10 08:08:19.358044 kubelet[2824]: I0710 08:08:19.358017 2824 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kz5ww\" (UniqueName: \"kubernetes.io/projected/484dc2e9-1fdd-49a0-8de6-26a6311505ad-kube-api-access-kz5ww\") on node \"ci-4391-0-0-n-29a01ddc69.novalocal\" DevicePath \"\"" Jul 10 08:08:19.418103 systemd[1]: Removed slice kubepods-besteffort-pod484dc2e9_1fdd_49a0_8de6_26a6311505ad.slice - libcontainer container kubepods-besteffort-pod484dc2e9_1fdd_49a0_8de6_26a6311505ad.slice. Jul 10 08:08:19.519173 systemd[1]: run-netns-cni\x2d7537dacc\x2d0e19\x2dc700\x2d84aa\x2d2d2218c10138.mount: Deactivated successfully. Jul 10 08:08:19.519292 systemd[1]: run-netns-cni\x2d54b67b26\x2da09b\x2d8a0e\x2daf21\x2d7a2428a7ae31.mount: Deactivated successfully. Jul 10 08:08:19.519373 systemd[1]: run-netns-cni\x2dcf28c0dd\x2d524e\x2d07c8\x2de0a9\x2dfd16a6ac1b77.mount: Deactivated successfully. Jul 10 08:08:19.519455 systemd[1]: var-lib-kubelet-pods-484dc2e9\x2d1fdd\x2d49a0\x2d8de6\x2d26a6311505ad-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 10 08:08:19.519572 systemd[1]: var-lib-kubelet-pods-484dc2e9\x2d1fdd\x2d49a0\x2d8de6\x2d26a6311505ad-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkz5ww.mount: Deactivated successfully. 
Jul 10 08:08:20.358305 systemd[1]: Created slice kubepods-besteffort-pod6e6d5318_fff7_4dce_8824_2ef8c9a97985.slice - libcontainer container kubepods-besteffort-pod6e6d5318_fff7_4dce_8824_2ef8c9a97985.slice. Jul 10 08:08:20.366303 kubelet[2824]: I0710 08:08:20.366214 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e6d5318-fff7-4dce-8824-2ef8c9a97985-whisker-ca-bundle\") pod \"whisker-74b599d7df-4qd72\" (UID: \"6e6d5318-fff7-4dce-8824-2ef8c9a97985\") " pod="calico-system/whisker-74b599d7df-4qd72" Jul 10 08:08:20.369660 kubelet[2824]: I0710 08:08:20.368155 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8qpx\" (UniqueName: \"kubernetes.io/projected/6e6d5318-fff7-4dce-8824-2ef8c9a97985-kube-api-access-d8qpx\") pod \"whisker-74b599d7df-4qd72\" (UID: \"6e6d5318-fff7-4dce-8824-2ef8c9a97985\") " pod="calico-system/whisker-74b599d7df-4qd72" Jul 10 08:08:20.369660 kubelet[2824]: I0710 08:08:20.368242 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6e6d5318-fff7-4dce-8824-2ef8c9a97985-whisker-backend-key-pair\") pod \"whisker-74b599d7df-4qd72\" (UID: \"6e6d5318-fff7-4dce-8824-2ef8c9a97985\") " pod="calico-system/whisker-74b599d7df-4qd72" Jul 10 08:08:20.410976 containerd[1541]: time="2025-07-10T08:08:20.410212387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698b6b4cc7-2glfs,Uid:2e114b1b-2c96-4efe-a1be-ea79fce4d83b,Namespace:calico-apiserver,Attempt:0,}" Jul 10 08:08:20.670925 containerd[1541]: time="2025-07-10T08:08:20.670837873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74b599d7df-4qd72,Uid:6e6d5318-fff7-4dce-8824-2ef8c9a97985,Namespace:calico-system,Attempt:0,}" Jul 10 08:08:20.698790 systemd-networkd[1456]: 
cali3fedde4012d: Link UP Jul 10 08:08:20.699600 systemd-networkd[1456]: cali3fedde4012d: Gained carrier Jul 10 08:08:20.745928 containerd[1541]: 2025-07-10 08:08:20.459 [INFO][4143] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 08:08:20.745928 containerd[1541]: 2025-07-10 08:08:20.541 [INFO][4143] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--2glfs-eth0 calico-apiserver-698b6b4cc7- calico-apiserver 2e114b1b-2c96-4efe-a1be-ea79fce4d83b 843 0 2025-07-10 08:07:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:698b6b4cc7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4391-0-0-n-29a01ddc69.novalocal calico-apiserver-698b6b4cc7-2glfs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3fedde4012d [] [] }} ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Namespace="calico-apiserver" Pod="calico-apiserver-698b6b4cc7-2glfs" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--2glfs-" Jul 10 08:08:20.745928 containerd[1541]: 2025-07-10 08:08:20.541 [INFO][4143] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Namespace="calico-apiserver" Pod="calico-apiserver-698b6b4cc7-2glfs" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--2glfs-eth0" Jul 10 08:08:20.745928 containerd[1541]: 2025-07-10 08:08:20.604 [INFO][4157] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" 
HandleID="k8s-pod-network.7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--2glfs-eth0" Jul 10 08:08:20.745928 containerd[1541]: 2025-07-10 08:08:20.604 [INFO][4157] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" HandleID="k8s-pod-network.7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--2glfs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cef30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4391-0-0-n-29a01ddc69.novalocal", "pod":"calico-apiserver-698b6b4cc7-2glfs", "timestamp":"2025-07-10 08:08:20.604587082 +0000 UTC"}, Hostname:"ci-4391-0-0-n-29a01ddc69.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 08:08:20.745928 containerd[1541]: 2025-07-10 08:08:20.605 [INFO][4157] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 08:08:20.745928 containerd[1541]: 2025-07-10 08:08:20.605 [INFO][4157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 08:08:20.745928 containerd[1541]: 2025-07-10 08:08:20.605 [INFO][4157] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4391-0-0-n-29a01ddc69.novalocal' Jul 10 08:08:20.745928 containerd[1541]: 2025-07-10 08:08:20.614 [INFO][4157] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:20.745928 containerd[1541]: 2025-07-10 08:08:20.621 [INFO][4157] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:20.745928 containerd[1541]: 2025-07-10 08:08:20.627 [INFO][4157] ipam/ipam.go 511: Trying affinity for 192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:20.745928 containerd[1541]: 2025-07-10 08:08:20.629 [INFO][4157] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:20.745928 containerd[1541]: 2025-07-10 08:08:20.631 [INFO][4157] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:20.745928 containerd[1541]: 2025-07-10 08:08:20.632 [INFO][4157] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.0/26 handle="k8s-pod-network.7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:20.745928 containerd[1541]: 2025-07-10 08:08:20.633 [INFO][4157] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658 Jul 10 08:08:20.745928 containerd[1541]: 2025-07-10 08:08:20.641 [INFO][4157] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.0/26 handle="k8s-pod-network.7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:20.745928 
containerd[1541]: 2025-07-10 08:08:20.649 [INFO][4157] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.1/26] block=192.168.95.0/26 handle="k8s-pod-network.7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:20.745928 containerd[1541]: 2025-07-10 08:08:20.650 [INFO][4157] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.1/26] handle="k8s-pod-network.7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:20.745928 containerd[1541]: 2025-07-10 08:08:20.650 [INFO][4157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 08:08:20.746703 containerd[1541]: 2025-07-10 08:08:20.650 [INFO][4157] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.1/26] IPv6=[] ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" HandleID="k8s-pod-network.7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--2glfs-eth0" Jul 10 08:08:20.746703 containerd[1541]: 2025-07-10 08:08:20.662 [INFO][4143] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Namespace="calico-apiserver" Pod="calico-apiserver-698b6b4cc7-2glfs" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--2glfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--2glfs-eth0", GenerateName:"calico-apiserver-698b6b4cc7-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e114b1b-2c96-4efe-a1be-ea79fce4d83b", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 8, 7, 44, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"698b6b4cc7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-29a01ddc69.novalocal", ContainerID:"", Pod:"calico-apiserver-698b6b4cc7-2glfs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3fedde4012d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 08:08:20.746703 containerd[1541]: 2025-07-10 08:08:20.662 [INFO][4143] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.1/32] ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Namespace="calico-apiserver" Pod="calico-apiserver-698b6b4cc7-2glfs" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--2glfs-eth0" Jul 10 08:08:20.746703 containerd[1541]: 2025-07-10 08:08:20.662 [INFO][4143] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3fedde4012d ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Namespace="calico-apiserver" Pod="calico-apiserver-698b6b4cc7-2glfs" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--2glfs-eth0" Jul 10 08:08:20.746703 containerd[1541]: 2025-07-10 08:08:20.700 [INFO][4143] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Namespace="calico-apiserver" Pod="calico-apiserver-698b6b4cc7-2glfs" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--2glfs-eth0" Jul 10 08:08:20.746703 containerd[1541]: 2025-07-10 08:08:20.702 [INFO][4143] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Namespace="calico-apiserver" Pod="calico-apiserver-698b6b4cc7-2glfs" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--2glfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--2glfs-eth0", GenerateName:"calico-apiserver-698b6b4cc7-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e114b1b-2c96-4efe-a1be-ea79fce4d83b", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 8, 7, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"698b6b4cc7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-29a01ddc69.novalocal", ContainerID:"7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658", Pod:"calico-apiserver-698b6b4cc7-2glfs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.1/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3fedde4012d", MAC:"be:4d:00:61:67:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 08:08:20.746991 containerd[1541]: 2025-07-10 08:08:20.736 [INFO][4143] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Namespace="calico-apiserver" Pod="calico-apiserver-698b6b4cc7-2glfs" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--2glfs-eth0" Jul 10 08:08:20.847523 containerd[1541]: time="2025-07-10T08:08:20.846872941Z" level=info msg="connecting to shim 7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" address="unix:///run/containerd/s/5f9b7f757d1b25367a901d8537360d297fc070a9e7234475423eb8c4c1f33fce" namespace=k8s.io protocol=ttrpc version=3 Jul 10 08:08:20.962294 systemd[1]: Started cri-containerd-7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658.scope - libcontainer container 7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658. 
Jul 10 08:08:21.113311 systemd-networkd[1456]: cali9af20afa4eb: Link UP Jul 10 08:08:21.115902 systemd-networkd[1456]: cali9af20afa4eb: Gained carrier Jul 10 08:08:21.229214 containerd[1541]: time="2025-07-10T08:08:21.229075145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698b6b4cc7-2glfs,Uid:2e114b1b-2c96-4efe-a1be-ea79fce4d83b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658\"" Jul 10 08:08:21.236662 containerd[1541]: time="2025-07-10T08:08:21.236349964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 08:08:21.239667 containerd[1541]: 2025-07-10 08:08:20.797 [INFO][4165] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 08:08:21.239667 containerd[1541]: 2025-07-10 08:08:20.825 [INFO][4165] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4391--0--0--n--29a01ddc69.novalocal-k8s-whisker--74b599d7df--4qd72-eth0 whisker-74b599d7df- calico-system 6e6d5318-fff7-4dce-8824-2ef8c9a97985 934 0 2025-07-10 08:08:20 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:74b599d7df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4391-0-0-n-29a01ddc69.novalocal whisker-74b599d7df-4qd72 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9af20afa4eb [] [] }} ContainerID="84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263" Namespace="calico-system" Pod="whisker-74b599d7df-4qd72" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-whisker--74b599d7df--4qd72-" Jul 10 08:08:21.239667 containerd[1541]: 2025-07-10 08:08:20.826 [INFO][4165] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263" Namespace="calico-system" 
Pod="whisker-74b599d7df-4qd72" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-whisker--74b599d7df--4qd72-eth0" Jul 10 08:08:21.239667 containerd[1541]: 2025-07-10 08:08:20.992 [INFO][4243] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263" HandleID="k8s-pod-network.84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-whisker--74b599d7df--4qd72-eth0" Jul 10 08:08:21.239667 containerd[1541]: 2025-07-10 08:08:20.992 [INFO][4243] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263" HandleID="k8s-pod-network.84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-whisker--74b599d7df--4qd72-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000353000), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4391-0-0-n-29a01ddc69.novalocal", "pod":"whisker-74b599d7df-4qd72", "timestamp":"2025-07-10 08:08:20.989106555 +0000 UTC"}, Hostname:"ci-4391-0-0-n-29a01ddc69.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 08:08:21.239667 containerd[1541]: 2025-07-10 08:08:20.992 [INFO][4243] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 08:08:21.239667 containerd[1541]: 2025-07-10 08:08:20.992 [INFO][4243] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 08:08:21.239667 containerd[1541]: 2025-07-10 08:08:20.992 [INFO][4243] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4391-0-0-n-29a01ddc69.novalocal' Jul 10 08:08:21.239667 containerd[1541]: 2025-07-10 08:08:21.021 [INFO][4243] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:21.239667 containerd[1541]: 2025-07-10 08:08:21.032 [INFO][4243] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:21.239667 containerd[1541]: 2025-07-10 08:08:21.044 [INFO][4243] ipam/ipam.go 511: Trying affinity for 192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:21.239667 containerd[1541]: 2025-07-10 08:08:21.048 [INFO][4243] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:21.239667 containerd[1541]: 2025-07-10 08:08:21.052 [INFO][4243] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:21.239667 containerd[1541]: 2025-07-10 08:08:21.052 [INFO][4243] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.0/26 handle="k8s-pod-network.84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:21.239667 containerd[1541]: 2025-07-10 08:08:21.055 [INFO][4243] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263 Jul 10 08:08:21.239667 containerd[1541]: 2025-07-10 08:08:21.066 [INFO][4243] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.0/26 handle="k8s-pod-network.84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:21.239667 
containerd[1541]: 2025-07-10 08:08:21.105 [INFO][4243] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.2/26] block=192.168.95.0/26 handle="k8s-pod-network.84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:21.239667 containerd[1541]: 2025-07-10 08:08:21.105 [INFO][4243] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.2/26] handle="k8s-pod-network.84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:21.239667 containerd[1541]: 2025-07-10 08:08:21.105 [INFO][4243] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 08:08:21.239667 containerd[1541]: 2025-07-10 08:08:21.105 [INFO][4243] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.2/26] IPv6=[] ContainerID="84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263" HandleID="k8s-pod-network.84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-whisker--74b599d7df--4qd72-eth0" Jul 10 08:08:21.240770 containerd[1541]: 2025-07-10 08:08:21.111 [INFO][4165] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263" Namespace="calico-system" Pod="whisker-74b599d7df-4qd72" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-whisker--74b599d7df--4qd72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--29a01ddc69.novalocal-k8s-whisker--74b599d7df--4qd72-eth0", GenerateName:"whisker-74b599d7df-", Namespace:"calico-system", SelfLink:"", UID:"6e6d5318-fff7-4dce-8824-2ef8c9a97985", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 8, 8, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74b599d7df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-29a01ddc69.novalocal", ContainerID:"", Pod:"whisker-74b599d7df-4qd72", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.95.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9af20afa4eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 08:08:21.240770 containerd[1541]: 2025-07-10 08:08:21.111 [INFO][4165] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.2/32] ContainerID="84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263" Namespace="calico-system" Pod="whisker-74b599d7df-4qd72" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-whisker--74b599d7df--4qd72-eth0" Jul 10 08:08:21.240770 containerd[1541]: 2025-07-10 08:08:21.111 [INFO][4165] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9af20afa4eb ContainerID="84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263" Namespace="calico-system" Pod="whisker-74b599d7df-4qd72" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-whisker--74b599d7df--4qd72-eth0" Jul 10 08:08:21.240770 containerd[1541]: 2025-07-10 08:08:21.115 [INFO][4165] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263" Namespace="calico-system" Pod="whisker-74b599d7df-4qd72" 
WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-whisker--74b599d7df--4qd72-eth0" Jul 10 08:08:21.240770 containerd[1541]: 2025-07-10 08:08:21.116 [INFO][4165] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263" Namespace="calico-system" Pod="whisker-74b599d7df-4qd72" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-whisker--74b599d7df--4qd72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--29a01ddc69.novalocal-k8s-whisker--74b599d7df--4qd72-eth0", GenerateName:"whisker-74b599d7df-", Namespace:"calico-system", SelfLink:"", UID:"6e6d5318-fff7-4dce-8824-2ef8c9a97985", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 8, 8, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74b599d7df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-29a01ddc69.novalocal", ContainerID:"84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263", Pod:"whisker-74b599d7df-4qd72", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.95.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9af20afa4eb", MAC:"32:9d:05:20:bd:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Jul 10 08:08:21.240770 containerd[1541]: 2025-07-10 08:08:21.233 [INFO][4165] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263" Namespace="calico-system" Pod="whisker-74b599d7df-4qd72" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-whisker--74b599d7df--4qd72-eth0" Jul 10 08:08:21.344155 containerd[1541]: time="2025-07-10T08:08:21.343921054Z" level=info msg="connecting to shim 84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263" address="unix:///run/containerd/s/fb7dadd3b8bae6eede254c5dfafd247d2839fa4b851b95975e62356a6240bc8d" namespace=k8s.io protocol=ttrpc version=3 Jul 10 08:08:21.419559 containerd[1541]: time="2025-07-10T08:08:21.419246103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75494f88d7-nbhkp,Uid:8af92dae-48c2-42f6-af64-9a1c2fb06ebb,Namespace:calico-apiserver,Attempt:0,}" Jul 10 08:08:21.421659 systemd[1]: Started cri-containerd-84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263.scope - libcontainer container 84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263. 
Jul 10 08:08:21.423066 containerd[1541]: time="2025-07-10T08:08:21.422550913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698b6b4cc7-6lgxs,Uid:7cfbfed2-71d2-4845-87fb-586f7e82aee0,Namespace:calico-apiserver,Attempt:0,}" Jul 10 08:08:21.424047 containerd[1541]: time="2025-07-10T08:08:21.423855710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-986vz,Uid:673eda05-b391-4262-883e-c41d9f384dbd,Namespace:calico-system,Attempt:0,}" Jul 10 08:08:21.579037 kubelet[2824]: I0710 08:08:21.577630 2824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="484dc2e9-1fdd-49a0-8de6-26a6311505ad" path="/var/lib/kubelet/pods/484dc2e9-1fdd-49a0-8de6-26a6311505ad/volumes" Jul 10 08:08:21.882823 containerd[1541]: time="2025-07-10T08:08:21.882676555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74b599d7df-4qd72,Uid:6e6d5318-fff7-4dce-8824-2ef8c9a97985,Namespace:calico-system,Attempt:0,} returns sandbox id \"84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263\"" Jul 10 08:08:21.987081 systemd-networkd[1456]: cali3fedde4012d: Gained IPv6LL Jul 10 08:08:22.357498 systemd-networkd[1456]: vxlan.calico: Link UP Jul 10 08:08:22.357512 systemd-networkd[1456]: vxlan.calico: Gained carrier Jul 10 08:08:22.964274 systemd-networkd[1456]: cali63c541a1854: Link UP Jul 10 08:08:22.965019 systemd-networkd[1456]: cali63c541a1854: Gained carrier Jul 10 08:08:23.002396 containerd[1541]: 2025-07-10 08:08:22.737 [INFO][4453] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--6lgxs-eth0 calico-apiserver-698b6b4cc7- calico-apiserver 7cfbfed2-71d2-4845-87fb-586f7e82aee0 844 0 2025-07-10 08:07:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:698b6b4cc7 projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4391-0-0-n-29a01ddc69.novalocal calico-apiserver-698b6b4cc7-6lgxs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali63c541a1854 [] [] }} ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Namespace="calico-apiserver" Pod="calico-apiserver-698b6b4cc7-6lgxs" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--6lgxs-" Jul 10 08:08:23.002396 containerd[1541]: 2025-07-10 08:08:22.737 [INFO][4453] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Namespace="calico-apiserver" Pod="calico-apiserver-698b6b4cc7-6lgxs" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--6lgxs-eth0" Jul 10 08:08:23.002396 containerd[1541]: 2025-07-10 08:08:22.856 [INFO][4483] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" HandleID="k8s-pod-network.1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--6lgxs-eth0" Jul 10 08:08:23.002396 containerd[1541]: 2025-07-10 08:08:22.857 [INFO][4483] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" HandleID="k8s-pod-network.1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--6lgxs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4391-0-0-n-29a01ddc69.novalocal", "pod":"calico-apiserver-698b6b4cc7-6lgxs", "timestamp":"2025-07-10 08:08:22.856760307 
+0000 UTC"}, Hostname:"ci-4391-0-0-n-29a01ddc69.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 08:08:23.002396 containerd[1541]: 2025-07-10 08:08:22.857 [INFO][4483] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 08:08:23.002396 containerd[1541]: 2025-07-10 08:08:22.857 [INFO][4483] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 08:08:23.002396 containerd[1541]: 2025-07-10 08:08:22.857 [INFO][4483] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4391-0-0-n-29a01ddc69.novalocal' Jul 10 08:08:23.002396 containerd[1541]: 2025-07-10 08:08:22.889 [INFO][4483] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.002396 containerd[1541]: 2025-07-10 08:08:22.901 [INFO][4483] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.002396 containerd[1541]: 2025-07-10 08:08:22.912 [INFO][4483] ipam/ipam.go 511: Trying affinity for 192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.002396 containerd[1541]: 2025-07-10 08:08:22.918 [INFO][4483] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.002396 containerd[1541]: 2025-07-10 08:08:22.922 [INFO][4483] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.002396 containerd[1541]: 2025-07-10 08:08:22.923 [INFO][4483] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.0/26 handle="k8s-pod-network.1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" 
host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.002396 containerd[1541]: 2025-07-10 08:08:22.927 [INFO][4483] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff Jul 10 08:08:23.002396 containerd[1541]: 2025-07-10 08:08:22.936 [INFO][4483] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.0/26 handle="k8s-pod-network.1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.002396 containerd[1541]: 2025-07-10 08:08:22.952 [INFO][4483] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.3/26] block=192.168.95.0/26 handle="k8s-pod-network.1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.002396 containerd[1541]: 2025-07-10 08:08:22.954 [INFO][4483] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.3/26] handle="k8s-pod-network.1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.002396 containerd[1541]: 2025-07-10 08:08:22.954 [INFO][4483] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 08:08:23.003821 containerd[1541]: 2025-07-10 08:08:22.954 [INFO][4483] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.3/26] IPv6=[] ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" HandleID="k8s-pod-network.1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--6lgxs-eth0" Jul 10 08:08:23.003821 containerd[1541]: 2025-07-10 08:08:22.959 [INFO][4453] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Namespace="calico-apiserver" Pod="calico-apiserver-698b6b4cc7-6lgxs" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--6lgxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--6lgxs-eth0", GenerateName:"calico-apiserver-698b6b4cc7-", Namespace:"calico-apiserver", SelfLink:"", UID:"7cfbfed2-71d2-4845-87fb-586f7e82aee0", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 8, 7, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"698b6b4cc7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-29a01ddc69.novalocal", ContainerID:"", Pod:"calico-apiserver-698b6b4cc7-6lgxs", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali63c541a1854", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 08:08:23.003821 containerd[1541]: 2025-07-10 08:08:22.960 [INFO][4453] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.3/32] ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Namespace="calico-apiserver" Pod="calico-apiserver-698b6b4cc7-6lgxs" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--6lgxs-eth0" Jul 10 08:08:23.003821 containerd[1541]: 2025-07-10 08:08:22.960 [INFO][4453] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63c541a1854 ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Namespace="calico-apiserver" Pod="calico-apiserver-698b6b4cc7-6lgxs" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--6lgxs-eth0" Jul 10 08:08:23.003821 containerd[1541]: 2025-07-10 08:08:22.965 [INFO][4453] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Namespace="calico-apiserver" Pod="calico-apiserver-698b6b4cc7-6lgxs" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--6lgxs-eth0" Jul 10 08:08:23.003821 containerd[1541]: 2025-07-10 08:08:22.966 [INFO][4453] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Namespace="calico-apiserver" Pod="calico-apiserver-698b6b4cc7-6lgxs" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--6lgxs-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--6lgxs-eth0", GenerateName:"calico-apiserver-698b6b4cc7-", Namespace:"calico-apiserver", SelfLink:"", UID:"7cfbfed2-71d2-4845-87fb-586f7e82aee0", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 8, 7, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"698b6b4cc7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-29a01ddc69.novalocal", ContainerID:"1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff", Pod:"calico-apiserver-698b6b4cc7-6lgxs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali63c541a1854", MAC:"82:4d:16:ef:94:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 08:08:23.005340 containerd[1541]: 2025-07-10 08:08:22.999 [INFO][4453] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Namespace="calico-apiserver" Pod="calico-apiserver-698b6b4cc7-6lgxs" 
WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--6lgxs-eth0" Jul 10 08:08:23.070419 containerd[1541]: time="2025-07-10T08:08:23.070356530Z" level=info msg="connecting to shim 1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" address="unix:///run/containerd/s/24c408011a5a8663d9d9e1e8caac7b4f0a90f1972d0fb15198aba81e4a152439" namespace=k8s.io protocol=ttrpc version=3 Jul 10 08:08:23.073151 systemd-networkd[1456]: cali9af20afa4eb: Gained IPv6LL Jul 10 08:08:23.093312 systemd-networkd[1456]: cali89b11189b4d: Link UP Jul 10 08:08:23.101696 systemd-networkd[1456]: cali89b11189b4d: Gained carrier Jul 10 08:08:23.149945 containerd[1541]: 2025-07-10 08:08:22.755 [INFO][4446] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--nbhkp-eth0 calico-apiserver-75494f88d7- calico-apiserver 8af92dae-48c2-42f6-af64-9a1c2fb06ebb 848 0 2025-07-10 08:07:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75494f88d7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4391-0-0-n-29a01ddc69.novalocal calico-apiserver-75494f88d7-nbhkp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali89b11189b4d [] [] }} ContainerID="7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08" Namespace="calico-apiserver" Pod="calico-apiserver-75494f88d7-nbhkp" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--nbhkp-" Jul 10 08:08:23.149945 containerd[1541]: 2025-07-10 08:08:22.756 [INFO][4446] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08" Namespace="calico-apiserver" 
Pod="calico-apiserver-75494f88d7-nbhkp" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--nbhkp-eth0" Jul 10 08:08:23.149945 containerd[1541]: 2025-07-10 08:08:22.893 [INFO][4488] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08" HandleID="k8s-pod-network.7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--nbhkp-eth0" Jul 10 08:08:23.149945 containerd[1541]: 2025-07-10 08:08:22.893 [INFO][4488] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08" HandleID="k8s-pod-network.7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--nbhkp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325b00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4391-0-0-n-29a01ddc69.novalocal", "pod":"calico-apiserver-75494f88d7-nbhkp", "timestamp":"2025-07-10 08:08:22.892668512 +0000 UTC"}, Hostname:"ci-4391-0-0-n-29a01ddc69.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 08:08:23.149945 containerd[1541]: 2025-07-10 08:08:22.893 [INFO][4488] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 08:08:23.149945 containerd[1541]: 2025-07-10 08:08:22.954 [INFO][4488] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 08:08:23.149945 containerd[1541]: 2025-07-10 08:08:22.955 [INFO][4488] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4391-0-0-n-29a01ddc69.novalocal' Jul 10 08:08:23.149945 containerd[1541]: 2025-07-10 08:08:22.995 [INFO][4488] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.149945 containerd[1541]: 2025-07-10 08:08:23.009 [INFO][4488] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.149945 containerd[1541]: 2025-07-10 08:08:23.019 [INFO][4488] ipam/ipam.go 511: Trying affinity for 192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.149945 containerd[1541]: 2025-07-10 08:08:23.024 [INFO][4488] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.149945 containerd[1541]: 2025-07-10 08:08:23.031 [INFO][4488] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.149945 containerd[1541]: 2025-07-10 08:08:23.031 [INFO][4488] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.0/26 handle="k8s-pod-network.7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.149945 containerd[1541]: 2025-07-10 08:08:23.033 [INFO][4488] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08 Jul 10 08:08:23.149945 containerd[1541]: 2025-07-10 08:08:23.049 [INFO][4488] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.0/26 handle="k8s-pod-network.7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.149945 
containerd[1541]: 2025-07-10 08:08:23.067 [INFO][4488] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.4/26] block=192.168.95.0/26 handle="k8s-pod-network.7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.149945 containerd[1541]: 2025-07-10 08:08:23.067 [INFO][4488] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.4/26] handle="k8s-pod-network.7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.149945 containerd[1541]: 2025-07-10 08:08:23.069 [INFO][4488] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 08:08:23.152040 containerd[1541]: 2025-07-10 08:08:23.069 [INFO][4488] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.4/26] IPv6=[] ContainerID="7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08" HandleID="k8s-pod-network.7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--nbhkp-eth0" Jul 10 08:08:23.152040 containerd[1541]: 2025-07-10 08:08:23.076 [INFO][4446] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08" Namespace="calico-apiserver" Pod="calico-apiserver-75494f88d7-nbhkp" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--nbhkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--nbhkp-eth0", GenerateName:"calico-apiserver-75494f88d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"8af92dae-48c2-42f6-af64-9a1c2fb06ebb", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 8, 7, 45, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75494f88d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-29a01ddc69.novalocal", ContainerID:"", Pod:"calico-apiserver-75494f88d7-nbhkp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89b11189b4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 08:08:23.152040 containerd[1541]: 2025-07-10 08:08:23.076 [INFO][4446] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.4/32] ContainerID="7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08" Namespace="calico-apiserver" Pod="calico-apiserver-75494f88d7-nbhkp" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--nbhkp-eth0" Jul 10 08:08:23.152040 containerd[1541]: 2025-07-10 08:08:23.076 [INFO][4446] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89b11189b4d ContainerID="7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08" Namespace="calico-apiserver" Pod="calico-apiserver-75494f88d7-nbhkp" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--nbhkp-eth0" Jul 10 08:08:23.152040 containerd[1541]: 2025-07-10 08:08:23.111 [INFO][4446] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08" Namespace="calico-apiserver" Pod="calico-apiserver-75494f88d7-nbhkp" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--nbhkp-eth0" Jul 10 08:08:23.152040 containerd[1541]: 2025-07-10 08:08:23.111 [INFO][4446] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08" Namespace="calico-apiserver" Pod="calico-apiserver-75494f88d7-nbhkp" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--nbhkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--nbhkp-eth0", GenerateName:"calico-apiserver-75494f88d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"8af92dae-48c2-42f6-af64-9a1c2fb06ebb", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 8, 7, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75494f88d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-29a01ddc69.novalocal", ContainerID:"7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08", Pod:"calico-apiserver-75494f88d7-nbhkp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.4/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89b11189b4d", MAC:"02:84:56:de:ae:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 08:08:23.153794 containerd[1541]: 2025-07-10 08:08:23.143 [INFO][4446] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08" Namespace="calico-apiserver" Pod="calico-apiserver-75494f88d7-nbhkp" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--nbhkp-eth0" Jul 10 08:08:23.191231 systemd[1]: Started cri-containerd-1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff.scope - libcontainer container 1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff. Jul 10 08:08:23.233349 containerd[1541]: time="2025-07-10T08:08:23.230376662Z" level=info msg="connecting to shim 7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08" address="unix:///run/containerd/s/3d1fff46c86ff327e74d0565a4c94596803b6c9816879b8ab08716f852cd7071" namespace=k8s.io protocol=ttrpc version=3 Jul 10 08:08:23.239037 systemd-networkd[1456]: cali08d5215dac0: Link UP Jul 10 08:08:23.241395 systemd-networkd[1456]: cali08d5215dac0: Gained carrier Jul 10 08:08:23.275130 containerd[1541]: 2025-07-10 08:08:22.818 [INFO][4459] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4391--0--0--n--29a01ddc69.novalocal-k8s-csi--node--driver--986vz-eth0 csi-node-driver- calico-system 673eda05-b391-4262-883e-c41d9f384dbd 715 0 2025-07-10 08:07:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4391-0-0-n-29a01ddc69.novalocal csi-node-driver-986vz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali08d5215dac0 [] [] }} ContainerID="c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50" Namespace="calico-system" Pod="csi-node-driver-986vz" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-csi--node--driver--986vz-" Jul 10 08:08:23.275130 containerd[1541]: 2025-07-10 08:08:22.818 [INFO][4459] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50" Namespace="calico-system" Pod="csi-node-driver-986vz" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-csi--node--driver--986vz-eth0" Jul 10 08:08:23.275130 containerd[1541]: 2025-07-10 08:08:22.931 [INFO][4496] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50" HandleID="k8s-pod-network.c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-csi--node--driver--986vz-eth0" Jul 10 08:08:23.275130 containerd[1541]: 2025-07-10 08:08:22.931 [INFO][4496] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50" HandleID="k8s-pod-network.c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-csi--node--driver--986vz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5710), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4391-0-0-n-29a01ddc69.novalocal", "pod":"csi-node-driver-986vz", "timestamp":"2025-07-10 08:08:22.931492648 +0000 UTC"}, Hostname:"ci-4391-0-0-n-29a01ddc69.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 08:08:23.275130 containerd[1541]: 2025-07-10 08:08:22.932 [INFO][4496] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 08:08:23.275130 containerd[1541]: 2025-07-10 08:08:23.069 [INFO][4496] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 08:08:23.275130 containerd[1541]: 2025-07-10 08:08:23.069 [INFO][4496] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4391-0-0-n-29a01ddc69.novalocal' Jul 10 08:08:23.275130 containerd[1541]: 2025-07-10 08:08:23.102 [INFO][4496] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.275130 containerd[1541]: 2025-07-10 08:08:23.138 [INFO][4496] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.275130 containerd[1541]: 2025-07-10 08:08:23.156 [INFO][4496] ipam/ipam.go 511: Trying affinity for 192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.275130 containerd[1541]: 2025-07-10 08:08:23.164 [INFO][4496] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.275130 containerd[1541]: 2025-07-10 08:08:23.172 [INFO][4496] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.275130 containerd[1541]: 2025-07-10 08:08:23.172 [INFO][4496] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.0/26 handle="k8s-pod-network.c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.275130 containerd[1541]: 2025-07-10 08:08:23.177 [INFO][4496] ipam/ipam.go 1764: Creating new 
handle: k8s-pod-network.c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50 Jul 10 08:08:23.275130 containerd[1541]: 2025-07-10 08:08:23.190 [INFO][4496] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.0/26 handle="k8s-pod-network.c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.275130 containerd[1541]: 2025-07-10 08:08:23.213 [INFO][4496] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.5/26] block=192.168.95.0/26 handle="k8s-pod-network.c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.275130 containerd[1541]: 2025-07-10 08:08:23.217 [INFO][4496] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.5/26] handle="k8s-pod-network.c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:23.275130 containerd[1541]: 2025-07-10 08:08:23.217 [INFO][4496] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 08:08:23.275130 containerd[1541]: 2025-07-10 08:08:23.217 [INFO][4496] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.5/26] IPv6=[] ContainerID="c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50" HandleID="k8s-pod-network.c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-csi--node--driver--986vz-eth0" Jul 10 08:08:23.275870 containerd[1541]: 2025-07-10 08:08:23.227 [INFO][4459] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50" Namespace="calico-system" Pod="csi-node-driver-986vz" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-csi--node--driver--986vz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--29a01ddc69.novalocal-k8s-csi--node--driver--986vz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"673eda05-b391-4262-883e-c41d9f384dbd", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 8, 7, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-29a01ddc69.novalocal", ContainerID:"", Pod:"csi-node-driver-986vz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", 
IPNetworks:[]string{"192.168.95.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali08d5215dac0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 08:08:23.275870 containerd[1541]: 2025-07-10 08:08:23.230 [INFO][4459] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.5/32] ContainerID="c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50" Namespace="calico-system" Pod="csi-node-driver-986vz" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-csi--node--driver--986vz-eth0" Jul 10 08:08:23.275870 containerd[1541]: 2025-07-10 08:08:23.230 [INFO][4459] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08d5215dac0 ContainerID="c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50" Namespace="calico-system" Pod="csi-node-driver-986vz" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-csi--node--driver--986vz-eth0" Jul 10 08:08:23.275870 containerd[1541]: 2025-07-10 08:08:23.243 [INFO][4459] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50" Namespace="calico-system" Pod="csi-node-driver-986vz" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-csi--node--driver--986vz-eth0" Jul 10 08:08:23.275870 containerd[1541]: 2025-07-10 08:08:23.244 [INFO][4459] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50" Namespace="calico-system" Pod="csi-node-driver-986vz" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-csi--node--driver--986vz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--29a01ddc69.novalocal-k8s-csi--node--driver--986vz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"673eda05-b391-4262-883e-c41d9f384dbd", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 8, 7, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-29a01ddc69.novalocal", ContainerID:"c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50", Pod:"csi-node-driver-986vz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.95.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali08d5215dac0", MAC:"42:7c:e2:92:a9:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 08:08:23.275870 containerd[1541]: 2025-07-10 08:08:23.272 [INFO][4459] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50" Namespace="calico-system" Pod="csi-node-driver-986vz" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-csi--node--driver--986vz-eth0" Jul 10 08:08:23.298623 systemd[1]: Started cri-containerd-7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08.scope - 
libcontainer container 7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08. Jul 10 08:08:23.365472 containerd[1541]: time="2025-07-10T08:08:23.365412815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698b6b4cc7-6lgxs,Uid:7cfbfed2-71d2-4845-87fb-586f7e82aee0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff\"" Jul 10 08:08:23.367985 containerd[1541]: time="2025-07-10T08:08:23.367119724Z" level=info msg="connecting to shim c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50" address="unix:///run/containerd/s/67393172428930dfb528f632fe1c193086943533824fc6cc00112a5e40473ac7" namespace=k8s.io protocol=ttrpc version=3 Jul 10 08:08:23.414161 containerd[1541]: time="2025-07-10T08:08:23.413635530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75494f88d7-nbhkp,Uid:8af92dae-48c2-42f6-af64-9a1c2fb06ebb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08\"" Jul 10 08:08:23.418315 systemd[1]: Started cri-containerd-c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50.scope - libcontainer container c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50. 
Jul 10 08:08:23.456493 containerd[1541]: time="2025-07-10T08:08:23.456418436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-986vz,Uid:673eda05-b391-4262-883e-c41d9f384dbd,Namespace:calico-system,Attempt:0,} returns sandbox id \"c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50\"" Jul 10 08:08:23.842446 systemd-networkd[1456]: vxlan.calico: Gained IPv6LL Jul 10 08:08:24.225679 systemd-networkd[1456]: cali63c541a1854: Gained IPv6LL Jul 10 08:08:25.058850 systemd-networkd[1456]: cali89b11189b4d: Gained IPv6LL Jul 10 08:08:25.185145 systemd-networkd[1456]: cali08d5215dac0: Gained IPv6LL Jul 10 08:08:27.444448 containerd[1541]: time="2025-07-10T08:08:27.444354795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:08:27.446002 containerd[1541]: time="2025-07-10T08:08:27.445832407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 10 08:08:27.447408 containerd[1541]: time="2025-07-10T08:08:27.447360333Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:08:27.451977 containerd[1541]: time="2025-07-10T08:08:27.451425476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:08:27.452422 containerd[1541]: time="2025-07-10T08:08:27.452395376Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 6.215998393s" Jul 10 08:08:27.452527 containerd[1541]: time="2025-07-10T08:08:27.452507905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 10 08:08:27.454234 containerd[1541]: time="2025-07-10T08:08:27.454209183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 10 08:08:27.458634 containerd[1541]: time="2025-07-10T08:08:27.457938974Z" level=info msg="CreateContainer within sandbox \"7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 08:08:27.490292 containerd[1541]: time="2025-07-10T08:08:27.490227915Z" level=info msg="Container b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb: CDI devices from CRI Config.CDIDevices: []" Jul 10 08:08:27.498443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount132702468.mount: Deactivated successfully. 
Jul 10 08:08:27.510607 containerd[1541]: time="2025-07-10T08:08:27.510551311Z" level=info msg="CreateContainer within sandbox \"7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb\"" Jul 10 08:08:27.511444 containerd[1541]: time="2025-07-10T08:08:27.511401789Z" level=info msg="StartContainer for \"b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb\"" Jul 10 08:08:27.513829 containerd[1541]: time="2025-07-10T08:08:27.513787586Z" level=info msg="connecting to shim b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb" address="unix:///run/containerd/s/5f9b7f757d1b25367a901d8537360d297fc070a9e7234475423eb8c4c1f33fce" protocol=ttrpc version=3 Jul 10 08:08:27.553156 systemd[1]: Started cri-containerd-b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb.scope - libcontainer container b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb. 
Jul 10 08:08:27.646520 containerd[1541]: time="2025-07-10T08:08:27.646459015Z" level=info msg="StartContainer for \"b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb\" returns successfully" Jul 10 08:08:28.146541 kubelet[2824]: I0710 08:08:28.146472 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-698b6b4cc7-2glfs" podStartSLOduration=37.928796757 podStartE2EDuration="44.146450462s" podCreationTimestamp="2025-07-10 08:07:44 +0000 UTC" firstStartedPulling="2025-07-10 08:08:21.236085135 +0000 UTC m=+56.087370549" lastFinishedPulling="2025-07-10 08:08:27.45373885 +0000 UTC m=+62.305024254" observedRunningTime="2025-07-10 08:08:28.144466787 +0000 UTC m=+62.995752201" watchObservedRunningTime="2025-07-10 08:08:28.146450462 +0000 UTC m=+62.997735876" Jul 10 08:08:28.411818 containerd[1541]: time="2025-07-10T08:08:28.411325543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dzmhz,Uid:64f4becf-45f9-4ea1-b810-64e0105909a1,Namespace:kube-system,Attempt:0,}" Jul 10 08:08:29.034846 systemd-networkd[1456]: calife130c10b61: Link UP Jul 10 08:08:29.036546 systemd-networkd[1456]: calife130c10b61: Gained carrier Jul 10 08:08:29.074510 containerd[1541]: 2025-07-10 08:08:28.567 [INFO][4755] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--dzmhz-eth0 coredns-668d6bf9bc- kube-system 64f4becf-45f9-4ea1-b810-64e0105909a1 833 0 2025-07-10 08:07:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4391-0-0-n-29a01ddc69.novalocal coredns-668d6bf9bc-dzmhz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calife130c10b61 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308" Namespace="kube-system" Pod="coredns-668d6bf9bc-dzmhz" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--dzmhz-" Jul 10 08:08:29.074510 containerd[1541]: 2025-07-10 08:08:28.567 [INFO][4755] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308" Namespace="kube-system" Pod="coredns-668d6bf9bc-dzmhz" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--dzmhz-eth0" Jul 10 08:08:29.074510 containerd[1541]: 2025-07-10 08:08:28.639 [INFO][4766] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308" HandleID="k8s-pod-network.137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--dzmhz-eth0" Jul 10 08:08:29.074510 containerd[1541]: 2025-07-10 08:08:28.639 [INFO][4766] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308" HandleID="k8s-pod-network.137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--dzmhz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5950), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4391-0-0-n-29a01ddc69.novalocal", "pod":"coredns-668d6bf9bc-dzmhz", "timestamp":"2025-07-10 08:08:28.639289614 +0000 UTC"}, Hostname:"ci-4391-0-0-n-29a01ddc69.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 08:08:29.074510 containerd[1541]: 2025-07-10 08:08:28.639 [INFO][4766] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. Jul 10 08:08:29.074510 containerd[1541]: 2025-07-10 08:08:28.639 [INFO][4766] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 08:08:29.074510 containerd[1541]: 2025-07-10 08:08:28.639 [INFO][4766] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4391-0-0-n-29a01ddc69.novalocal' Jul 10 08:08:29.074510 containerd[1541]: 2025-07-10 08:08:28.781 [INFO][4766] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:29.074510 containerd[1541]: 2025-07-10 08:08:28.798 [INFO][4766] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:29.074510 containerd[1541]: 2025-07-10 08:08:28.856 [INFO][4766] ipam/ipam.go 511: Trying affinity for 192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:29.074510 containerd[1541]: 2025-07-10 08:08:28.862 [INFO][4766] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:29.074510 containerd[1541]: 2025-07-10 08:08:28.873 [INFO][4766] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:29.074510 containerd[1541]: 2025-07-10 08:08:28.873 [INFO][4766] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.0/26 handle="k8s-pod-network.137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:29.074510 containerd[1541]: 2025-07-10 08:08:28.877 [INFO][4766] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308 Jul 10 08:08:29.074510 containerd[1541]: 2025-07-10 08:08:28.986 [INFO][4766] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.0/26 
handle="k8s-pod-network.137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:29.074510 containerd[1541]: 2025-07-10 08:08:29.019 [INFO][4766] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.6/26] block=192.168.95.0/26 handle="k8s-pod-network.137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:29.074510 containerd[1541]: 2025-07-10 08:08:29.020 [INFO][4766] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.6/26] handle="k8s-pod-network.137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:29.074510 containerd[1541]: 2025-07-10 08:08:29.020 [INFO][4766] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 08:08:29.074510 containerd[1541]: 2025-07-10 08:08:29.020 [INFO][4766] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.6/26] IPv6=[] ContainerID="137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308" HandleID="k8s-pod-network.137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--dzmhz-eth0" Jul 10 08:08:29.077882 containerd[1541]: 2025-07-10 08:08:29.025 [INFO][4755] cni-plugin/k8s.go 418: Populated endpoint ContainerID="137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308" Namespace="kube-system" Pod="coredns-668d6bf9bc-dzmhz" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--dzmhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--dzmhz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"64f4becf-45f9-4ea1-b810-64e0105909a1", ResourceVersion:"833", Generation:0, 
CreationTimestamp:time.Date(2025, time.July, 10, 8, 7, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-29a01ddc69.novalocal", ContainerID:"", Pod:"coredns-668d6bf9bc-dzmhz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calife130c10b61", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 08:08:29.077882 containerd[1541]: 2025-07-10 08:08:29.027 [INFO][4755] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.6/32] ContainerID="137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308" Namespace="kube-system" Pod="coredns-668d6bf9bc-dzmhz" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--dzmhz-eth0" Jul 10 08:08:29.077882 containerd[1541]: 2025-07-10 08:08:29.027 [INFO][4755] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife130c10b61 
ContainerID="137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308" Namespace="kube-system" Pod="coredns-668d6bf9bc-dzmhz" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--dzmhz-eth0" Jul 10 08:08:29.077882 containerd[1541]: 2025-07-10 08:08:29.037 [INFO][4755] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308" Namespace="kube-system" Pod="coredns-668d6bf9bc-dzmhz" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--dzmhz-eth0" Jul 10 08:08:29.077882 containerd[1541]: 2025-07-10 08:08:29.037 [INFO][4755] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308" Namespace="kube-system" Pod="coredns-668d6bf9bc-dzmhz" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--dzmhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--dzmhz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"64f4becf-45f9-4ea1-b810-64e0105909a1", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 8, 7, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-29a01ddc69.novalocal", 
ContainerID:"137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308", Pod:"coredns-668d6bf9bc-dzmhz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calife130c10b61", MAC:"a6:a8:5f:ca:af:ac", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 08:08:29.078224 containerd[1541]: 2025-07-10 08:08:29.062 [INFO][4755] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308" Namespace="kube-system" Pod="coredns-668d6bf9bc-dzmhz" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--dzmhz-eth0" Jul 10 08:08:29.123990 kubelet[2824]: I0710 08:08:29.123669 2824 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 08:08:29.167908 containerd[1541]: time="2025-07-10T08:08:29.167737802Z" level=info msg="connecting to shim 137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308" address="unix:///run/containerd/s/5024f84e6209b8e402895ef0da9416421ce74013ddbe5b706385f6f250087a3b" namespace=k8s.io protocol=ttrpc version=3 Jul 10 08:08:29.225242 systemd[1]: Started cri-containerd-137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308.scope - libcontainer container 137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308. 
Jul 10 08:08:29.340808 containerd[1541]: time="2025-07-10T08:08:29.340518149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dzmhz,Uid:64f4becf-45f9-4ea1-b810-64e0105909a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308\"" Jul 10 08:08:29.351684 containerd[1541]: time="2025-07-10T08:08:29.350732126Z" level=info msg="CreateContainer within sandbox \"137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 08:08:29.386405 containerd[1541]: time="2025-07-10T08:08:29.386326497Z" level=info msg="Container ab2381d3350dd9bb9145b26c5bddf3b772fe3b1d6fbc585e92f2efc4f8c36ff8: CDI devices from CRI Config.CDIDevices: []" Jul 10 08:08:29.407838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4052703716.mount: Deactivated successfully. Jul 10 08:08:29.429591 containerd[1541]: time="2025-07-10T08:08:29.427909098Z" level=info msg="CreateContainer within sandbox \"137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ab2381d3350dd9bb9145b26c5bddf3b772fe3b1d6fbc585e92f2efc4f8c36ff8\"" Jul 10 08:08:29.430647 containerd[1541]: time="2025-07-10T08:08:29.429868511Z" level=info msg="StartContainer for \"ab2381d3350dd9bb9145b26c5bddf3b772fe3b1d6fbc585e92f2efc4f8c36ff8\"" Jul 10 08:08:29.435131 containerd[1541]: time="2025-07-10T08:08:29.434989781Z" level=info msg="connecting to shim ab2381d3350dd9bb9145b26c5bddf3b772fe3b1d6fbc585e92f2efc4f8c36ff8" address="unix:///run/containerd/s/5024f84e6209b8e402895ef0da9416421ce74013ddbe5b706385f6f250087a3b" protocol=ttrpc version=3 Jul 10 08:08:29.495517 systemd[1]: Started cri-containerd-ab2381d3350dd9bb9145b26c5bddf3b772fe3b1d6fbc585e92f2efc4f8c36ff8.scope - libcontainer container ab2381d3350dd9bb9145b26c5bddf3b772fe3b1d6fbc585e92f2efc4f8c36ff8. 
Jul 10 08:08:29.573107 containerd[1541]: time="2025-07-10T08:08:29.572191799Z" level=info msg="StartContainer for \"ab2381d3350dd9bb9145b26c5bddf3b772fe3b1d6fbc585e92f2efc4f8c36ff8\" returns successfully" Jul 10 08:08:30.176922 kubelet[2824]: I0710 08:08:30.176517 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dzmhz" podStartSLOduration=61.176496586 podStartE2EDuration="1m1.176496586s" podCreationTimestamp="2025-07-10 08:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 08:08:30.174438947 +0000 UTC m=+65.025724361" watchObservedRunningTime="2025-07-10 08:08:30.176496586 +0000 UTC m=+65.027782010" Jul 10 08:08:30.231487 containerd[1541]: time="2025-07-10T08:08:30.231116027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:08:30.234706 containerd[1541]: time="2025-07-10T08:08:30.234675439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 10 08:08:30.239143 containerd[1541]: time="2025-07-10T08:08:30.239081947Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:08:30.245544 containerd[1541]: time="2025-07-10T08:08:30.245471335Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:08:30.248570 containerd[1541]: time="2025-07-10T08:08:30.248106187Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag 
\"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 2.793779467s" Jul 10 08:08:30.248570 containerd[1541]: time="2025-07-10T08:08:30.248139679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 10 08:08:30.251112 containerd[1541]: time="2025-07-10T08:08:30.250882112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 08:08:30.252667 containerd[1541]: time="2025-07-10T08:08:30.252187150Z" level=info msg="CreateContainer within sandbox \"84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 10 08:08:30.271936 containerd[1541]: time="2025-07-10T08:08:30.271893715Z" level=info msg="Container 9ea2c95dbc7e2567d327e69aedb6a18a9f5c1ad0e0cba82eda2d7698be3a714a: CDI devices from CRI Config.CDIDevices: []" Jul 10 08:08:30.308073 containerd[1541]: time="2025-07-10T08:08:30.307694303Z" level=info msg="CreateContainer within sandbox \"84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"9ea2c95dbc7e2567d327e69aedb6a18a9f5c1ad0e0cba82eda2d7698be3a714a\"" Jul 10 08:08:30.309355 containerd[1541]: time="2025-07-10T08:08:30.309292085Z" level=info msg="StartContainer for \"9ea2c95dbc7e2567d327e69aedb6a18a9f5c1ad0e0cba82eda2d7698be3a714a\"" Jul 10 08:08:30.313072 containerd[1541]: time="2025-07-10T08:08:30.313032575Z" level=info msg="connecting to shim 9ea2c95dbc7e2567d327e69aedb6a18a9f5c1ad0e0cba82eda2d7698be3a714a" address="unix:///run/containerd/s/fb7dadd3b8bae6eede254c5dfafd247d2839fa4b851b95975e62356a6240bc8d" protocol=ttrpc version=3 Jul 10 08:08:30.370394 systemd[1]: Started 
cri-containerd-9ea2c95dbc7e2567d327e69aedb6a18a9f5c1ad0e0cba82eda2d7698be3a714a.scope - libcontainer container 9ea2c95dbc7e2567d327e69aedb6a18a9f5c1ad0e0cba82eda2d7698be3a714a. Jul 10 08:08:30.411491 containerd[1541]: time="2025-07-10T08:08:30.411361979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jz74t,Uid:01999a15-b0d2-4afb-bee2-2fe0206967d2,Namespace:kube-system,Attempt:0,}" Jul 10 08:08:30.434662 systemd-networkd[1456]: calife130c10b61: Gained IPv6LL Jul 10 08:08:30.602888 containerd[1541]: time="2025-07-10T08:08:30.602484568Z" level=info msg="StartContainer for \"9ea2c95dbc7e2567d327e69aedb6a18a9f5c1ad0e0cba82eda2d7698be3a714a\" returns successfully" Jul 10 08:08:30.741494 systemd-networkd[1456]: cali64746e22796: Link UP Jul 10 08:08:30.743294 systemd-networkd[1456]: cali64746e22796: Gained carrier Jul 10 08:08:30.768525 containerd[1541]: 2025-07-10 08:08:30.562 [INFO][4889] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--jz74t-eth0 coredns-668d6bf9bc- kube-system 01999a15-b0d2-4afb-bee2-2fe0206967d2 836 0 2025-07-10 08:07:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4391-0-0-n-29a01ddc69.novalocal coredns-668d6bf9bc-jz74t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali64746e22796 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd" Namespace="kube-system" Pod="coredns-668d6bf9bc-jz74t" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--jz74t-" Jul 10 08:08:30.768525 containerd[1541]: 2025-07-10 08:08:30.565 [INFO][4889] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd" Namespace="kube-system" Pod="coredns-668d6bf9bc-jz74t" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--jz74t-eth0" Jul 10 08:08:30.768525 containerd[1541]: 2025-07-10 08:08:30.667 [INFO][4913] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd" HandleID="k8s-pod-network.b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--jz74t-eth0" Jul 10 08:08:30.768525 containerd[1541]: 2025-07-10 08:08:30.667 [INFO][4913] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd" HandleID="k8s-pod-network.b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--jz74t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000324c90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4391-0-0-n-29a01ddc69.novalocal", "pod":"coredns-668d6bf9bc-jz74t", "timestamp":"2025-07-10 08:08:30.667256528 +0000 UTC"}, Hostname:"ci-4391-0-0-n-29a01ddc69.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 08:08:30.768525 containerd[1541]: 2025-07-10 08:08:30.667 [INFO][4913] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 08:08:30.768525 containerd[1541]: 2025-07-10 08:08:30.667 [INFO][4913] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 08:08:30.768525 containerd[1541]: 2025-07-10 08:08:30.667 [INFO][4913] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4391-0-0-n-29a01ddc69.novalocal' Jul 10 08:08:30.768525 containerd[1541]: 2025-07-10 08:08:30.678 [INFO][4913] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:30.768525 containerd[1541]: 2025-07-10 08:08:30.685 [INFO][4913] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:30.768525 containerd[1541]: 2025-07-10 08:08:30.691 [INFO][4913] ipam/ipam.go 511: Trying affinity for 192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:30.768525 containerd[1541]: 2025-07-10 08:08:30.694 [INFO][4913] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:30.768525 containerd[1541]: 2025-07-10 08:08:30.697 [INFO][4913] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:30.768525 containerd[1541]: 2025-07-10 08:08:30.697 [INFO][4913] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.0/26 handle="k8s-pod-network.b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:30.768525 containerd[1541]: 2025-07-10 08:08:30.699 [INFO][4913] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd Jul 10 08:08:30.768525 containerd[1541]: 2025-07-10 08:08:30.723 [INFO][4913] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.0/26 handle="k8s-pod-network.b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:30.768525 
containerd[1541]: 2025-07-10 08:08:30.732 [INFO][4913] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.7/26] block=192.168.95.0/26 handle="k8s-pod-network.b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:30.768525 containerd[1541]: 2025-07-10 08:08:30.732 [INFO][4913] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.7/26] handle="k8s-pod-network.b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:30.768525 containerd[1541]: 2025-07-10 08:08:30.732 [INFO][4913] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 08:08:30.768525 containerd[1541]: 2025-07-10 08:08:30.732 [INFO][4913] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.7/26] IPv6=[] ContainerID="b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd" HandleID="k8s-pod-network.b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--jz74t-eth0" Jul 10 08:08:30.769679 containerd[1541]: 2025-07-10 08:08:30.735 [INFO][4889] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd" Namespace="kube-system" Pod="coredns-668d6bf9bc-jz74t" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--jz74t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--jz74t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"01999a15-b0d2-4afb-bee2-2fe0206967d2", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 8, 7, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-29a01ddc69.novalocal", ContainerID:"", Pod:"coredns-668d6bf9bc-jz74t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali64746e22796", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 08:08:30.769679 containerd[1541]: 2025-07-10 08:08:30.736 [INFO][4889] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.7/32] ContainerID="b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd" Namespace="kube-system" Pod="coredns-668d6bf9bc-jz74t" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--jz74t-eth0" Jul 10 08:08:30.769679 containerd[1541]: 2025-07-10 08:08:30.736 [INFO][4889] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64746e22796 ContainerID="b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd" Namespace="kube-system" Pod="coredns-668d6bf9bc-jz74t" 
WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--jz74t-eth0" Jul 10 08:08:30.769679 containerd[1541]: 2025-07-10 08:08:30.742 [INFO][4889] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd" Namespace="kube-system" Pod="coredns-668d6bf9bc-jz74t" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--jz74t-eth0" Jul 10 08:08:30.769679 containerd[1541]: 2025-07-10 08:08:30.744 [INFO][4889] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd" Namespace="kube-system" Pod="coredns-668d6bf9bc-jz74t" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--jz74t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--jz74t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"01999a15-b0d2-4afb-bee2-2fe0206967d2", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 8, 7, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-29a01ddc69.novalocal", ContainerID:"b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd", Pod:"coredns-668d6bf9bc-jz74t", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.95.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali64746e22796", MAC:"f6:ce:34:fa:de:eb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 08:08:30.770537 containerd[1541]: 2025-07-10 08:08:30.761 [INFO][4889] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd" Namespace="kube-system" Pod="coredns-668d6bf9bc-jz74t" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-coredns--668d6bf9bc--jz74t-eth0" Jul 10 08:08:30.871230 containerd[1541]: time="2025-07-10T08:08:30.871124269Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:08:30.876774 containerd[1541]: time="2025-07-10T08:08:30.875705733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 10 08:08:30.880121 containerd[1541]: time="2025-07-10T08:08:30.879394266Z" level=info msg="connecting to shim b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd" address="unix:///run/containerd/s/84749d0b44dee4de90a7a617f015a5447a33cf05b0b51b1a9e2c0a0c631c64f0" namespace=k8s.io protocol=ttrpc version=3 Jul 10 08:08:30.887299 containerd[1541]: time="2025-07-10T08:08:30.887267364Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 635.835569ms" Jul 10 08:08:30.887720 containerd[1541]: time="2025-07-10T08:08:30.887406162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 10 08:08:30.889452 containerd[1541]: time="2025-07-10T08:08:30.889113178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 08:08:30.892832 containerd[1541]: time="2025-07-10T08:08:30.892777105Z" level=info msg="CreateContainer within sandbox \"1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 08:08:30.918980 containerd[1541]: time="2025-07-10T08:08:30.918656584Z" level=info msg="Container 15be60bf3c3642c52feac0de2d763952dce2d60aff1ecac2354fb5e7cb534e24: CDI devices from CRI Config.CDIDevices: []" Jul 10 08:08:30.932224 systemd[1]: Started cri-containerd-b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd.scope - libcontainer container b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd. 
Jul 10 08:08:30.945979 containerd[1541]: time="2025-07-10T08:08:30.945300376Z" level=info msg="CreateContainer within sandbox \"1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"15be60bf3c3642c52feac0de2d763952dce2d60aff1ecac2354fb5e7cb534e24\"" Jul 10 08:08:30.949775 containerd[1541]: time="2025-07-10T08:08:30.949718165Z" level=info msg="StartContainer for \"15be60bf3c3642c52feac0de2d763952dce2d60aff1ecac2354fb5e7cb534e24\"" Jul 10 08:08:30.954221 containerd[1541]: time="2025-07-10T08:08:30.954182161Z" level=info msg="connecting to shim 15be60bf3c3642c52feac0de2d763952dce2d60aff1ecac2354fb5e7cb534e24" address="unix:///run/containerd/s/24c408011a5a8663d9d9e1e8caac7b4f0a90f1972d0fb15198aba81e4a152439" protocol=ttrpc version=3 Jul 10 08:08:30.999202 systemd[1]: Started cri-containerd-15be60bf3c3642c52feac0de2d763952dce2d60aff1ecac2354fb5e7cb534e24.scope - libcontainer container 15be60bf3c3642c52feac0de2d763952dce2d60aff1ecac2354fb5e7cb534e24. 
Jul 10 08:08:31.039995 containerd[1541]: time="2025-07-10T08:08:31.039853773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jz74t,Uid:01999a15-b0d2-4afb-bee2-2fe0206967d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd\"" Jul 10 08:08:31.045036 containerd[1541]: time="2025-07-10T08:08:31.044907613Z" level=info msg="CreateContainer within sandbox \"b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 08:08:31.071340 containerd[1541]: time="2025-07-10T08:08:31.071286882Z" level=info msg="Container 0ee1f6ebfdf6773d060fce0b44b2660a2ba4b626a0b2d344455fdd8ac59539ea: CDI devices from CRI Config.CDIDevices: []" Jul 10 08:08:31.085915 containerd[1541]: time="2025-07-10T08:08:31.085659023Z" level=info msg="CreateContainer within sandbox \"b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0ee1f6ebfdf6773d060fce0b44b2660a2ba4b626a0b2d344455fdd8ac59539ea\"" Jul 10 08:08:31.087834 containerd[1541]: time="2025-07-10T08:08:31.087396788Z" level=info msg="StartContainer for \"0ee1f6ebfdf6773d060fce0b44b2660a2ba4b626a0b2d344455fdd8ac59539ea\"" Jul 10 08:08:31.089436 containerd[1541]: time="2025-07-10T08:08:31.088890399Z" level=info msg="connecting to shim 0ee1f6ebfdf6773d060fce0b44b2660a2ba4b626a0b2d344455fdd8ac59539ea" address="unix:///run/containerd/s/84749d0b44dee4de90a7a617f015a5447a33cf05b0b51b1a9e2c0a0c631c64f0" protocol=ttrpc version=3 Jul 10 08:08:31.097812 containerd[1541]: time="2025-07-10T08:08:31.097771656Z" level=info msg="StartContainer for \"15be60bf3c3642c52feac0de2d763952dce2d60aff1ecac2354fb5e7cb534e24\" returns successfully" Jul 10 08:08:31.124178 systemd[1]: Started cri-containerd-0ee1f6ebfdf6773d060fce0b44b2660a2ba4b626a0b2d344455fdd8ac59539ea.scope - libcontainer container 
0ee1f6ebfdf6773d060fce0b44b2660a2ba4b626a0b2d344455fdd8ac59539ea. Jul 10 08:08:31.209307 kubelet[2824]: I0710 08:08:31.208622 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-698b6b4cc7-6lgxs" podStartSLOduration=39.691429653 podStartE2EDuration="47.208601544s" podCreationTimestamp="2025-07-10 08:07:44 +0000 UTC" firstStartedPulling="2025-07-10 08:08:23.37130896 +0000 UTC m=+58.222594364" lastFinishedPulling="2025-07-10 08:08:30.888480841 +0000 UTC m=+65.739766255" observedRunningTime="2025-07-10 08:08:31.206568127 +0000 UTC m=+66.057853541" watchObservedRunningTime="2025-07-10 08:08:31.208601544 +0000 UTC m=+66.059886958" Jul 10 08:08:31.275426 containerd[1541]: time="2025-07-10T08:08:31.275226359Z" level=info msg="StartContainer for \"0ee1f6ebfdf6773d060fce0b44b2660a2ba4b626a0b2d344455fdd8ac59539ea\" returns successfully" Jul 10 08:08:31.410346 containerd[1541]: time="2025-07-10T08:08:31.410098493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-gxfms,Uid:30537349-9698-4e4c-a82b-357050dfe52b,Namespace:calico-system,Attempt:0,}" Jul 10 08:08:31.438182 containerd[1541]: time="2025-07-10T08:08:31.438119268Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:08:31.439578 containerd[1541]: time="2025-07-10T08:08:31.439534002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 10 08:08:31.448503 containerd[1541]: time="2025-07-10T08:08:31.448447830Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 
559.30129ms" Jul 10 08:08:31.448784 containerd[1541]: time="2025-07-10T08:08:31.448657020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 10 08:08:31.451000 containerd[1541]: time="2025-07-10T08:08:31.450353539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 10 08:08:31.452973 containerd[1541]: time="2025-07-10T08:08:31.452923163Z" level=info msg="CreateContainer within sandbox \"7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 08:08:31.477314 containerd[1541]: time="2025-07-10T08:08:31.477261131Z" level=info msg="Container 1edf7e81d0cae5c07b5139995ef7f9ad1f3561290ed87996d536894eca17b736: CDI devices from CRI Config.CDIDevices: []" Jul 10 08:08:31.504039 containerd[1541]: time="2025-07-10T08:08:31.503937412Z" level=info msg="CreateContainer within sandbox \"7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1edf7e81d0cae5c07b5139995ef7f9ad1f3561290ed87996d536894eca17b736\"" Jul 10 08:08:31.505744 containerd[1541]: time="2025-07-10T08:08:31.505294570Z" level=info msg="StartContainer for \"1edf7e81d0cae5c07b5139995ef7f9ad1f3561290ed87996d536894eca17b736\"" Jul 10 08:08:31.508147 containerd[1541]: time="2025-07-10T08:08:31.508091918Z" level=info msg="connecting to shim 1edf7e81d0cae5c07b5139995ef7f9ad1f3561290ed87996d536894eca17b736" address="unix:///run/containerd/s/3d1fff46c86ff327e74d0565a4c94596803b6c9816879b8ab08716f852cd7071" protocol=ttrpc version=3 Jul 10 08:08:31.564155 systemd[1]: Started cri-containerd-1edf7e81d0cae5c07b5139995ef7f9ad1f3561290ed87996d536894eca17b736.scope - libcontainer container 1edf7e81d0cae5c07b5139995ef7f9ad1f3561290ed87996d536894eca17b736. 
Jul 10 08:08:31.746903 systemd-networkd[1456]: calid1d21a2d925: Link UP Jul 10 08:08:31.749571 systemd-networkd[1456]: calid1d21a2d925: Gained carrier Jul 10 08:08:31.761504 containerd[1541]: time="2025-07-10T08:08:31.761210869Z" level=info msg="StartContainer for \"1edf7e81d0cae5c07b5139995ef7f9ad1f3561290ed87996d536894eca17b736\" returns successfully" Jul 10 08:08:31.802387 containerd[1541]: 2025-07-10 08:08:31.512 [INFO][5047] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4391--0--0--n--29a01ddc69.novalocal-k8s-goldmane--768f4c5c69--gxfms-eth0 goldmane-768f4c5c69- calico-system 30537349-9698-4e4c-a82b-357050dfe52b 842 0 2025-07-10 08:07:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4391-0-0-n-29a01ddc69.novalocal goldmane-768f4c5c69-gxfms eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid1d21a2d925 [] [] }} ContainerID="c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7" Namespace="calico-system" Pod="goldmane-768f4c5c69-gxfms" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-goldmane--768f4c5c69--gxfms-" Jul 10 08:08:31.802387 containerd[1541]: 2025-07-10 08:08:31.514 [INFO][5047] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7" Namespace="calico-system" Pod="goldmane-768f4c5c69-gxfms" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-goldmane--768f4c5c69--gxfms-eth0" Jul 10 08:08:31.802387 containerd[1541]: 2025-07-10 08:08:31.606 [INFO][5065] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7" 
HandleID="k8s-pod-network.c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-goldmane--768f4c5c69--gxfms-eth0" Jul 10 08:08:31.802387 containerd[1541]: 2025-07-10 08:08:31.606 [INFO][5065] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7" HandleID="k8s-pod-network.c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-goldmane--768f4c5c69--gxfms-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036ad70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4391-0-0-n-29a01ddc69.novalocal", "pod":"goldmane-768f4c5c69-gxfms", "timestamp":"2025-07-10 08:08:31.606146096 +0000 UTC"}, Hostname:"ci-4391-0-0-n-29a01ddc69.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 08:08:31.802387 containerd[1541]: 2025-07-10 08:08:31.606 [INFO][5065] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 08:08:31.802387 containerd[1541]: 2025-07-10 08:08:31.606 [INFO][5065] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 08:08:31.802387 containerd[1541]: 2025-07-10 08:08:31.606 [INFO][5065] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4391-0-0-n-29a01ddc69.novalocal' Jul 10 08:08:31.802387 containerd[1541]: 2025-07-10 08:08:31.641 [INFO][5065] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:31.802387 containerd[1541]: 2025-07-10 08:08:31.654 [INFO][5065] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:31.802387 containerd[1541]: 2025-07-10 08:08:31.660 [INFO][5065] ipam/ipam.go 511: Trying affinity for 192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:31.802387 containerd[1541]: 2025-07-10 08:08:31.663 [INFO][5065] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:31.802387 containerd[1541]: 2025-07-10 08:08:31.666 [INFO][5065] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:31.802387 containerd[1541]: 2025-07-10 08:08:31.666 [INFO][5065] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.0/26 handle="k8s-pod-network.c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:31.802387 containerd[1541]: 2025-07-10 08:08:31.668 [INFO][5065] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7 Jul 10 08:08:31.802387 containerd[1541]: 2025-07-10 08:08:31.699 [INFO][5065] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.0/26 handle="k8s-pod-network.c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:31.802387 
containerd[1541]: 2025-07-10 08:08:31.732 [INFO][5065] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.8/26] block=192.168.95.0/26 handle="k8s-pod-network.c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:31.802387 containerd[1541]: 2025-07-10 08:08:31.732 [INFO][5065] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.8/26] handle="k8s-pod-network.c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:31.802387 containerd[1541]: 2025-07-10 08:08:31.732 [INFO][5065] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 08:08:31.802387 containerd[1541]: 2025-07-10 08:08:31.732 [INFO][5065] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.8/26] IPv6=[] ContainerID="c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7" HandleID="k8s-pod-network.c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-goldmane--768f4c5c69--gxfms-eth0" Jul 10 08:08:31.805472 containerd[1541]: 2025-07-10 08:08:31.738 [INFO][5047] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7" Namespace="calico-system" Pod="goldmane-768f4c5c69-gxfms" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-goldmane--768f4c5c69--gxfms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--29a01ddc69.novalocal-k8s-goldmane--768f4c5c69--gxfms-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"30537349-9698-4e4c-a82b-357050dfe52b", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 8, 7, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-29a01ddc69.novalocal", ContainerID:"", Pod:"goldmane-768f4c5c69-gxfms", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.95.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid1d21a2d925", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 08:08:31.805472 containerd[1541]: 2025-07-10 08:08:31.740 [INFO][5047] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.8/32] ContainerID="c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7" Namespace="calico-system" Pod="goldmane-768f4c5c69-gxfms" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-goldmane--768f4c5c69--gxfms-eth0" Jul 10 08:08:31.805472 containerd[1541]: 2025-07-10 08:08:31.741 [INFO][5047] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid1d21a2d925 ContainerID="c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7" Namespace="calico-system" Pod="goldmane-768f4c5c69-gxfms" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-goldmane--768f4c5c69--gxfms-eth0" Jul 10 08:08:31.805472 containerd[1541]: 2025-07-10 08:08:31.746 [INFO][5047] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7" Namespace="calico-system" Pod="goldmane-768f4c5c69-gxfms" 
WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-goldmane--768f4c5c69--gxfms-eth0" Jul 10 08:08:31.805472 containerd[1541]: 2025-07-10 08:08:31.747 [INFO][5047] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7" Namespace="calico-system" Pod="goldmane-768f4c5c69-gxfms" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-goldmane--768f4c5c69--gxfms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--29a01ddc69.novalocal-k8s-goldmane--768f4c5c69--gxfms-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"30537349-9698-4e4c-a82b-357050dfe52b", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 8, 7, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-29a01ddc69.novalocal", ContainerID:"c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7", Pod:"goldmane-768f4c5c69-gxfms", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.95.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid1d21a2d925", MAC:"1a:84:02:ed:5c:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Jul 10 08:08:31.805472 containerd[1541]: 2025-07-10 08:08:31.793 [INFO][5047] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7" Namespace="calico-system" Pod="goldmane-768f4c5c69-gxfms" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-goldmane--768f4c5c69--gxfms-eth0" Jul 10 08:08:31.864229 containerd[1541]: time="2025-07-10T08:08:31.863300843Z" level=info msg="connecting to shim c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7" address="unix:///run/containerd/s/909dd07702da6cd1be75e7f98e969f05107beb6784d242e70d3d21a25c9504cc" namespace=k8s.io protocol=ttrpc version=3 Jul 10 08:08:31.902127 systemd[1]: Started cri-containerd-c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7.scope - libcontainer container c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7. Jul 10 08:08:32.075239 containerd[1541]: time="2025-07-10T08:08:32.075151978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-gxfms,Uid:30537349-9698-4e4c-a82b-357050dfe52b,Namespace:calico-system,Attempt:0,} returns sandbox id \"c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7\"" Jul 10 08:08:32.195944 kubelet[2824]: I0710 08:08:32.195183 2824 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 08:08:32.219278 kubelet[2824]: I0710 08:08:32.218844 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-75494f88d7-nbhkp" podStartSLOduration=39.185154522 podStartE2EDuration="47.218815809s" podCreationTimestamp="2025-07-10 08:07:45 +0000 UTC" firstStartedPulling="2025-07-10 08:08:23.416210756 +0000 UTC m=+58.267496170" lastFinishedPulling="2025-07-10 08:08:31.449872052 +0000 UTC m=+66.301157457" observedRunningTime="2025-07-10 08:08:32.217892099 +0000 UTC m=+67.069177513" 
watchObservedRunningTime="2025-07-10 08:08:32.218815809 +0000 UTC m=+67.070101243" Jul 10 08:08:32.284365 kubelet[2824]: I0710 08:08:32.284159 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jz74t" podStartSLOduration=63.284139187 podStartE2EDuration="1m3.284139187s" podCreationTimestamp="2025-07-10 08:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 08:08:32.283746165 +0000 UTC m=+67.135031579" watchObservedRunningTime="2025-07-10 08:08:32.284139187 +0000 UTC m=+67.135424591" Jul 10 08:08:32.413494 containerd[1541]: time="2025-07-10T08:08:32.413441336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cd68b8fff-mshq4,Uid:ebfcfa0b-3df6-4671-b7ec-2f40d76fc497,Namespace:calico-system,Attempt:0,}" Jul 10 08:08:32.482152 systemd-networkd[1456]: cali64746e22796: Gained IPv6LL Jul 10 08:08:32.870305 systemd-networkd[1456]: cali84920950b34: Link UP Jul 10 08:08:32.878202 systemd-networkd[1456]: cali84920950b34: Gained carrier Jul 10 08:08:32.935601 containerd[1541]: 2025-07-10 08:08:32.611 [INFO][5157] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--kube--controllers--6cd68b8fff--mshq4-eth0 calico-kube-controllers-6cd68b8fff- calico-system ebfcfa0b-3df6-4671-b7ec-2f40d76fc497 840 0 2025-07-10 08:07:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6cd68b8fff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4391-0-0-n-29a01ddc69.novalocal calico-kube-controllers-6cd68b8fff-mshq4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali84920950b34 [] [] }} 
ContainerID="d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1" Namespace="calico-system" Pod="calico-kube-controllers-6cd68b8fff-mshq4" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--kube--controllers--6cd68b8fff--mshq4-" Jul 10 08:08:32.935601 containerd[1541]: 2025-07-10 08:08:32.613 [INFO][5157] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1" Namespace="calico-system" Pod="calico-kube-controllers-6cd68b8fff-mshq4" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--kube--controllers--6cd68b8fff--mshq4-eth0" Jul 10 08:08:32.935601 containerd[1541]: 2025-07-10 08:08:32.701 [INFO][5169] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1" HandleID="k8s-pod-network.d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--kube--controllers--6cd68b8fff--mshq4-eth0" Jul 10 08:08:32.935601 containerd[1541]: 2025-07-10 08:08:32.701 [INFO][5169] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1" HandleID="k8s-pod-network.d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--kube--controllers--6cd68b8fff--mshq4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ade90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4391-0-0-n-29a01ddc69.novalocal", "pod":"calico-kube-controllers-6cd68b8fff-mshq4", "timestamp":"2025-07-10 08:08:32.701120302 +0000 UTC"}, Hostname:"ci-4391-0-0-n-29a01ddc69.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Jul 10 08:08:32.935601 containerd[1541]: 2025-07-10 08:08:32.701 [INFO][5169] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 08:08:32.935601 containerd[1541]: 2025-07-10 08:08:32.701 [INFO][5169] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 08:08:32.935601 containerd[1541]: 2025-07-10 08:08:32.701 [INFO][5169] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4391-0-0-n-29a01ddc69.novalocal' Jul 10 08:08:32.935601 containerd[1541]: 2025-07-10 08:08:32.758 [INFO][5169] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:32.935601 containerd[1541]: 2025-07-10 08:08:32.776 [INFO][5169] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:32.935601 containerd[1541]: 2025-07-10 08:08:32.786 [INFO][5169] ipam/ipam.go 511: Trying affinity for 192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:32.935601 containerd[1541]: 2025-07-10 08:08:32.790 [INFO][5169] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:32.935601 containerd[1541]: 2025-07-10 08:08:32.795 [INFO][5169] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:32.935601 containerd[1541]: 2025-07-10 08:08:32.795 [INFO][5169] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.0/26 handle="k8s-pod-network.d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:32.935601 containerd[1541]: 2025-07-10 08:08:32.799 [INFO][5169] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1 Jul 10 
08:08:32.935601 containerd[1541]: 2025-07-10 08:08:32.813 [INFO][5169] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.0/26 handle="k8s-pod-network.d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:32.935601 containerd[1541]: 2025-07-10 08:08:32.850 [INFO][5169] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.9/26] block=192.168.95.0/26 handle="k8s-pod-network.d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:32.935601 containerd[1541]: 2025-07-10 08:08:32.850 [INFO][5169] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.9/26] handle="k8s-pod-network.d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:08:32.935601 containerd[1541]: 2025-07-10 08:08:32.850 [INFO][5169] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 08:08:32.937117 containerd[1541]: 2025-07-10 08:08:32.851 [INFO][5169] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.9/26] IPv6=[] ContainerID="d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1" HandleID="k8s-pod-network.d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--kube--controllers--6cd68b8fff--mshq4-eth0" Jul 10 08:08:32.937117 containerd[1541]: 2025-07-10 08:08:32.857 [INFO][5157] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1" Namespace="calico-system" Pod="calico-kube-controllers-6cd68b8fff-mshq4" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--kube--controllers--6cd68b8fff--mshq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--kube--controllers--6cd68b8fff--mshq4-eth0", GenerateName:"calico-kube-controllers-6cd68b8fff-", Namespace:"calico-system", SelfLink:"", UID:"ebfcfa0b-3df6-4671-b7ec-2f40d76fc497", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 8, 7, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cd68b8fff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-29a01ddc69.novalocal", ContainerID:"", Pod:"calico-kube-controllers-6cd68b8fff-mshq4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.95.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali84920950b34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 08:08:32.937117 containerd[1541]: 2025-07-10 08:08:32.859 [INFO][5157] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.9/32] ContainerID="d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1" Namespace="calico-system" Pod="calico-kube-controllers-6cd68b8fff-mshq4" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--kube--controllers--6cd68b8fff--mshq4-eth0" Jul 10 08:08:32.937117 containerd[1541]: 2025-07-10 08:08:32.859 [INFO][5157] cni-plugin/dataplane_linux.go 69: Setting the host side veth 
name to cali84920950b34 ContainerID="d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1" Namespace="calico-system" Pod="calico-kube-controllers-6cd68b8fff-mshq4" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--kube--controllers--6cd68b8fff--mshq4-eth0" Jul 10 08:08:32.937117 containerd[1541]: 2025-07-10 08:08:32.869 [INFO][5157] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1" Namespace="calico-system" Pod="calico-kube-controllers-6cd68b8fff-mshq4" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--kube--controllers--6cd68b8fff--mshq4-eth0" Jul 10 08:08:32.937464 containerd[1541]: 2025-07-10 08:08:32.872 [INFO][5157] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1" Namespace="calico-system" Pod="calico-kube-controllers-6cd68b8fff-mshq4" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--kube--controllers--6cd68b8fff--mshq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--kube--controllers--6cd68b8fff--mshq4-eth0", GenerateName:"calico-kube-controllers-6cd68b8fff-", Namespace:"calico-system", SelfLink:"", UID:"ebfcfa0b-3df6-4671-b7ec-2f40d76fc497", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 8, 7, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cd68b8fff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-29a01ddc69.novalocal", ContainerID:"d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1", Pod:"calico-kube-controllers-6cd68b8fff-mshq4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.95.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali84920950b34", MAC:"56:b2:39:1d:e0:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 08:08:32.937464 containerd[1541]: 2025-07-10 08:08:32.931 [INFO][5157] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1" Namespace="calico-system" Pod="calico-kube-controllers-6cd68b8fff-mshq4" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--kube--controllers--6cd68b8fff--mshq4-eth0" Jul 10 08:08:33.026677 containerd[1541]: time="2025-07-10T08:08:33.026584946Z" level=info msg="connecting to shim d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1" address="unix:///run/containerd/s/93df6878d4909b7a67b5c00cf928dd7ec274431f8a70e2165b5cb97312b7023c" namespace=k8s.io protocol=ttrpc version=3 Jul 10 08:08:33.084216 systemd[1]: Started cri-containerd-d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1.scope - libcontainer container d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1. 
Jul 10 08:08:33.197258 kubelet[2824]: I0710 08:08:33.196883 2824 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 08:08:33.242482 containerd[1541]: time="2025-07-10T08:08:33.242288221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cd68b8fff-mshq4,Uid:ebfcfa0b-3df6-4671-b7ec-2f40d76fc497,Namespace:calico-system,Attempt:0,} returns sandbox id \"d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1\"" Jul 10 08:08:33.506292 systemd-networkd[1456]: calid1d21a2d925: Gained IPv6LL Jul 10 08:08:34.081220 systemd-networkd[1456]: cali84920950b34: Gained IPv6LL Jul 10 08:08:34.419293 containerd[1541]: time="2025-07-10T08:08:34.419166989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:08:34.426033 containerd[1541]: time="2025-07-10T08:08:34.424500913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 10 08:08:34.431352 containerd[1541]: time="2025-07-10T08:08:34.431270246Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:08:34.442089 containerd[1541]: time="2025-07-10T08:08:34.441205538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:08:34.445930 containerd[1541]: time="2025-07-10T08:08:34.445806434Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size 
\"10251893\" in 2.995405506s" Jul 10 08:08:34.445930 containerd[1541]: time="2025-07-10T08:08:34.445895190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 10 08:08:34.452391 containerd[1541]: time="2025-07-10T08:08:34.452272732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 10 08:08:34.463519 containerd[1541]: time="2025-07-10T08:08:34.463449649Z" level=info msg="CreateContainer within sandbox \"c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 10 08:08:34.512387 containerd[1541]: time="2025-07-10T08:08:34.512299523Z" level=info msg="Container 3891b9a5f162dc519b528ba2388a7c42d9a4d88e17748b9239b8ab89d9122a72: CDI devices from CRI Config.CDIDevices: []" Jul 10 08:08:34.584673 containerd[1541]: time="2025-07-10T08:08:34.584577889Z" level=info msg="CreateContainer within sandbox \"c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3891b9a5f162dc519b528ba2388a7c42d9a4d88e17748b9239b8ab89d9122a72\"" Jul 10 08:08:34.588230 containerd[1541]: time="2025-07-10T08:08:34.588176825Z" level=info msg="StartContainer for \"3891b9a5f162dc519b528ba2388a7c42d9a4d88e17748b9239b8ab89d9122a72\"" Jul 10 08:08:34.595960 containerd[1541]: time="2025-07-10T08:08:34.595861848Z" level=info msg="connecting to shim 3891b9a5f162dc519b528ba2388a7c42d9a4d88e17748b9239b8ab89d9122a72" address="unix:///run/containerd/s/67393172428930dfb528f632fe1c193086943533824fc6cc00112a5e40473ac7" protocol=ttrpc version=3 Jul 10 08:08:34.664415 systemd[1]: Started cri-containerd-3891b9a5f162dc519b528ba2388a7c42d9a4d88e17748b9239b8ab89d9122a72.scope - libcontainer container 3891b9a5f162dc519b528ba2388a7c42d9a4d88e17748b9239b8ab89d9122a72. 
Jul 10 08:08:34.803447 containerd[1541]: time="2025-07-10T08:08:34.803306060Z" level=info msg="StartContainer for \"3891b9a5f162dc519b528ba2388a7c42d9a4d88e17748b9239b8ab89d9122a72\" returns successfully" Jul 10 08:08:39.112812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1034605321.mount: Deactivated successfully. Jul 10 08:08:48.910309 systemd[1]: cri-containerd-06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c.scope: Deactivated successfully. Jul 10 08:08:48.910859 systemd[1]: cri-containerd-06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c.scope: Consumed 3.890s CPU time, 55.1M memory peak, 64K read from disk. Jul 10 08:08:48.956850 systemd[1]: cri-containerd-493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36.scope: Deactivated successfully. Jul 10 08:08:48.957229 systemd[1]: cri-containerd-493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36.scope: Consumed 6.346s CPU time, 79.4M memory peak. Jul 10 08:08:53.943277 containerd[1541]: time="2025-07-10T08:08:53.941491564Z" level=info msg="received exit event container_id:\"493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36\" id:\"493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36\" pid:3216 exit_status:1 exited_at:{seconds:1752134928 nanos:956039642}" Jul 10 08:08:53.944354 containerd[1541]: time="2025-07-10T08:08:53.944160525Z" level=info msg="received exit event container_id:\"06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c\" id:\"06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c\" pid:2671 exit_status:1 exited_at:{seconds:1752134928 nanos:939362609}" Jul 10 08:08:54.043210 containerd[1541]: time="2025-07-10T08:08:54.042722433Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c\" id:\"06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c\" pid:2671 exit_status:1 
exited_at:{seconds:1752134928 nanos:939362609}" Jul 10 08:08:54.043388 containerd[1541]: time="2025-07-10T08:08:54.043229770Z" level=info msg="TaskExit event in podsandbox handler container_id:\"493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36\" id:\"493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36\" pid:3216 exit_status:1 exited_at:{seconds:1752134928 nanos:956039642}" Jul 10 08:08:55.559155 systemd[1]: cri-containerd-898485fa8ca3aad154e7c61a92cbbee54884545bcd87d8bf1cb66cb3790f6c31.scope: Deactivated successfully. Jul 10 08:09:05.089324 kubelet[2824]: E0710 08:09:01.270160 2824 controller.go:195] "Failed to update lease" err="etcdserver: request timed out" Jul 10 08:09:05.283213 containerd[1541]: time="2025-07-10T08:08:53.956490179Z" level=error msg="post event" error="context deadline exceeded" Jul 10 08:09:05.283213 containerd[1541]: time="2025-07-10T08:08:54.042845816Z" level=error msg="ttrpc: received message on inactive stream" stream=17 Jul 10 08:09:05.283213 containerd[1541]: time="2025-07-10T08:09:01.233514379Z" level=error msg="forward event" error="context deadline exceeded" Jul 10 08:09:05.283213 containerd[1541]: time="2025-07-10T08:09:05.055929430Z" level=error msg="get state for 06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c" error="context deadline exceeded" Jul 10 08:09:05.283213 containerd[1541]: time="2025-07-10T08:09:05.056041463Z" level=warning msg="unknown status" status=0 Jul 10 08:09:05.283213 containerd[1541]: time="2025-07-10T08:09:01.232422996Z" level=error msg="post event" error="context deadline exceeded" Jul 10 08:09:05.283213 containerd[1541]: time="2025-07-10T08:09:05.061386248Z" level=info msg="received exit event container_id:\"898485fa8ca3aad154e7c61a92cbbee54884545bcd87d8bf1cb66cb3790f6c31\" id:\"898485fa8ca3aad154e7c61a92cbbee54884545bcd87d8bf1cb66cb3790f6c31\" pid:2644 exit_status:1 exited_at:{seconds:1752134935 nanos:564927407}" Jul 10 08:09:05.283213 containerd[1541]: 
time="2025-07-10T08:09:05.086880434Z" level=info msg="TaskExit event in podsandbox handler container_id:\"898485fa8ca3aad154e7c61a92cbbee54884545bcd87d8bf1cb66cb3790f6c31\" id:\"898485fa8ca3aad154e7c61a92cbbee54884545bcd87d8bf1cb66cb3790f6c31\" pid:2644 exit_status:1 exited_at:{seconds:1752134935 nanos:564927407}" Jul 10 08:09:05.283213 containerd[1541]: time="2025-07-10T08:09:05.086932603Z" level=info msg="TaskExit event in podsandbox handler container_id:\"493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36\" id:\"493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36\" pid:3216 exit_status:1 exited_at:{seconds:1752134928 nanos:956039642}" Jul 10 08:09:05.283213 containerd[1541]: time="2025-07-10T08:09:05.087028894Z" level=info msg="TaskExit event in podsandbox handler container_id:\"493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36\" id:\"493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36\" pid:3216 exit_status:1 exited_at:{seconds:1752134928 nanos:956039642}" Jul 10 08:09:05.283213 containerd[1541]: time="2025-07-10T08:09:05.087067828Z" level=info msg="TaskExit event in podsandbox handler container_id:\"898485fa8ca3aad154e7c61a92cbbee54884545bcd87d8bf1cb66cb3790f6c31\" id:\"898485fa8ca3aad154e7c61a92cbbee54884545bcd87d8bf1cb66cb3790f6c31\" pid:2644 exit_status:1 exited_at:{seconds:1752134935 nanos:564927407}" Jul 10 08:09:05.283213 containerd[1541]: time="2025-07-10T08:09:05.087141077Z" level=error msg="ttrpc: received message on inactive stream" stream=37 Jul 10 08:09:05.283213 containerd[1541]: time="2025-07-10T08:09:05.180356903Z" level=error msg="get state for 06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c" error="context deadline exceeded" Jul 10 08:09:05.283213 containerd[1541]: time="2025-07-10T08:09:05.180397921Z" level=warning msg="unknown status" status=0 Jul 10 08:09:05.283213 containerd[1541]: time="2025-07-10T08:09:05.188360600Z" level=error msg="failed to handle container 
TaskExit event container_id:\"06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c\" id:\"06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c\" pid:2671 exit_status:1 exited_at:{seconds:1752134928 nanos:939362609}" error="failed to stop container: failed to delete task: context deadline exceeded" Jul 10 08:09:05.283213 containerd[1541]: time="2025-07-10T08:09:05.088026853Z" level=error msg="ttrpc: received message on inactive stream" stream=19 Jul 10 08:09:05.283213 containerd[1541]: time="2025-07-10T08:09:05.088769078Z" level=error msg="ttrpc: received message on inactive stream" stream=9 Jul 10 08:09:05.283213 containerd[1541]: time="2025-07-10T08:09:05.194022925Z" level=error msg="ttrpc: received message on inactive stream" stream=41 Jul 10 08:09:05.283213 containerd[1541]: time="2025-07-10T08:09:05.245364246Z" level=error msg="get state for 493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36" error="context deadline exceeded" Jul 10 08:09:05.283213 containerd[1541]: time="2025-07-10T08:09:05.245400205Z" level=warning msg="unknown status" status=0 Jul 10 08:08:55.559575 systemd[1]: cri-containerd-898485fa8ca3aad154e7c61a92cbbee54884545bcd87d8bf1cb66cb3790f6c31.scope: Consumed 2.244s CPU time, 21.8M memory peak. 
Jul 10 08:09:05.345867 kubelet[2824]: E0710 08:09:01.255650 2824 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal.1850d5707212cc59 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal,UID:6bd47e81634a1fad90cea695d58949a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4391-0-0-n-29a01ddc69.novalocal,},FirstTimestamp:2025-07-10 08:08:53.974010969 +0000 UTC m=+88.825296373,LastTimestamp:2025-07-10 08:08:53.974010969 +0000 UTC m=+88.825296373,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4391-0-0-n-29a01ddc69.novalocal,}" Jul 10 08:09:05.346253 containerd[1541]: time="2025-07-10T08:09:05.309216865Z" level=error msg="ttrpc: received message on inactive stream" stream=43 Jul 10 08:09:05.346253 containerd[1541]: time="2025-07-10T08:09:05.309309199Z" level=error msg="ttrpc: received message on inactive stream" stream=31 Jul 10 08:09:05.232599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c-rootfs.mount: Deactivated successfully. 
Jul 10 08:09:05.351981 kubelet[2824]: E0710 08:09:05.350985 2824 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="23.941s" Jul 10 08:09:05.441496 kubelet[2824]: E0710 08:09:05.441300 2824 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-4391-0-0-n-29a01ddc69.novalocal\": the object has been modified; please apply your changes to the latest version and try again" Jul 10 08:09:05.534319 containerd[1541]: time="2025-07-10T08:09:05.530196558Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88f1ac5ff1454222c08bd22caa0f4fc8adf44e93c2488aa119f523dcbab21310\" id:\"93ba8d834a3dc372d8e02a2e538e876ce360c0b62b81a4d823890b20a41f1eaf\" pid:5319 exited_at:{seconds:1752134945 nanos:481555607}" Jul 10 08:09:05.675236 containerd[1541]: time="2025-07-10T08:09:05.674391310Z" level=error msg="get state for 493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36" error="context deadline exceeded" Jul 10 08:09:05.675236 containerd[1541]: time="2025-07-10T08:09:05.674442206Z" level=warning msg="unknown status" status=0 Jul 10 08:09:05.676995 containerd[1541]: time="2025-07-10T08:09:05.675904934Z" level=error msg="failed to handle container TaskExit event container_id:\"493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36\" id:\"493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36\" pid:3216 exit_status:1 exited_at:{seconds:1752134928 nanos:956039642}" error="failed to stop container: failed to delete task: context deadline exceeded" Jul 10 08:09:05.698466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36-rootfs.mount: Deactivated successfully. 
Jul 10 08:09:05.709810 containerd[1541]: time="2025-07-10T08:09:05.709619427Z" level=error msg="ttrpc: received message on inactive stream" stream=37 Jul 10 08:09:05.710335 containerd[1541]: time="2025-07-10T08:09:05.710150863Z" level=error msg="ttrpc: received message on inactive stream" stream=35 Jul 10 08:09:05.827510 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-898485fa8ca3aad154e7c61a92cbbee54884545bcd87d8bf1cb66cb3790f6c31-rootfs.mount: Deactivated successfully. Jul 10 08:09:05.881589 containerd[1541]: time="2025-07-10T08:09:05.881447834Z" level=error msg="collecting metrics for 898485fa8ca3aad154e7c61a92cbbee54884545bcd87d8bf1cb66cb3790f6c31" error="ttrpc: closed" Jul 10 08:09:05.995496 containerd[1541]: time="2025-07-10T08:09:05.992440883Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88f1ac5ff1454222c08bd22caa0f4fc8adf44e93c2488aa119f523dcbab21310\" id:\"380799bf313ace27681ef19ee744f59dae708da04b788990fac8311a75fff650\" pid:5358 exited_at:{seconds:1752134945 nanos:921908063}" Jul 10 08:09:06.597349 containerd[1541]: time="2025-07-10T08:09:06.597040239Z" level=info msg="TaskExit event container_id:\"06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c\" id:\"06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c\" pid:2671 exit_status:1 exited_at:{seconds:1752134928 nanos:939362609}" Jul 10 08:09:08.762065 containerd[1541]: time="2025-07-10T08:09:08.760403400Z" level=info msg="TaskExit event container_id:\"493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36\" id:\"493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36\" pid:3216 exit_status:1 exited_at:{seconds:1752134928 nanos:956039642}" Jul 10 08:09:08.902807 containerd[1541]: time="2025-07-10T08:09:08.902697356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:09:08.912607 containerd[1541]: 
time="2025-07-10T08:09:08.912505903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 10 08:09:08.919235 containerd[1541]: time="2025-07-10T08:09:08.919081586Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:09:08.934198 containerd[1541]: time="2025-07-10T08:09:08.932631147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:09:08.934198 containerd[1541]: time="2025-07-10T08:09:08.933794481Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 34.481444343s" Jul 10 08:09:08.934198 containerd[1541]: time="2025-07-10T08:09:08.933870685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 10 08:09:08.944260 containerd[1541]: time="2025-07-10T08:09:08.944185379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 10 08:09:08.952087 containerd[1541]: time="2025-07-10T08:09:08.950434013Z" level=info msg="CreateContainer within sandbox \"84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 10 08:09:09.007500 containerd[1541]: time="2025-07-10T08:09:09.007313669Z" level=info msg="Container 
fc2c6b3a5b112aa3230b2179130f5553aaa37b26bd05509fa11bd92c74121df1: CDI devices from CRI Config.CDIDevices: []" Jul 10 08:09:09.099623 containerd[1541]: time="2025-07-10T08:09:09.098869084Z" level=info msg="CreateContainer within sandbox \"84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"fc2c6b3a5b112aa3230b2179130f5553aaa37b26bd05509fa11bd92c74121df1\"" Jul 10 08:09:09.102038 containerd[1541]: time="2025-07-10T08:09:09.101943899Z" level=info msg="StartContainer for \"fc2c6b3a5b112aa3230b2179130f5553aaa37b26bd05509fa11bd92c74121df1\"" Jul 10 08:09:09.104499 containerd[1541]: time="2025-07-10T08:09:09.104450308Z" level=info msg="connecting to shim fc2c6b3a5b112aa3230b2179130f5553aaa37b26bd05509fa11bd92c74121df1" address="unix:///run/containerd/s/fb7dadd3b8bae6eede254c5dfafd247d2839fa4b851b95975e62356a6240bc8d" protocol=ttrpc version=3 Jul 10 08:09:09.141236 systemd[1]: Started cri-containerd-fc2c6b3a5b112aa3230b2179130f5553aaa37b26bd05509fa11bd92c74121df1.scope - libcontainer container fc2c6b3a5b112aa3230b2179130f5553aaa37b26bd05509fa11bd92c74121df1. 
Jul 10 08:09:09.390127 containerd[1541]: time="2025-07-10T08:09:09.389112128Z" level=info msg="StartContainer for \"fc2c6b3a5b112aa3230b2179130f5553aaa37b26bd05509fa11bd92c74121df1\" returns successfully" Jul 10 08:09:09.434877 kubelet[2824]: I0710 08:09:09.434814 2824 scope.go:117] "RemoveContainer" containerID="898485fa8ca3aad154e7c61a92cbbee54884545bcd87d8bf1cb66cb3790f6c31" Jul 10 08:09:09.442542 kubelet[2824]: I0710 08:09:09.442475 2824 scope.go:117] "RemoveContainer" containerID="06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c" Jul 10 08:09:09.444985 containerd[1541]: time="2025-07-10T08:09:09.444526720Z" level=info msg="CreateContainer within sandbox \"b3f94042d4bdb0254aafe8abfac01c5b5c963cbb44a5244334eb9404284dd8a2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 10 08:09:09.447205 containerd[1541]: time="2025-07-10T08:09:09.447163285Z" level=info msg="CreateContainer within sandbox \"f42b79f77704a99f978aaf4a6f08c28ca92ac2d532a0ba009eb686dcd899def2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 10 08:09:09.447794 kubelet[2824]: I0710 08:09:09.447752 2824 scope.go:117] "RemoveContainer" containerID="d6e4d88eaa32d5e419f4bcf2f7ef79066681308df794e6dd66e3319b794ead02" Jul 10 08:09:09.448442 kubelet[2824]: I0710 08:09:09.448417 2824 scope.go:117] "RemoveContainer" containerID="493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36" Jul 10 08:09:09.453473 containerd[1541]: time="2025-07-10T08:09:09.453359272Z" level=info msg="RemoveContainer for \"d6e4d88eaa32d5e419f4bcf2f7ef79066681308df794e6dd66e3319b794ead02\"" Jul 10 08:09:09.454484 containerd[1541]: time="2025-07-10T08:09:09.454437484Z" level=info msg="CreateContainer within sandbox \"83e1964542d4294c46b7b8320377930353bf359abd94ba77da28dbe8cce1e7e6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:2,}" Jul 10 08:09:09.618565 containerd[1541]: time="2025-07-10T08:09:09.617886168Z" level=info msg="RemoveContainer 
for \"d6e4d88eaa32d5e419f4bcf2f7ef79066681308df794e6dd66e3319b794ead02\" returns successfully" Jul 10 08:09:09.626052 containerd[1541]: time="2025-07-10T08:09:09.625790351Z" level=info msg="Container 29c543e6ef704dc3059d37bbcba5e42ebc60513aa5925aa4f969353801a9bf7c: CDI devices from CRI Config.CDIDevices: []" Jul 10 08:09:09.635043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2223451060.mount: Deactivated successfully. Jul 10 08:09:09.638904 containerd[1541]: time="2025-07-10T08:09:09.638193797Z" level=info msg="Container e08bb40de573b59c26d3974ec83a0d676c8a3918a5105f2de2a4a27a6cdd3aa2: CDI devices from CRI Config.CDIDevices: []" Jul 10 08:09:09.749170 containerd[1541]: time="2025-07-10T08:09:09.749072547Z" level=info msg="Container ad5cb69d9775d7d029ab351caa5fa9577ced1890bf3c799a26d74e3e096edaec: CDI devices from CRI Config.CDIDevices: []" Jul 10 08:09:09.888181 containerd[1541]: time="2025-07-10T08:09:09.887849515Z" level=info msg="CreateContainer within sandbox \"b3f94042d4bdb0254aafe8abfac01c5b5c963cbb44a5244334eb9404284dd8a2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"29c543e6ef704dc3059d37bbcba5e42ebc60513aa5925aa4f969353801a9bf7c\"" Jul 10 08:09:09.891541 containerd[1541]: time="2025-07-10T08:09:09.891386075Z" level=info msg="StartContainer for \"29c543e6ef704dc3059d37bbcba5e42ebc60513aa5925aa4f969353801a9bf7c\"" Jul 10 08:09:09.899167 containerd[1541]: time="2025-07-10T08:09:09.898792354Z" level=info msg="connecting to shim 29c543e6ef704dc3059d37bbcba5e42ebc60513aa5925aa4f969353801a9bf7c" address="unix:///run/containerd/s/30c72d7507561355487f6ee5d36c7fe4d7d1edc1dc1abfe41203881c95e15e70" protocol=ttrpc version=3 Jul 10 08:09:09.906416 containerd[1541]: time="2025-07-10T08:09:09.906302020Z" level=info msg="CreateContainer within sandbox \"83e1964542d4294c46b7b8320377930353bf359abd94ba77da28dbe8cce1e7e6\" for &ContainerMetadata{Name:tigera-operator,Attempt:2,} returns container id 
\"e08bb40de573b59c26d3974ec83a0d676c8a3918a5105f2de2a4a27a6cdd3aa2\"" Jul 10 08:09:09.918009 containerd[1541]: time="2025-07-10T08:09:09.917298260Z" level=info msg="StartContainer for \"e08bb40de573b59c26d3974ec83a0d676c8a3918a5105f2de2a4a27a6cdd3aa2\"" Jul 10 08:09:09.922987 containerd[1541]: time="2025-07-10T08:09:09.922803739Z" level=info msg="connecting to shim e08bb40de573b59c26d3974ec83a0d676c8a3918a5105f2de2a4a27a6cdd3aa2" address="unix:///run/containerd/s/4a621fafdac6b908a1bd19fb006eb1f6a38bed52ae649271397457c076b82963" protocol=ttrpc version=3 Jul 10 08:09:09.963424 containerd[1541]: time="2025-07-10T08:09:09.963342973Z" level=info msg="CreateContainer within sandbox \"f42b79f77704a99f978aaf4a6f08c28ca92ac2d532a0ba009eb686dcd899def2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ad5cb69d9775d7d029ab351caa5fa9577ced1890bf3c799a26d74e3e096edaec\"" Jul 10 08:09:09.966549 containerd[1541]: time="2025-07-10T08:09:09.965355486Z" level=info msg="StartContainer for \"ad5cb69d9775d7d029ab351caa5fa9577ced1890bf3c799a26d74e3e096edaec\"" Jul 10 08:09:09.979664 containerd[1541]: time="2025-07-10T08:09:09.979319919Z" level=info msg="connecting to shim ad5cb69d9775d7d029ab351caa5fa9577ced1890bf3c799a26d74e3e096edaec" address="unix:///run/containerd/s/43474bc45c8b9396187ac29754ef5b498c52f78fc73669c2a63aa40d005548c4" protocol=ttrpc version=3 Jul 10 08:09:09.980408 systemd[1]: Started cri-containerd-29c543e6ef704dc3059d37bbcba5e42ebc60513aa5925aa4f969353801a9bf7c.scope - libcontainer container 29c543e6ef704dc3059d37bbcba5e42ebc60513aa5925aa4f969353801a9bf7c. Jul 10 08:09:09.992315 systemd[1]: Started cri-containerd-e08bb40de573b59c26d3974ec83a0d676c8a3918a5105f2de2a4a27a6cdd3aa2.scope - libcontainer container e08bb40de573b59c26d3974ec83a0d676c8a3918a5105f2de2a4a27a6cdd3aa2. 
Jul 10 08:09:10.036312 systemd[1]: Started cri-containerd-ad5cb69d9775d7d029ab351caa5fa9577ced1890bf3c799a26d74e3e096edaec.scope - libcontainer container ad5cb69d9775d7d029ab351caa5fa9577ced1890bf3c799a26d74e3e096edaec. Jul 10 08:09:10.151189 containerd[1541]: time="2025-07-10T08:09:10.151117492Z" level=info msg="StartContainer for \"e08bb40de573b59c26d3974ec83a0d676c8a3918a5105f2de2a4a27a6cdd3aa2\" returns successfully" Jul 10 08:09:10.176359 containerd[1541]: time="2025-07-10T08:09:10.176305197Z" level=info msg="StartContainer for \"29c543e6ef704dc3059d37bbcba5e42ebc60513aa5925aa4f969353801a9bf7c\" returns successfully" Jul 10 08:09:10.194533 containerd[1541]: time="2025-07-10T08:09:10.194486967Z" level=info msg="StartContainer for \"ad5cb69d9775d7d029ab351caa5fa9577ced1890bf3c799a26d74e3e096edaec\" returns successfully" Jul 10 08:09:10.542158 kubelet[2824]: I0710 08:09:10.540841 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-74b599d7df-4qd72" podStartSLOduration=3.48522293 podStartE2EDuration="50.540757303s" podCreationTimestamp="2025-07-10 08:08:20 +0000 UTC" firstStartedPulling="2025-07-10 08:08:21.88540592 +0000 UTC m=+56.736691334" lastFinishedPulling="2025-07-10 08:09:08.940940243 +0000 UTC m=+103.792225707" observedRunningTime="2025-07-10 08:09:09.784669039 +0000 UTC m=+104.635954443" watchObservedRunningTime="2025-07-10 08:09:10.540757303 +0000 UTC m=+105.392042727" Jul 10 08:09:13.961261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1825451037.mount: Deactivated successfully. 
Jul 10 08:09:14.814244 containerd[1541]: time="2025-07-10T08:09:14.814170611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:09:14.815875 containerd[1541]: time="2025-07-10T08:09:14.815849755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 10 08:09:14.817054 containerd[1541]: time="2025-07-10T08:09:14.816920125Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:09:14.820407 containerd[1541]: time="2025-07-10T08:09:14.820374726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:09:14.821410 containerd[1541]: time="2025-07-10T08:09:14.821352149Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 5.877092801s" Jul 10 08:09:14.821483 containerd[1541]: time="2025-07-10T08:09:14.821420258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 10 08:09:14.825149 containerd[1541]: time="2025-07-10T08:09:14.825104906Z" level=info msg="CreateContainer within sandbox \"c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 10 08:09:14.825321 containerd[1541]: time="2025-07-10T08:09:14.825296870Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 10 08:09:14.841367 containerd[1541]: time="2025-07-10T08:09:14.840220099Z" level=info msg="Container c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8: CDI devices from CRI Config.CDIDevices: []" Jul 10 08:09:14.849778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3618475467.mount: Deactivated successfully. Jul 10 08:09:14.863008 containerd[1541]: time="2025-07-10T08:09:14.862927080Z" level=info msg="CreateContainer within sandbox \"c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8\"" Jul 10 08:09:14.864168 containerd[1541]: time="2025-07-10T08:09:14.864137084Z" level=info msg="StartContainer for \"c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8\"" Jul 10 08:09:14.866297 containerd[1541]: time="2025-07-10T08:09:14.866260561Z" level=info msg="connecting to shim c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8" address="unix:///run/containerd/s/909dd07702da6cd1be75e7f98e969f05107beb6784d242e70d3d21a25c9504cc" protocol=ttrpc version=3 Jul 10 08:09:14.903212 systemd[1]: Started cri-containerd-c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8.scope - libcontainer container c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8. 
Jul 10 08:09:14.998526 containerd[1541]: time="2025-07-10T08:09:14.998467591Z" level=info msg="StartContainer for \"c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8\" returns successfully" Jul 10 08:09:15.619290 kubelet[2824]: I0710 08:09:15.619110 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-gxfms" podStartSLOduration=45.873684542 podStartE2EDuration="1m28.619083249s" podCreationTimestamp="2025-07-10 08:07:47 +0000 UTC" firstStartedPulling="2025-07-10 08:08:32.077480949 +0000 UTC m=+66.928766353" lastFinishedPulling="2025-07-10 08:09:14.822879646 +0000 UTC m=+109.674165060" observedRunningTime="2025-07-10 08:09:15.617425205 +0000 UTC m=+110.468710609" watchObservedRunningTime="2025-07-10 08:09:15.619083249 +0000 UTC m=+110.470368653" Jul 10 08:09:15.743028 containerd[1541]: time="2025-07-10T08:09:15.742899663Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8\" id:\"83e39122a0567d94d00fec889f899ae7fd576e872b2c55a3495d38588a1044de\" pid:5591 exited_at:{seconds:1752134955 nanos:742123912}" Jul 10 08:09:20.151659 containerd[1541]: time="2025-07-10T08:09:20.151101415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:09:20.156230 containerd[1541]: time="2025-07-10T08:09:20.153618704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 10 08:09:20.156230 containerd[1541]: time="2025-07-10T08:09:20.155033089Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:09:20.160102 containerd[1541]: time="2025-07-10T08:09:20.160006159Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:09:20.160997 containerd[1541]: time="2025-07-10T08:09:20.160914844Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 5.335500051s" Jul 10 08:09:20.161239 containerd[1541]: time="2025-07-10T08:09:20.161185138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 10 08:09:20.166497 containerd[1541]: time="2025-07-10T08:09:20.166471403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 10 08:09:20.205076 containerd[1541]: time="2025-07-10T08:09:20.205007158Z" level=info msg="CreateContainer within sandbox \"d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 10 08:09:20.226718 containerd[1541]: time="2025-07-10T08:09:20.226666947Z" level=info msg="Container a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6: CDI devices from CRI Config.CDIDevices: []" Jul 10 08:09:20.249932 containerd[1541]: time="2025-07-10T08:09:20.249882720Z" level=info msg="CreateContainer within sandbox \"d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6\"" Jul 10 08:09:20.250725 containerd[1541]: time="2025-07-10T08:09:20.250680854Z" 
level=info msg="StartContainer for \"a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6\"" Jul 10 08:09:20.253241 containerd[1541]: time="2025-07-10T08:09:20.253206970Z" level=info msg="connecting to shim a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6" address="unix:///run/containerd/s/93df6878d4909b7a67b5c00cf928dd7ec274431f8a70e2165b5cb97312b7023c" protocol=ttrpc version=3 Jul 10 08:09:20.293193 systemd[1]: Started cri-containerd-a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6.scope - libcontainer container a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6. Jul 10 08:09:20.391085 containerd[1541]: time="2025-07-10T08:09:20.391028629Z" level=info msg="StartContainer for \"a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6\" returns successfully" Jul 10 08:09:20.612491 kubelet[2824]: I0710 08:09:20.612077 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6cd68b8fff-mshq4" podStartSLOduration=45.69044015 podStartE2EDuration="1m32.611717912s" podCreationTimestamp="2025-07-10 08:07:48 +0000 UTC" firstStartedPulling="2025-07-10 08:08:33.245434939 +0000 UTC m=+68.096720343" lastFinishedPulling="2025-07-10 08:09:20.166712641 +0000 UTC m=+115.017998105" observedRunningTime="2025-07-10 08:09:20.61027898 +0000 UTC m=+115.461564384" watchObservedRunningTime="2025-07-10 08:09:20.611717912 +0000 UTC m=+115.463003316" Jul 10 08:09:20.655204 containerd[1541]: time="2025-07-10T08:09:20.655142569Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6\" id:\"52987930f7b2fdd73ce3b5c2334ca7cd74c49dcba4cfcd63de6fad06789aa883\" pid:5667 exited_at:{seconds:1752134960 nanos:654679831}" Jul 10 08:09:23.555036 containerd[1541]: time="2025-07-10T08:09:23.554239443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:09:23.560006 containerd[1541]: time="2025-07-10T08:09:23.558856893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 10 08:09:23.560006 containerd[1541]: time="2025-07-10T08:09:23.559407208Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:09:23.565478 containerd[1541]: time="2025-07-10T08:09:23.565409498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 08:09:23.568882 containerd[1541]: time="2025-07-10T08:09:23.568779098Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 3.402112143s" Jul 10 08:09:23.569219 containerd[1541]: time="2025-07-10T08:09:23.568883616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 10 08:09:23.586483 containerd[1541]: time="2025-07-10T08:09:23.586316355Z" level=info msg="CreateContainer within sandbox \"c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 10 08:09:23.612036 containerd[1541]: time="2025-07-10T08:09:23.611569668Z" level=info msg="Container 
190794898bfa9bffc48fff4e7e804d38fd228a12f8124979f6edaaa4e8e6493a: CDI devices from CRI Config.CDIDevices: []" Jul 10 08:09:23.634481 containerd[1541]: time="2025-07-10T08:09:23.634239284Z" level=info msg="CreateContainer within sandbox \"c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"190794898bfa9bffc48fff4e7e804d38fd228a12f8124979f6edaaa4e8e6493a\"" Jul 10 08:09:23.636991 containerd[1541]: time="2025-07-10T08:09:23.635151246Z" level=info msg="StartContainer for \"190794898bfa9bffc48fff4e7e804d38fd228a12f8124979f6edaaa4e8e6493a\"" Jul 10 08:09:23.641072 containerd[1541]: time="2025-07-10T08:09:23.641002589Z" level=info msg="connecting to shim 190794898bfa9bffc48fff4e7e804d38fd228a12f8124979f6edaaa4e8e6493a" address="unix:///run/containerd/s/67393172428930dfb528f632fe1c193086943533824fc6cc00112a5e40473ac7" protocol=ttrpc version=3 Jul 10 08:09:23.684184 systemd[1]: Started cri-containerd-190794898bfa9bffc48fff4e7e804d38fd228a12f8124979f6edaaa4e8e6493a.scope - libcontainer container 190794898bfa9bffc48fff4e7e804d38fd228a12f8124979f6edaaa4e8e6493a. 
Jul 10 08:09:23.754110 containerd[1541]: time="2025-07-10T08:09:23.754061378Z" level=info msg="StartContainer for \"190794898bfa9bffc48fff4e7e804d38fd228a12f8124979f6edaaa4e8e6493a\" returns successfully" Jul 10 08:09:24.236521 kubelet[2824]: I0710 08:09:24.236388 2824 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 10 08:09:24.238486 kubelet[2824]: I0710 08:09:24.236559 2824 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 10 08:09:24.696017 kubelet[2824]: I0710 08:09:24.694844 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-986vz" podStartSLOduration=36.578877981 podStartE2EDuration="1m36.69474473s" podCreationTimestamp="2025-07-10 08:07:48 +0000 UTC" firstStartedPulling="2025-07-10 08:08:23.458766166 +0000 UTC m=+58.310051580" lastFinishedPulling="2025-07-10 08:09:23.574632875 +0000 UTC m=+118.425918329" observedRunningTime="2025-07-10 08:09:24.69013811 +0000 UTC m=+119.541423604" watchObservedRunningTime="2025-07-10 08:09:24.69474473 +0000 UTC m=+119.546030305" Jul 10 08:09:29.607919 containerd[1541]: time="2025-07-10T08:09:29.607473159Z" level=info msg="StopContainer for \"b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb\" with timeout 30 (s)" Jul 10 08:09:29.611776 containerd[1541]: time="2025-07-10T08:09:29.611749368Z" level=info msg="Stop container \"b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb\" with signal terminated" Jul 10 08:09:29.724317 systemd[1]: Created slice kubepods-besteffort-pod73fec34e_75dc_4975_a628_f1041b5dba11.slice - libcontainer container kubepods-besteffort-pod73fec34e_75dc_4975_a628_f1041b5dba11.slice. 
Jul 10 08:09:29.771627 kubelet[2824]: I0710 08:09:29.771419 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/73fec34e-75dc-4975-a628-f1041b5dba11-calico-apiserver-certs\") pod \"calico-apiserver-75494f88d7-v8v2p\" (UID: \"73fec34e-75dc-4975-a628-f1041b5dba11\") " pod="calico-apiserver/calico-apiserver-75494f88d7-v8v2p" Jul 10 08:09:29.772462 kubelet[2824]: I0710 08:09:29.772380 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wqwb\" (UniqueName: \"kubernetes.io/projected/73fec34e-75dc-4975-a628-f1041b5dba11-kube-api-access-8wqwb\") pod \"calico-apiserver-75494f88d7-v8v2p\" (UID: \"73fec34e-75dc-4975-a628-f1041b5dba11\") " pod="calico-apiserver/calico-apiserver-75494f88d7-v8v2p" Jul 10 08:09:29.774040 systemd[1]: cri-containerd-b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb.scope: Deactivated successfully. Jul 10 08:09:29.774530 systemd[1]: cri-containerd-b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb.scope: Consumed 1.357s CPU time, 45.2M memory peak, 4K read from disk. 
Jul 10 08:09:29.783239 containerd[1541]: time="2025-07-10T08:09:29.783101705Z" level=info msg="received exit event container_id:\"b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb\" id:\"b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb\" pid:4724 exit_status:1 exited_at:{seconds:1752134969 nanos:781195391}" Jul 10 08:09:29.785683 containerd[1541]: time="2025-07-10T08:09:29.785605644Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb\" id:\"b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb\" pid:4724 exit_status:1 exited_at:{seconds:1752134969 nanos:781195391}" Jul 10 08:09:29.857177 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb-rootfs.mount: Deactivated successfully. Jul 10 08:09:30.125918 containerd[1541]: time="2025-07-10T08:09:30.125691769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75494f88d7-v8v2p,Uid:73fec34e-75dc-4975-a628-f1041b5dba11,Namespace:calico-apiserver,Attempt:0,}" Jul 10 08:09:31.087560 containerd[1541]: time="2025-07-10T08:09:31.087340041Z" level=info msg="StopContainer for \"b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb\" returns successfully" Jul 10 08:09:31.091076 containerd[1541]: time="2025-07-10T08:09:31.091011652Z" level=info msg="StopPodSandbox for \"7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658\"" Jul 10 08:09:31.091286 containerd[1541]: time="2025-07-10T08:09:31.091168841Z" level=info msg="Container to stop \"b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 08:09:31.118115 systemd[1]: cri-containerd-7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658.scope: Deactivated successfully. 
Jul 10 08:09:31.132824 containerd[1541]: time="2025-07-10T08:09:31.132656318Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658\" id:\"7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658\" pid:4309 exit_status:137 exited_at:{seconds:1752134971 nanos:126198681}" Jul 10 08:09:31.248666 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658-rootfs.mount: Deactivated successfully. Jul 10 08:09:31.254378 containerd[1541]: time="2025-07-10T08:09:31.254306296Z" level=info msg="shim disconnected" id=7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658 namespace=k8s.io Jul 10 08:09:31.254676 containerd[1541]: time="2025-07-10T08:09:31.254645782Z" level=warning msg="cleaning up after shim disconnected" id=7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658 namespace=k8s.io Jul 10 08:09:31.257777 containerd[1541]: time="2025-07-10T08:09:31.254824751Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 08:09:31.301876 containerd[1541]: time="2025-07-10T08:09:31.301750970Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8\" id:\"82b163f4fb24323d596e018171d6cf6c9ca2ab8f6fbb83f628bf4448d5939574\" pid:5762 exited_at:{seconds:1752134971 nanos:211250215}" Jul 10 08:09:31.308747 containerd[1541]: time="2025-07-10T08:09:31.306926631Z" level=info msg="received exit event sandbox_id:\"7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658\" exit_status:137 exited_at:{seconds:1752134971 nanos:126198681}" Jul 10 08:09:31.309170 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658-shm.mount: Deactivated successfully. 
Jul 10 08:09:31.407049 systemd-networkd[1456]: cali321970cdb01: Link UP Jul 10 08:09:31.410514 systemd-networkd[1456]: cali321970cdb01: Gained carrier Jul 10 08:09:31.430815 systemd-networkd[1456]: cali3fedde4012d: Link DOWN Jul 10 08:09:31.431404 systemd-networkd[1456]: cali3fedde4012d: Lost carrier Jul 10 08:09:31.456347 containerd[1541]: 2025-07-10 08:09:31.219 [INFO][5771] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--v8v2p-eth0 calico-apiserver-75494f88d7- calico-apiserver 73fec34e-75dc-4975-a628-f1041b5dba11 1190 0 2025-07-10 08:09:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75494f88d7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4391-0-0-n-29a01ddc69.novalocal calico-apiserver-75494f88d7-v8v2p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali321970cdb01 [] [] }} ContainerID="8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a" Namespace="calico-apiserver" Pod="calico-apiserver-75494f88d7-v8v2p" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--v8v2p-" Jul 10 08:09:31.456347 containerd[1541]: 2025-07-10 08:09:31.219 [INFO][5771] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a" Namespace="calico-apiserver" Pod="calico-apiserver-75494f88d7-v8v2p" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--v8v2p-eth0" Jul 10 08:09:31.456347 containerd[1541]: 2025-07-10 08:09:31.285 [INFO][5808] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a" HandleID="k8s-pod-network.8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--v8v2p-eth0" Jul 10 08:09:31.456347 containerd[1541]: 2025-07-10 08:09:31.286 [INFO][5808] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a" HandleID="k8s-pod-network.8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--v8v2p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000352cd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4391-0-0-n-29a01ddc69.novalocal", "pod":"calico-apiserver-75494f88d7-v8v2p", "timestamp":"2025-07-10 08:09:31.28580874 +0000 UTC"}, Hostname:"ci-4391-0-0-n-29a01ddc69.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 08:09:31.456347 containerd[1541]: 2025-07-10 08:09:31.286 [INFO][5808] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 08:09:31.456347 containerd[1541]: 2025-07-10 08:09:31.286 [INFO][5808] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 08:09:31.456347 containerd[1541]: 2025-07-10 08:09:31.287 [INFO][5808] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4391-0-0-n-29a01ddc69.novalocal' Jul 10 08:09:31.456347 containerd[1541]: 2025-07-10 08:09:31.299 [INFO][5808] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:09:31.456347 containerd[1541]: 2025-07-10 08:09:31.316 [INFO][5808] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:09:31.456347 containerd[1541]: 2025-07-10 08:09:31.334 [INFO][5808] ipam/ipam.go 511: Trying affinity for 192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:09:31.456347 containerd[1541]: 2025-07-10 08:09:31.339 [INFO][5808] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:09:31.456347 containerd[1541]: 2025-07-10 08:09:31.354 [INFO][5808] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.0/26 host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:09:31.456347 containerd[1541]: 2025-07-10 08:09:31.355 [INFO][5808] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.0/26 handle="k8s-pod-network.8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:09:31.456347 containerd[1541]: 2025-07-10 08:09:31.359 [INFO][5808] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a Jul 10 08:09:31.456347 containerd[1541]: 2025-07-10 08:09:31.372 [INFO][5808] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.0/26 handle="k8s-pod-network.8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:09:31.456347 
containerd[1541]: 2025-07-10 08:09:31.389 [INFO][5808] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.10/26] block=192.168.95.0/26 handle="k8s-pod-network.8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:09:31.456347 containerd[1541]: 2025-07-10 08:09:31.389 [INFO][5808] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.10/26] handle="k8s-pod-network.8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a" host="ci-4391-0-0-n-29a01ddc69.novalocal" Jul 10 08:09:31.456347 containerd[1541]: 2025-07-10 08:09:31.390 [INFO][5808] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 08:09:31.457510 containerd[1541]: 2025-07-10 08:09:31.390 [INFO][5808] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.10/26] IPv6=[] ContainerID="8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a" HandleID="k8s-pod-network.8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--v8v2p-eth0" Jul 10 08:09:31.457510 containerd[1541]: 2025-07-10 08:09:31.394 [INFO][5771] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a" Namespace="calico-apiserver" Pod="calico-apiserver-75494f88d7-v8v2p" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--v8v2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--v8v2p-eth0", GenerateName:"calico-apiserver-75494f88d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"73fec34e-75dc-4975-a628-f1041b5dba11", ResourceVersion:"1190", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 8, 9, 29, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75494f88d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-29a01ddc69.novalocal", ContainerID:"", Pod:"calico-apiserver-75494f88d7-v8v2p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.10/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali321970cdb01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 08:09:31.457510 containerd[1541]: 2025-07-10 08:09:31.395 [INFO][5771] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.10/32] ContainerID="8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a" Namespace="calico-apiserver" Pod="calico-apiserver-75494f88d7-v8v2p" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--v8v2p-eth0" Jul 10 08:09:31.457510 containerd[1541]: 2025-07-10 08:09:31.395 [INFO][5771] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali321970cdb01 ContainerID="8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a" Namespace="calico-apiserver" Pod="calico-apiserver-75494f88d7-v8v2p" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--v8v2p-eth0" Jul 10 08:09:31.457510 containerd[1541]: 2025-07-10 08:09:31.411 [INFO][5771] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a" Namespace="calico-apiserver" Pod="calico-apiserver-75494f88d7-v8v2p" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--v8v2p-eth0" Jul 10 08:09:31.457510 containerd[1541]: 2025-07-10 08:09:31.412 [INFO][5771] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a" Namespace="calico-apiserver" Pod="calico-apiserver-75494f88d7-v8v2p" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--v8v2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--v8v2p-eth0", GenerateName:"calico-apiserver-75494f88d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"73fec34e-75dc-4975-a628-f1041b5dba11", ResourceVersion:"1190", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 8, 9, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75494f88d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4391-0-0-n-29a01ddc69.novalocal", ContainerID:"8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a", Pod:"calico-apiserver-75494f88d7-v8v2p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.10/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali321970cdb01", MAC:"42:02:de:bb:15:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 08:09:31.457771 containerd[1541]: 2025-07-10 08:09:31.451 [INFO][5771] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a" Namespace="calico-apiserver" Pod="calico-apiserver-75494f88d7-v8v2p" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--75494f88d7--v8v2p-eth0" Jul 10 08:09:31.535001 containerd[1541]: time="2025-07-10T08:09:31.533673894Z" level=info msg="connecting to shim 8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a" address="unix:///run/containerd/s/8d0df19f1a38d75f77dca37b45669766f1db1219c8ee8a9382abf02d4f2b393a" namespace=k8s.io protocol=ttrpc version=3 Jul 10 08:09:31.594200 systemd[1]: Started cri-containerd-8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a.scope - libcontainer container 8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a. Jul 10 08:09:31.622054 containerd[1541]: 2025-07-10 08:09:31.419 [INFO][5839] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Jul 10 08:09:31.622054 containerd[1541]: 2025-07-10 08:09:31.425 [INFO][5839] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" iface="eth0" netns="/var/run/netns/cni-88872e84-8377-81ba-2a4d-edd0aed2592b" Jul 10 08:09:31.622054 containerd[1541]: 2025-07-10 08:09:31.427 [INFO][5839] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" iface="eth0" netns="/var/run/netns/cni-88872e84-8377-81ba-2a4d-edd0aed2592b" Jul 10 08:09:31.622054 containerd[1541]: 2025-07-10 08:09:31.441 [INFO][5839] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" after=14.552759ms iface="eth0" netns="/var/run/netns/cni-88872e84-8377-81ba-2a4d-edd0aed2592b" Jul 10 08:09:31.622054 containerd[1541]: 2025-07-10 08:09:31.441 [INFO][5839] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Jul 10 08:09:31.622054 containerd[1541]: 2025-07-10 08:09:31.441 [INFO][5839] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Jul 10 08:09:31.622054 containerd[1541]: 2025-07-10 08:09:31.494 [INFO][5850] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" HandleID="k8s-pod-network.7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--2glfs-eth0" Jul 10 08:09:31.622054 containerd[1541]: 2025-07-10 08:09:31.494 [INFO][5850] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 08:09:31.622054 containerd[1541]: 2025-07-10 08:09:31.494 [INFO][5850] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 08:09:31.622054 containerd[1541]: 2025-07-10 08:09:31.614 [INFO][5850] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" HandleID="k8s-pod-network.7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--2glfs-eth0" Jul 10 08:09:31.622054 containerd[1541]: 2025-07-10 08:09:31.614 [INFO][5850] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" HandleID="k8s-pod-network.7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--2glfs-eth0" Jul 10 08:09:31.622054 containerd[1541]: 2025-07-10 08:09:31.617 [INFO][5850] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 08:09:31.622054 containerd[1541]: 2025-07-10 08:09:31.620 [INFO][5839] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Jul 10 08:09:31.624751 containerd[1541]: time="2025-07-10T08:09:31.622674587Z" level=info msg="TearDown network for sandbox \"7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658\" successfully" Jul 10 08:09:31.624751 containerd[1541]: time="2025-07-10T08:09:31.622757275Z" level=info msg="StopPodSandbox for \"7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658\" returns successfully" Jul 10 08:09:31.694243 kubelet[2824]: I0710 08:09:31.693644 2824 scope.go:117] "RemoveContainer" containerID="b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb" Jul 10 08:09:31.698300 kubelet[2824]: I0710 08:09:31.698215 2824 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2e114b1b-2c96-4efe-a1be-ea79fce4d83b-calico-apiserver-certs\") pod \"2e114b1b-2c96-4efe-a1be-ea79fce4d83b\" (UID: \"2e114b1b-2c96-4efe-a1be-ea79fce4d83b\") " Jul 10 08:09:31.698916 kubelet[2824]: I0710 08:09:31.698355 2824 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28qpd\" (UniqueName: \"kubernetes.io/projected/2e114b1b-2c96-4efe-a1be-ea79fce4d83b-kube-api-access-28qpd\") pod \"2e114b1b-2c96-4efe-a1be-ea79fce4d83b\" (UID: \"2e114b1b-2c96-4efe-a1be-ea79fce4d83b\") " Jul 10 08:09:31.707391 containerd[1541]: time="2025-07-10T08:09:31.707194801Z" level=info msg="RemoveContainer for \"b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb\"" Jul 10 08:09:31.712903 kubelet[2824]: I0710 08:09:31.712817 2824 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e114b1b-2c96-4efe-a1be-ea79fce4d83b-kube-api-access-28qpd" (OuterVolumeSpecName: "kube-api-access-28qpd") pod "2e114b1b-2c96-4efe-a1be-ea79fce4d83b" (UID: "2e114b1b-2c96-4efe-a1be-ea79fce4d83b"). InnerVolumeSpecName "kube-api-access-28qpd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 08:09:31.715059 kubelet[2824]: I0710 08:09:31.714786 2824 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e114b1b-2c96-4efe-a1be-ea79fce4d83b-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "2e114b1b-2c96-4efe-a1be-ea79fce4d83b" (UID: "2e114b1b-2c96-4efe-a1be-ea79fce4d83b"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 08:09:31.718382 containerd[1541]: time="2025-07-10T08:09:31.718321987Z" level=info msg="RemoveContainer for \"b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb\" returns successfully" Jul 10 08:09:31.720007 kubelet[2824]: I0710 08:09:31.719290 2824 scope.go:117] "RemoveContainer" containerID="b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb" Jul 10 08:09:31.722251 containerd[1541]: time="2025-07-10T08:09:31.722200030Z" level=error msg="ContainerStatus for \"b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb\": not found" Jul 10 08:09:31.722588 kubelet[2824]: E0710 08:09:31.722544 2824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb\": not found" containerID="b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb" Jul 10 08:09:31.722878 kubelet[2824]: I0710 08:09:31.722624 2824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb"} err="failed to get container status \"b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb\": rpc error: code = NotFound desc = an 
error occurred when try to find container \"b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb\": not found" Jul 10 08:09:31.756568 containerd[1541]: time="2025-07-10T08:09:31.756513179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75494f88d7-v8v2p,Uid:73fec34e-75dc-4975-a628-f1041b5dba11,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a\"" Jul 10 08:09:31.763517 containerd[1541]: time="2025-07-10T08:09:31.763457601Z" level=info msg="CreateContainer within sandbox \"8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 08:09:31.776412 containerd[1541]: time="2025-07-10T08:09:31.776341436Z" level=info msg="Container b3c101891313f249a8e5ea2d9656d25459aea01cf0678b94a2b2e440783ced2d: CDI devices from CRI Config.CDIDevices: []" Jul 10 08:09:31.800817 kubelet[2824]: I0710 08:09:31.800746 2824 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-28qpd\" (UniqueName: \"kubernetes.io/projected/2e114b1b-2c96-4efe-a1be-ea79fce4d83b-kube-api-access-28qpd\") on node \"ci-4391-0-0-n-29a01ddc69.novalocal\" DevicePath \"\"" Jul 10 08:09:31.800817 kubelet[2824]: I0710 08:09:31.800793 2824 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2e114b1b-2c96-4efe-a1be-ea79fce4d83b-calico-apiserver-certs\") on node \"ci-4391-0-0-n-29a01ddc69.novalocal\" DevicePath \"\"" Jul 10 08:09:31.842040 containerd[1541]: time="2025-07-10T08:09:31.841915621Z" level=info msg="CreateContainer within sandbox \"8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b3c101891313f249a8e5ea2d9656d25459aea01cf0678b94a2b2e440783ced2d\"" Jul 10 08:09:31.844047 containerd[1541]: time="2025-07-10T08:09:31.843353384Z" level=info 
msg="StartContainer for \"b3c101891313f249a8e5ea2d9656d25459aea01cf0678b94a2b2e440783ced2d\"" Jul 10 08:09:31.846917 containerd[1541]: time="2025-07-10T08:09:31.846789556Z" level=info msg="connecting to shim b3c101891313f249a8e5ea2d9656d25459aea01cf0678b94a2b2e440783ced2d" address="unix:///run/containerd/s/8d0df19f1a38d75f77dca37b45669766f1db1219c8ee8a9382abf02d4f2b393a" protocol=ttrpc version=3 Jul 10 08:09:31.895338 systemd[1]: Started cri-containerd-b3c101891313f249a8e5ea2d9656d25459aea01cf0678b94a2b2e440783ced2d.scope - libcontainer container b3c101891313f249a8e5ea2d9656d25459aea01cf0678b94a2b2e440783ced2d. Jul 10 08:09:31.930686 systemd[1]: run-netns-cni\x2d88872e84\x2d8377\x2d81ba\x2d2a4d\x2dedd0aed2592b.mount: Deactivated successfully. Jul 10 08:09:31.930823 systemd[1]: var-lib-kubelet-pods-2e114b1b\x2d2c96\x2d4efe\x2da1be\x2dea79fce4d83b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d28qpd.mount: Deactivated successfully. Jul 10 08:09:31.930933 systemd[1]: var-lib-kubelet-pods-2e114b1b\x2d2c96\x2d4efe\x2da1be\x2dea79fce4d83b-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jul 10 08:09:32.001071 systemd[1]: Removed slice kubepods-besteffort-pod2e114b1b_2c96_4efe_a1be_ea79fce4d83b.slice - libcontainer container kubepods-besteffort-pod2e114b1b_2c96_4efe_a1be_ea79fce4d83b.slice. Jul 10 08:09:32.001227 systemd[1]: kubepods-besteffort-pod2e114b1b_2c96_4efe_a1be_ea79fce4d83b.slice: Consumed 1.412s CPU time, 45.4M memory peak, 4K read from disk. 
Jul 10 08:09:32.022941 containerd[1541]: time="2025-07-10T08:09:32.022870295Z" level=info msg="StartContainer for \"b3c101891313f249a8e5ea2d9656d25459aea01cf0678b94a2b2e440783ced2d\" returns successfully" Jul 10 08:09:33.026246 systemd-networkd[1456]: cali321970cdb01: Gained IPv6LL Jul 10 08:09:33.434032 kubelet[2824]: I0710 08:09:33.433710 2824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e114b1b-2c96-4efe-a1be-ea79fce4d83b" path="/var/lib/kubelet/pods/2e114b1b-2c96-4efe-a1be-ea79fce4d83b/volumes" Jul 10 08:09:34.722880 kubelet[2824]: I0710 08:09:34.722808 2824 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 08:09:34.952986 kubelet[2824]: I0710 08:09:34.952745 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-75494f88d7-v8v2p" podStartSLOduration=5.952695589 podStartE2EDuration="5.952695589s" podCreationTimestamp="2025-07-10 08:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 08:09:32.767120183 +0000 UTC m=+127.618405597" watchObservedRunningTime="2025-07-10 08:09:34.952695589 +0000 UTC m=+129.803981013" Jul 10 08:09:35.102371 containerd[1541]: time="2025-07-10T08:09:35.102116879Z" level=info msg="StopContainer for \"15be60bf3c3642c52feac0de2d763952dce2d60aff1ecac2354fb5e7cb534e24\" with timeout 30 (s)" Jul 10 08:09:35.107970 containerd[1541]: time="2025-07-10T08:09:35.107618284Z" level=info msg="Stop container \"15be60bf3c3642c52feac0de2d763952dce2d60aff1ecac2354fb5e7cb534e24\" with signal terminated" Jul 10 08:09:35.323137 systemd[1]: cri-containerd-15be60bf3c3642c52feac0de2d763952dce2d60aff1ecac2354fb5e7cb534e24.scope: Deactivated successfully. Jul 10 08:09:35.323738 systemd[1]: cri-containerd-15be60bf3c3642c52feac0de2d763952dce2d60aff1ecac2354fb5e7cb534e24.scope: Consumed 1.121s CPU time, 42.2M memory peak. 
Jul 10 08:09:35.332208 containerd[1541]: time="2025-07-10T08:09:35.331896135Z" level=info msg="received exit event container_id:\"15be60bf3c3642c52feac0de2d763952dce2d60aff1ecac2354fb5e7cb534e24\" id:\"15be60bf3c3642c52feac0de2d763952dce2d60aff1ecac2354fb5e7cb534e24\" pid:4986 exit_status:1 exited_at:{seconds:1752134975 nanos:330248923}" Jul 10 08:09:35.333531 containerd[1541]: time="2025-07-10T08:09:35.333401941Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15be60bf3c3642c52feac0de2d763952dce2d60aff1ecac2354fb5e7cb534e24\" id:\"15be60bf3c3642c52feac0de2d763952dce2d60aff1ecac2354fb5e7cb534e24\" pid:4986 exit_status:1 exited_at:{seconds:1752134975 nanos:330248923}" Jul 10 08:09:35.410164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15be60bf3c3642c52feac0de2d763952dce2d60aff1ecac2354fb5e7cb534e24-rootfs.mount: Deactivated successfully. Jul 10 08:09:35.566643 containerd[1541]: time="2025-07-10T08:09:35.566571636Z" level=info msg="StopContainer for \"15be60bf3c3642c52feac0de2d763952dce2d60aff1ecac2354fb5e7cb534e24\" returns successfully" Jul 10 08:09:35.568204 containerd[1541]: time="2025-07-10T08:09:35.568071078Z" level=info msg="StopPodSandbox for \"1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff\"" Jul 10 08:09:35.568466 containerd[1541]: time="2025-07-10T08:09:35.568425772Z" level=info msg="Container to stop \"15be60bf3c3642c52feac0de2d763952dce2d60aff1ecac2354fb5e7cb534e24\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 08:09:35.635480 systemd[1]: cri-containerd-1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff.scope: Deactivated successfully. 
Jul 10 08:09:35.642184 containerd[1541]: time="2025-07-10T08:09:35.642018420Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff\" id:\"1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff\" pid:4586 exit_status:137 exited_at:{seconds:1752134975 nanos:637929212}" Jul 10 08:09:35.731565 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff-rootfs.mount: Deactivated successfully. Jul 10 08:09:35.737461 containerd[1541]: time="2025-07-10T08:09:35.737174233Z" level=info msg="shim disconnected" id=1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff namespace=k8s.io Jul 10 08:09:35.737461 containerd[1541]: time="2025-07-10T08:09:35.737230591Z" level=warning msg="cleaning up after shim disconnected" id=1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff namespace=k8s.io Jul 10 08:09:35.738100 containerd[1541]: time="2025-07-10T08:09:35.737245539Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 08:09:35.743288 containerd[1541]: time="2025-07-10T08:09:35.743237146Z" level=info msg="received exit event sandbox_id:\"1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff\" exit_status:137 exited_at:{seconds:1752134975 nanos:637929212}" Jul 10 08:09:35.753585 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff-shm.mount: Deactivated successfully. Jul 10 08:09:35.842857 systemd[1]: Started sshd@9-172.24.4.5:22-172.24.4.1:56950.service - OpenSSH per-connection server daemon (172.24.4.1:56950). 
Jul 10 08:09:36.081823 systemd-networkd[1456]: cali63c541a1854: Link DOWN Jul 10 08:09:36.081835 systemd-networkd[1456]: cali63c541a1854: Lost carrier Jul 10 08:09:36.573502 containerd[1541]: 2025-07-10 08:09:36.076 [INFO][6031] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Jul 10 08:09:36.573502 containerd[1541]: 2025-07-10 08:09:36.077 [INFO][6031] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" iface="eth0" netns="/var/run/netns/cni-b3eaa407-d318-618b-5101-5709e5193664" Jul 10 08:09:36.573502 containerd[1541]: 2025-07-10 08:09:36.078 [INFO][6031] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" iface="eth0" netns="/var/run/netns/cni-b3eaa407-d318-618b-5101-5709e5193664" Jul 10 08:09:36.573502 containerd[1541]: 2025-07-10 08:09:36.105 [INFO][6031] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" after=27.981498ms iface="eth0" netns="/var/run/netns/cni-b3eaa407-d318-618b-5101-5709e5193664" Jul 10 08:09:36.573502 containerd[1541]: 2025-07-10 08:09:36.106 [INFO][6031] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Jul 10 08:09:36.573502 containerd[1541]: 2025-07-10 08:09:36.106 [INFO][6031] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Jul 10 08:09:36.573502 containerd[1541]: 2025-07-10 08:09:36.375 [INFO][6066] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" HandleID="k8s-pod-network.1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--6lgxs-eth0" Jul 10 08:09:36.573502 containerd[1541]: 2025-07-10 08:09:36.376 [INFO][6066] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 08:09:36.573502 containerd[1541]: 2025-07-10 08:09:36.376 [INFO][6066] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 08:09:36.573502 containerd[1541]: 2025-07-10 08:09:36.563 [INFO][6066] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" HandleID="k8s-pod-network.1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--6lgxs-eth0" Jul 10 08:09:36.573502 containerd[1541]: 2025-07-10 08:09:36.564 [INFO][6066] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" HandleID="k8s-pod-network.1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--6lgxs-eth0" Jul 10 08:09:36.573502 containerd[1541]: 2025-07-10 08:09:36.566 [INFO][6066] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 08:09:36.573502 containerd[1541]: 2025-07-10 08:09:36.570 [INFO][6031] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Jul 10 08:09:36.582735 containerd[1541]: time="2025-07-10T08:09:36.574213330Z" level=info msg="TearDown network for sandbox \"1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff\" successfully" Jul 10 08:09:36.582735 containerd[1541]: time="2025-07-10T08:09:36.574256091Z" level=info msg="StopPodSandbox for \"1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff\" returns successfully" Jul 10 08:09:36.590637 systemd[1]: run-netns-cni\x2db3eaa407\x2dd318\x2d618b\x2d5101\x2d5709e5193664.mount: Deactivated successfully. 
Jul 10 08:09:36.647814 containerd[1541]: time="2025-07-10T08:09:36.647745046Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88f1ac5ff1454222c08bd22caa0f4fc8adf44e93c2488aa119f523dcbab21310\" id:\"a12904f4403527534d9fb9694e0f987950ff4818069c9467acd7b84445f09f2a\" pid:6055 exited_at:{seconds:1752134976 nanos:647314467}" Jul 10 08:09:36.756771 kubelet[2824]: I0710 08:09:36.756704 2824 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7cfbfed2-71d2-4845-87fb-586f7e82aee0-calico-apiserver-certs\") pod \"7cfbfed2-71d2-4845-87fb-586f7e82aee0\" (UID: \"7cfbfed2-71d2-4845-87fb-586f7e82aee0\") " Jul 10 08:09:36.758796 kubelet[2824]: I0710 08:09:36.756797 2824 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtkwc\" (UniqueName: \"kubernetes.io/projected/7cfbfed2-71d2-4845-87fb-586f7e82aee0-kube-api-access-xtkwc\") pod \"7cfbfed2-71d2-4845-87fb-586f7e82aee0\" (UID: \"7cfbfed2-71d2-4845-87fb-586f7e82aee0\") " Jul 10 08:09:36.768926 systemd[1]: var-lib-kubelet-pods-7cfbfed2\x2d71d2\x2d4845\x2d87fb\x2d586f7e82aee0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxtkwc.mount: Deactivated successfully. Jul 10 08:09:36.769124 systemd[1]: var-lib-kubelet-pods-7cfbfed2\x2d71d2\x2d4845\x2d87fb\x2d586f7e82aee0-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jul 10 08:09:36.770005 kubelet[2824]: I0710 08:09:36.769894 2824 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cfbfed2-71d2-4845-87fb-586f7e82aee0-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "7cfbfed2-71d2-4845-87fb-586f7e82aee0" (UID: "7cfbfed2-71d2-4845-87fb-586f7e82aee0"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 08:09:36.771091 kubelet[2824]: I0710 08:09:36.770092 2824 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cfbfed2-71d2-4845-87fb-586f7e82aee0-kube-api-access-xtkwc" (OuterVolumeSpecName: "kube-api-access-xtkwc") pod "7cfbfed2-71d2-4845-87fb-586f7e82aee0" (UID: "7cfbfed2-71d2-4845-87fb-586f7e82aee0"). InnerVolumeSpecName "kube-api-access-xtkwc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 08:09:36.786761 kubelet[2824]: I0710 08:09:36.786708 2824 scope.go:117] "RemoveContainer" containerID="15be60bf3c3642c52feac0de2d763952dce2d60aff1ecac2354fb5e7cb534e24" Jul 10 08:09:36.796071 containerd[1541]: time="2025-07-10T08:09:36.796001959Z" level=info msg="RemoveContainer for \"15be60bf3c3642c52feac0de2d763952dce2d60aff1ecac2354fb5e7cb534e24\"" Jul 10 08:09:36.808324 systemd[1]: Removed slice kubepods-besteffort-pod7cfbfed2_71d2_4845_87fb_586f7e82aee0.slice - libcontainer container kubepods-besteffort-pod7cfbfed2_71d2_4845_87fb_586f7e82aee0.slice. Jul 10 08:09:36.808445 systemd[1]: kubepods-besteffort-pod7cfbfed2_71d2_4845_87fb_586f7e82aee0.slice: Consumed 1.160s CPU time, 42.5M memory peak. 
Jul 10 08:09:36.858344 kubelet[2824]: I0710 08:09:36.858164 2824 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7cfbfed2-71d2-4845-87fb-586f7e82aee0-calico-apiserver-certs\") on node \"ci-4391-0-0-n-29a01ddc69.novalocal\" DevicePath \"\"" Jul 10 08:09:36.858344 kubelet[2824]: I0710 08:09:36.858202 2824 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xtkwc\" (UniqueName: \"kubernetes.io/projected/7cfbfed2-71d2-4845-87fb-586f7e82aee0-kube-api-access-xtkwc\") on node \"ci-4391-0-0-n-29a01ddc69.novalocal\" DevicePath \"\"" Jul 10 08:09:36.877881 containerd[1541]: time="2025-07-10T08:09:36.877799850Z" level=info msg="RemoveContainer for \"15be60bf3c3642c52feac0de2d763952dce2d60aff1ecac2354fb5e7cb534e24\" returns successfully" Jul 10 08:09:37.032031 sshd[6032]: Accepted publickey for core from 172.24.4.1 port 56950 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM Jul 10 08:09:37.045341 sshd-session[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 08:09:37.079867 systemd-logind[1499]: New session 12 of user core. Jul 10 08:09:37.089406 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 10 08:09:37.421503 kubelet[2824]: I0710 08:09:37.421409 2824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cfbfed2-71d2-4845-87fb-586f7e82aee0" path="/var/lib/kubelet/pods/7cfbfed2-71d2-4845-87fb-586f7e82aee0/volumes" Jul 10 08:09:37.880885 sshd[6089]: Connection closed by 172.24.4.1 port 56950 Jul 10 08:09:37.884042 sshd-session[6032]: pam_unix(sshd:session): session closed for user core Jul 10 08:09:37.896692 systemd[1]: sshd@9-172.24.4.5:22-172.24.4.1:56950.service: Deactivated successfully. Jul 10 08:09:37.911804 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 08:09:37.915554 systemd-logind[1499]: Session 12 logged out. Waiting for processes to exit. 
Jul 10 08:09:37.924448 systemd-logind[1499]: Removed session 12. Jul 10 08:09:42.908093 systemd[1]: Started sshd@10-172.24.4.5:22-172.24.4.1:56956.service - OpenSSH per-connection server daemon (172.24.4.1:56956). Jul 10 08:09:44.074990 sshd[6104]: Accepted publickey for core from 172.24.4.1 port 56956 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM Jul 10 08:09:44.078505 sshd-session[6104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 08:09:44.099132 systemd-logind[1499]: New session 13 of user core. Jul 10 08:09:44.107347 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 10 08:09:45.031241 sshd[6113]: Connection closed by 172.24.4.1 port 56956 Jul 10 08:09:45.026240 sshd-session[6104]: pam_unix(sshd:session): session closed for user core Jul 10 08:09:45.037666 systemd[1]: sshd@10-172.24.4.5:22-172.24.4.1:56956.service: Deactivated successfully. Jul 10 08:09:45.039629 systemd-logind[1499]: Session 13 logged out. Waiting for processes to exit. Jul 10 08:09:45.052520 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 08:09:45.070560 systemd-logind[1499]: Removed session 13. Jul 10 08:09:45.780594 containerd[1541]: time="2025-07-10T08:09:45.780467789Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8\" id:\"197f206929234c5905c936bcdbc6cb8a6a7912ef3fc1debf0c758f008cf6329c\" pid:6138 exited_at:{seconds:1752134985 nanos:779806972}" Jul 10 08:09:50.057286 systemd[1]: Started sshd@11-172.24.4.5:22-172.24.4.1:60900.service - OpenSSH per-connection server daemon (172.24.4.1:60900). 
Jul 10 08:09:50.716686 containerd[1541]: time="2025-07-10T08:09:50.716435579Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6\" id:\"409a23a669adad070c0a55d224660712bc9c8298a1bfbae48ac56c3202a96f83\" pid:6168 exited_at:{seconds:1752134990 nanos:715276802}" Jul 10 08:10:01.475014 sshd[6152]: Accepted publickey for core from 172.24.4.1 port 60900 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM Jul 10 08:10:01.478730 sshd-session[6152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 08:10:01.534557 systemd-logind[1499]: New session 14 of user core. Jul 10 08:10:01.543417 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 10 08:10:02.213507 sshd[6202]: Connection closed by 172.24.4.1 port 60900 Jul 10 08:10:02.213332 sshd-session[6152]: pam_unix(sshd:session): session closed for user core Jul 10 08:10:02.223139 systemd[1]: sshd@11-172.24.4.5:22-172.24.4.1:60900.service: Deactivated successfully. Jul 10 08:10:02.231917 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 08:10:02.235617 systemd-logind[1499]: Session 14 logged out. Waiting for processes to exit. Jul 10 08:10:02.240499 systemd-logind[1499]: Removed session 14. Jul 10 08:10:05.974713 containerd[1541]: time="2025-07-10T08:10:05.974574274Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88f1ac5ff1454222c08bd22caa0f4fc8adf44e93c2488aa119f523dcbab21310\" id:\"93a26772e8f228039ea27c533b58ee30370b5b1eea71bf0ff44439202647c432\" pid:6227 exited_at:{seconds:1752135005 nanos:972764565}" Jul 10 08:10:10.673800 systemd[1]: Started sshd@12-172.24.4.5:22-172.24.4.1:58752.service - OpenSSH per-connection server daemon (172.24.4.1:58752). Jul 10 08:10:13.497507 systemd[1]: cri-containerd-ad5cb69d9775d7d029ab351caa5fa9577ced1890bf3c799a26d74e3e096edaec.scope: Deactivated successfully. 
Jul 10 08:10:13.501919 systemd[1]: cri-containerd-ad5cb69d9775d7d029ab351caa5fa9577ced1890bf3c799a26d74e3e096edaec.scope: Consumed 2.984s CPU time, 60M memory peak, 4.2M read from disk. Jul 10 08:10:13.723697 containerd[1541]: time="2025-07-10T08:10:13.521277158Z" level=info msg="received exit event container_id:\"ad5cb69d9775d7d029ab351caa5fa9577ced1890bf3c799a26d74e3e096edaec\" id:\"ad5cb69d9775d7d029ab351caa5fa9577ced1890bf3c799a26d74e3e096edaec\" pid:5487 exit_status:1 exited_at:{seconds:1752135013 nanos:517262304}" Jul 10 08:10:13.723697 containerd[1541]: time="2025-07-10T08:10:13.523129209Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad5cb69d9775d7d029ab351caa5fa9577ced1890bf3c799a26d74e3e096edaec\" id:\"ad5cb69d9775d7d029ab351caa5fa9577ced1890bf3c799a26d74e3e096edaec\" pid:5487 exit_status:1 exited_at:{seconds:1752135013 nanos:517262304}" Jul 10 08:10:13.723697 containerd[1541]: time="2025-07-10T08:10:13.586824511Z" level=info msg="received exit event container_id:\"e08bb40de573b59c26d3974ec83a0d676c8a3918a5105f2de2a4a27a6cdd3aa2\" id:\"e08bb40de573b59c26d3974ec83a0d676c8a3918a5105f2de2a4a27a6cdd3aa2\" pid:5474 exit_status:1 exited_at:{seconds:1752135013 nanos:585683597}" Jul 10 08:10:13.723697 containerd[1541]: time="2025-07-10T08:10:13.587277816Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e08bb40de573b59c26d3974ec83a0d676c8a3918a5105f2de2a4a27a6cdd3aa2\" id:\"e08bb40de573b59c26d3974ec83a0d676c8a3918a5105f2de2a4a27a6cdd3aa2\" pid:5474 exit_status:1 exited_at:{seconds:1752135013 nanos:585683597}" Jul 10 08:10:13.580623 systemd[1]: cri-containerd-e08bb40de573b59c26d3974ec83a0d676c8a3918a5105f2de2a4a27a6cdd3aa2.scope: Deactivated successfully. Jul 10 08:10:13.581223 systemd[1]: cri-containerd-e08bb40de573b59c26d3974ec83a0d676c8a3918a5105f2de2a4a27a6cdd3aa2.scope: Consumed 2.816s CPU time, 82.1M memory peak, 5.1M read from disk. 
Jul 10 08:10:13.766201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e08bb40de573b59c26d3974ec83a0d676c8a3918a5105f2de2a4a27a6cdd3aa2-rootfs.mount: Deactivated successfully. Jul 10 08:10:13.807096 systemd[1]: cri-containerd-29c543e6ef704dc3059d37bbcba5e42ebc60513aa5925aa4f969353801a9bf7c.scope: Deactivated successfully. Jul 10 08:10:13.807570 systemd[1]: cri-containerd-29c543e6ef704dc3059d37bbcba5e42ebc60513aa5925aa4f969353801a9bf7c.scope: Consumed 2.011s CPU time, 20.2M memory peak, 1.3M read from disk. Jul 10 08:10:13.818935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad5cb69d9775d7d029ab351caa5fa9577ced1890bf3c799a26d74e3e096edaec-rootfs.mount: Deactivated successfully. Jul 10 08:10:13.820825 containerd[1541]: time="2025-07-10T08:10:13.819652623Z" level=info msg="received exit event container_id:\"29c543e6ef704dc3059d37bbcba5e42ebc60513aa5925aa4f969353801a9bf7c\" id:\"29c543e6ef704dc3059d37bbcba5e42ebc60513aa5925aa4f969353801a9bf7c\" pid:5459 exit_status:1 exited_at:{seconds:1752135013 nanos:815966806}" Jul 10 08:10:13.821979 containerd[1541]: time="2025-07-10T08:10:13.820002861Z" level=info msg="TaskExit event in podsandbox handler container_id:\"29c543e6ef704dc3059d37bbcba5e42ebc60513aa5925aa4f969353801a9bf7c\" id:\"29c543e6ef704dc3059d37bbcba5e42ebc60513aa5925aa4f969353801a9bf7c\" pid:5459 exit_status:1 exited_at:{seconds:1752135013 nanos:815966806}" Jul 10 08:10:13.863430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29c543e6ef704dc3059d37bbcba5e42ebc60513aa5925aa4f969353801a9bf7c-rootfs.mount: Deactivated successfully. 
Jul 10 08:10:14.447257 containerd[1541]: time="2025-07-10T08:10:14.447148448Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6\" id:\"89b9720fa9fb891b7a5a07845c221bfcabf948ee2f96ce9b179d5eee911907a6\" pid:6294 exited_at:{seconds:1752135014 nanos:445717339}" Jul 10 08:10:15.150375 kubelet[2824]: I0710 08:10:15.150314 2824 scope.go:117] "RemoveContainer" containerID="898485fa8ca3aad154e7c61a92cbbee54884545bcd87d8bf1cb66cb3790f6c31" Jul 10 08:10:15.156507 kubelet[2824]: I0710 08:10:15.156439 2824 scope.go:117] "RemoveContainer" containerID="29c543e6ef704dc3059d37bbcba5e42ebc60513aa5925aa4f969353801a9bf7c" Jul 10 08:10:15.157768 kubelet[2824]: E0710 08:10:15.157723 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal_kube-system(8e6a146caca41331ef6aa6523967fb66)\"" pod="kube-system/kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal" podUID="8e6a146caca41331ef6aa6523967fb66" Jul 10 08:10:15.161049 containerd[1541]: time="2025-07-10T08:10:15.160934331Z" level=info msg="RemoveContainer for \"898485fa8ca3aad154e7c61a92cbbee54884545bcd87d8bf1cb66cb3790f6c31\"" Jul 10 08:10:15.164020 kubelet[2824]: I0710 08:10:15.163517 2824 scope.go:117] "RemoveContainer" containerID="ad5cb69d9775d7d029ab351caa5fa9577ced1890bf3c799a26d74e3e096edaec" Jul 10 08:10:15.164020 kubelet[2824]: E0710 08:10:15.163711 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal_kube-system(38962031e0206f3ff0de22fa27483fe0)\"" pod="kube-system/kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal" 
podUID="38962031e0206f3ff0de22fa27483fe0" Jul 10 08:10:15.178090 kubelet[2824]: I0710 08:10:15.177487 2824 scope.go:117] "RemoveContainer" containerID="e08bb40de573b59c26d3974ec83a0d676c8a3918a5105f2de2a4a27a6cdd3aa2" Jul 10 08:10:15.178090 kubelet[2824]: E0710 08:10:15.177690 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=tigera-operator pod=tigera-operator-747864d56d-wxpk8_tigera-operator(4732e9a2-026f-4c58-a99c-7c0b52405800)\"" pod="tigera-operator/tigera-operator-747864d56d-wxpk8" podUID="4732e9a2-026f-4c58-a99c-7c0b52405800" Jul 10 08:10:15.220824 sshd[6239]: Accepted publickey for core from 172.24.4.1 port 58752 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM Jul 10 08:10:15.221765 sshd-session[6239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 08:10:15.239074 systemd-logind[1499]: New session 15 of user core. Jul 10 08:10:15.247437 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jul 10 08:10:15.266063 containerd[1541]: time="2025-07-10T08:10:15.264393297Z" level=info msg="RemoveContainer for \"898485fa8ca3aad154e7c61a92cbbee54884545bcd87d8bf1cb66cb3790f6c31\" returns successfully" Jul 10 08:10:15.268413 kubelet[2824]: I0710 08:10:15.267727 2824 scope.go:117] "RemoveContainer" containerID="06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c" Jul 10 08:10:15.284413 containerd[1541]: time="2025-07-10T08:10:15.284187523Z" level=info msg="RemoveContainer for \"06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c\"" Jul 10 08:10:15.360023 containerd[1541]: time="2025-07-10T08:10:15.359875716Z" level=info msg="RemoveContainer for \"06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c\" returns successfully" Jul 10 08:10:15.360860 kubelet[2824]: I0710 08:10:15.360696 2824 scope.go:117] "RemoveContainer" containerID="493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36" Jul 10 08:10:15.367497 containerd[1541]: time="2025-07-10T08:10:15.367414578Z" level=info msg="RemoveContainer for \"493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36\"" Jul 10 08:10:15.754626 containerd[1541]: time="2025-07-10T08:10:15.754573023Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8\" id:\"1a2f4e34c35b50941f984387030203cd03471817e396b73d1ed64153912e299d\" pid:6317 exited_at:{seconds:1752135015 nanos:754114329}" Jul 10 08:10:16.189004 kubelet[2824]: I0710 08:10:16.188738 2824 scope.go:117] "RemoveContainer" containerID="29c543e6ef704dc3059d37bbcba5e42ebc60513aa5925aa4f969353801a9bf7c" Jul 10 08:10:16.191722 kubelet[2824]: E0710 08:10:16.189752 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler 
pod=kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal_kube-system(8e6a146caca41331ef6aa6523967fb66)\"" pod="kube-system/kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal" podUID="8e6a146caca41331ef6aa6523967fb66" Jul 10 08:10:16.825131 containerd[1541]: time="2025-07-10T08:10:16.825014254Z" level=info msg="RemoveContainer for \"493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36\" returns successfully" Jul 10 08:10:19.665757 kubelet[2824]: I0710 08:10:19.665134 2824 scope.go:117] "RemoveContainer" containerID="ad5cb69d9775d7d029ab351caa5fa9577ced1890bf3c799a26d74e3e096edaec" Jul 10 08:10:19.665757 kubelet[2824]: E0710 08:10:19.665645 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal_kube-system(38962031e0206f3ff0de22fa27483fe0)\"" pod="kube-system/kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal" podUID="38962031e0206f3ff0de22fa27483fe0" Jul 10 08:10:20.552671 kubelet[2824]: I0710 08:10:20.552567 2824 scope.go:117] "RemoveContainer" containerID="29c543e6ef704dc3059d37bbcba5e42ebc60513aa5925aa4f969353801a9bf7c" Jul 10 08:10:27.395583 kubelet[2824]: E0710 08:10:20.552914 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal_kube-system(8e6a146caca41331ef6aa6523967fb66)\"" pod="kube-system/kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal" podUID="8e6a146caca41331ef6aa6523967fb66" Jul 10 08:10:27.396357 containerd[1541]: time="2025-07-10T08:10:20.688251847Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6\" 
id:\"62a48689120c81d5b227bbd9efff9124b1e57f2dd4e18185baaa66258176774f\" pid:6343 exited_at:{seconds:1752135020 nanos:687395936}" Jul 10 08:10:27.403601 kubelet[2824]: I0710 08:10:22.433678 2824 status_manager.go:914] "Failed to update status for pod" pod="tigera-operator/tigera-operator-747864d56d-wxpk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4732e9a2-026f-4c58-a99c-7c0b52405800\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-07-10T08:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: [tigera-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-07-10T08:10:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[tigera-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"containerd://e08bb40de573b59c26d3974ec83a0d676c8a3918a5105f2de2a4a27a6cdd3aa2\\\",\\\"image\\\":\\\"quay.io/tigera/operator:v1.38.3\\\",\\\"imageID\\\":\\\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"containerd://493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-07-10T08:08:48Z\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-07-10T08:07:38Z\\\"}},\\\"name\\\":\\\"tigera-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"containerd://e08bb40de573b59c26d3974ec83a0d676c8a3918a5105f2de2a4a27a6cdd3aa2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-07-10T08:10:13Z\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-07-10T08:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/calico\\\",\\\"name\\\":\\\"var-lib-calico\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhqrd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"tigera-operator\"/\"tigera-operator-747864d56d-wxpk8\": etcdserver: request timed out" Jul 10 08:10:27.439997 kubelet[2824]: I0710 08:10:27.438255 2824 scope.go:117] "RemoveContainer" containerID="e08bb40de573b59c26d3974ec83a0d676c8a3918a5105f2de2a4a27a6cdd3aa2" Jul 10 08:10:27.439997 kubelet[2824]: E0710 08:10:27.438573 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed 
container=tigera-operator pod=tigera-operator-747864d56d-wxpk8_tigera-operator(4732e9a2-026f-4c58-a99c-7c0b52405800)\"" pod="tigera-operator/tigera-operator-747864d56d-wxpk8" podUID="4732e9a2-026f-4c58-a99c-7c0b52405800" Jul 10 08:10:27.486928 containerd[1541]: time="2025-07-10T08:10:27.486861296Z" level=info msg="StopPodSandbox for \"1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff\"" Jul 10 08:10:29.714417 systemd[1]: Started sshd@13-172.24.4.5:22-172.24.4.1:37226.service - OpenSSH per-connection server daemon (172.24.4.1:37226). Jul 10 08:10:31.104274 containerd[1541]: time="2025-07-10T08:10:31.104161340Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8\" id:\"dab1ba6f55a739c2dcce261cd71fc074ccc3de1eed0564b997e684c86e7a097e\" pid:6395 exited_at:{seconds:1752135031 nanos:103339292}" Jul 10 08:10:31.411251 kubelet[2824]: I0710 08:10:31.411148 2824 scope.go:117] "RemoveContainer" containerID="ad5cb69d9775d7d029ab351caa5fa9577ced1890bf3c799a26d74e3e096edaec" Jul 10 08:10:31.416884 kubelet[2824]: I0710 08:10:31.416824 2824 scope.go:117] "RemoveContainer" containerID="29c543e6ef704dc3059d37bbcba5e42ebc60513aa5925aa4f969353801a9bf7c" Jul 10 08:10:33.654982 sshd[6304]: Connection closed by 172.24.4.1 port 58752 Jul 10 08:10:33.768759 sshd-session[6239]: pam_unix(sshd:session): session closed for user core Jul 10 08:10:33.774384 containerd[1541]: time="2025-07-10T08:10:33.774017573Z" level=info msg="CreateContainer within sandbox \"b3f94042d4bdb0254aafe8abfac01c5b5c963cbb44a5244334eb9404284dd8a2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}" Jul 10 08:10:33.778829 containerd[1541]: time="2025-07-10T08:10:33.778663845Z" level=info msg="CreateContainer within sandbox \"f42b79f77704a99f978aaf4a6f08c28ca92ac2d532a0ba009eb686dcd899def2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}" Jul 10 08:10:33.787681 systemd[1]: 
sshd@12-172.24.4.5:22-172.24.4.1:58752.service: Deactivated successfully. Jul 10 08:10:33.797695 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 08:10:33.800845 systemd-logind[1499]: Session 15 logged out. Waiting for processes to exit. Jul 10 08:10:33.805539 systemd-logind[1499]: Removed session 15. Jul 10 08:10:34.194347 containerd[1541]: time="2025-07-10T08:10:34.194278618Z" level=info msg="Container acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5: CDI devices from CRI Config.CDIDevices: []" Jul 10 08:10:34.226826 containerd[1541]: time="2025-07-10T08:10:34.226765554Z" level=info msg="Container 677459c9c574d506cf08b13c9cf0a9273a4b94b9ec7764339e8b2f0beb6c387c: CDI devices from CRI Config.CDIDevices: []" Jul 10 08:10:34.249338 containerd[1541]: 2025-07-10 08:10:33.975 [WARNING][6371] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--6lgxs-eth0" Jul 10 08:10:34.249338 containerd[1541]: 2025-07-10 08:10:33.976 [INFO][6371] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Jul 10 08:10:34.249338 containerd[1541]: 2025-07-10 08:10:33.976 [INFO][6371] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" iface="eth0" netns="" Jul 10 08:10:34.249338 containerd[1541]: 2025-07-10 08:10:33.976 [INFO][6371] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Jul 10 08:10:34.249338 containerd[1541]: 2025-07-10 08:10:33.976 [INFO][6371] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Jul 10 08:10:34.249338 containerd[1541]: 2025-07-10 08:10:34.057 [INFO][6411] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" HandleID="k8s-pod-network.1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--6lgxs-eth0" Jul 10 08:10:34.249338 containerd[1541]: 2025-07-10 08:10:34.057 [INFO][6411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 08:10:34.249338 containerd[1541]: 2025-07-10 08:10:34.057 [INFO][6411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 08:10:34.249338 containerd[1541]: 2025-07-10 08:10:34.202 [WARNING][6411] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" HandleID="k8s-pod-network.1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--6lgxs-eth0" Jul 10 08:10:34.249338 containerd[1541]: 2025-07-10 08:10:34.202 [INFO][6411] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" HandleID="k8s-pod-network.1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--6lgxs-eth0" Jul 10 08:10:34.249338 containerd[1541]: 2025-07-10 08:10:34.231 [INFO][6411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 08:10:34.249338 containerd[1541]: 2025-07-10 08:10:34.244 [INFO][6371] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Jul 10 08:10:34.249938 containerd[1541]: time="2025-07-10T08:10:34.249459920Z" level=info msg="TearDown network for sandbox \"1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff\" successfully" Jul 10 08:10:34.249938 containerd[1541]: time="2025-07-10T08:10:34.249687065Z" level=info msg="StopPodSandbox for \"1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff\" returns successfully" Jul 10 08:10:34.252044 containerd[1541]: time="2025-07-10T08:10:34.251992685Z" level=info msg="RemovePodSandbox for \"1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff\"" Jul 10 08:10:34.252174 containerd[1541]: time="2025-07-10T08:10:34.252083190Z" level=info msg="Forcibly stopping sandbox \"1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff\"" Jul 10 08:10:34.390506 containerd[1541]: time="2025-07-10T08:10:34.390414021Z" level=info msg="CreateContainer within sandbox 
\"b3f94042d4bdb0254aafe8abfac01c5b5c963cbb44a5244334eb9404284dd8a2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5\"" Jul 10 08:10:34.398696 containerd[1541]: time="2025-07-10T08:10:34.398629451Z" level=info msg="StartContainer for \"acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5\"" Jul 10 08:10:34.403788 containerd[1541]: time="2025-07-10T08:10:34.403643505Z" level=info msg="connecting to shim acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5" address="unix:///run/containerd/s/30c72d7507561355487f6ee5d36c7fe4d7d1edc1dc1abfe41203881c95e15e70" protocol=ttrpc version=3 Jul 10 08:10:34.440725 containerd[1541]: time="2025-07-10T08:10:34.440610952Z" level=info msg="CreateContainer within sandbox \"f42b79f77704a99f978aaf4a6f08c28ca92ac2d532a0ba009eb686dcd899def2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"677459c9c574d506cf08b13c9cf0a9273a4b94b9ec7764339e8b2f0beb6c387c\"" Jul 10 08:10:34.441677 containerd[1541]: time="2025-07-10T08:10:34.441623210Z" level=info msg="StartContainer for \"677459c9c574d506cf08b13c9cf0a9273a4b94b9ec7764339e8b2f0beb6c387c\"" Jul 10 08:10:34.443741 containerd[1541]: time="2025-07-10T08:10:34.443666721Z" level=info msg="connecting to shim 677459c9c574d506cf08b13c9cf0a9273a4b94b9ec7764339e8b2f0beb6c387c" address="unix:///run/containerd/s/43474bc45c8b9396187ac29754ef5b498c52f78fc73669c2a63aa40d005548c4" protocol=ttrpc version=3 Jul 10 08:10:34.468374 systemd[1]: Started cri-containerd-acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5.scope - libcontainer container acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5. Jul 10 08:10:34.490290 systemd[1]: Started cri-containerd-677459c9c574d506cf08b13c9cf0a9273a4b94b9ec7764339e8b2f0beb6c387c.scope - libcontainer container 677459c9c574d506cf08b13c9cf0a9273a4b94b9ec7764339e8b2f0beb6c387c. 
Jul 10 08:10:34.709869 containerd[1541]: 2025-07-10 08:10:34.422 [WARNING][6425] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--6lgxs-eth0" Jul 10 08:10:34.709869 containerd[1541]: 2025-07-10 08:10:34.423 [INFO][6425] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Jul 10 08:10:34.709869 containerd[1541]: 2025-07-10 08:10:34.423 [INFO][6425] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" iface="eth0" netns="" Jul 10 08:10:34.709869 containerd[1541]: 2025-07-10 08:10:34.423 [INFO][6425] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Jul 10 08:10:34.709869 containerd[1541]: 2025-07-10 08:10:34.423 [INFO][6425] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Jul 10 08:10:34.709869 containerd[1541]: 2025-07-10 08:10:34.570 [INFO][6438] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" HandleID="k8s-pod-network.1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--6lgxs-eth0" Jul 10 08:10:34.709869 containerd[1541]: 2025-07-10 08:10:34.570 [INFO][6438] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 08:10:34.709869 containerd[1541]: 2025-07-10 08:10:34.570 [INFO][6438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 08:10:34.709869 containerd[1541]: 2025-07-10 08:10:34.607 [WARNING][6438] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" HandleID="k8s-pod-network.1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--6lgxs-eth0" Jul 10 08:10:34.709869 containerd[1541]: 2025-07-10 08:10:34.639 [INFO][6438] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" HandleID="k8s-pod-network.1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--6lgxs-eth0" Jul 10 08:10:34.709869 containerd[1541]: 2025-07-10 08:10:34.702 [INFO][6438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 08:10:34.709869 containerd[1541]: 2025-07-10 08:10:34.707 [INFO][6425] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff" Jul 10 08:10:34.711069 containerd[1541]: time="2025-07-10T08:10:34.709939464Z" level=info msg="TearDown network for sandbox \"1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff\" successfully" Jul 10 08:10:34.715149 containerd[1541]: time="2025-07-10T08:10:34.715084037Z" level=info msg="Ensure that sandbox 1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff in task-service has been cleanup successfully" Jul 10 08:10:34.752757 sshd[6379]: Accepted publickey for core from 172.24.4.1 port 37226 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM Jul 10 08:10:34.758015 sshd-session[6379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 08:10:34.771070 systemd-logind[1499]: New session 16 of user core. 
Jul 10 08:10:34.780211 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 10 08:10:34.799352 containerd[1541]: time="2025-07-10T08:10:34.799246013Z" level=info msg="StartContainer for \"677459c9c574d506cf08b13c9cf0a9273a4b94b9ec7764339e8b2f0beb6c387c\" returns successfully" Jul 10 08:10:34.802592 containerd[1541]: time="2025-07-10T08:10:34.802529358Z" level=info msg="StartContainer for \"acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5\" returns successfully" Jul 10 08:10:34.830475 containerd[1541]: time="2025-07-10T08:10:34.830148286Z" level=info msg="RemovePodSandbox \"1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff\" returns successfully" Jul 10 08:10:34.833069 containerd[1541]: time="2025-07-10T08:10:34.832992600Z" level=info msg="StopPodSandbox for \"7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658\"" Jul 10 08:10:35.052607 containerd[1541]: 2025-07-10 08:10:34.965 [WARNING][6505] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--2glfs-eth0" Jul 10 08:10:35.052607 containerd[1541]: 2025-07-10 08:10:34.965 [INFO][6505] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Jul 10 08:10:35.052607 containerd[1541]: 2025-07-10 08:10:34.965 [INFO][6505] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" iface="eth0" netns="" Jul 10 08:10:35.052607 containerd[1541]: 2025-07-10 08:10:34.965 [INFO][6505] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Jul 10 08:10:35.052607 containerd[1541]: 2025-07-10 08:10:34.965 [INFO][6505] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Jul 10 08:10:35.052607 containerd[1541]: 2025-07-10 08:10:35.021 [INFO][6514] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" HandleID="k8s-pod-network.7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--2glfs-eth0" Jul 10 08:10:35.052607 containerd[1541]: 2025-07-10 08:10:35.021 [INFO][6514] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 08:10:35.052607 containerd[1541]: 2025-07-10 08:10:35.021 [INFO][6514] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 08:10:35.052607 containerd[1541]: 2025-07-10 08:10:35.042 [WARNING][6514] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" HandleID="k8s-pod-network.7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--2glfs-eth0" Jul 10 08:10:35.052607 containerd[1541]: 2025-07-10 08:10:35.042 [INFO][6514] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" HandleID="k8s-pod-network.7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--2glfs-eth0" Jul 10 08:10:35.052607 containerd[1541]: 2025-07-10 08:10:35.046 [INFO][6514] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 08:10:35.052607 containerd[1541]: 2025-07-10 08:10:35.050 [INFO][6505] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Jul 10 08:10:35.056065 containerd[1541]: time="2025-07-10T08:10:35.053115589Z" level=info msg="TearDown network for sandbox \"7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658\" successfully" Jul 10 08:10:35.056065 containerd[1541]: time="2025-07-10T08:10:35.053247471Z" level=info msg="StopPodSandbox for \"7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658\" returns successfully" Jul 10 08:10:35.056065 containerd[1541]: time="2025-07-10T08:10:35.054314981Z" level=info msg="RemovePodSandbox for \"7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658\"" Jul 10 08:10:35.056065 containerd[1541]: time="2025-07-10T08:10:35.054347921Z" level=info msg="Forcibly stopping sandbox \"7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658\"" Jul 10 08:10:35.264073 containerd[1541]: 2025-07-10 08:10:35.186 [WARNING][6530] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean 
up ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" WorkloadEndpoint="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--2glfs-eth0" Jul 10 08:10:35.264073 containerd[1541]: 2025-07-10 08:10:35.188 [INFO][6530] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Jul 10 08:10:35.264073 containerd[1541]: 2025-07-10 08:10:35.188 [INFO][6530] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" iface="eth0" netns="" Jul 10 08:10:35.264073 containerd[1541]: 2025-07-10 08:10:35.188 [INFO][6530] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Jul 10 08:10:35.264073 containerd[1541]: 2025-07-10 08:10:35.188 [INFO][6530] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Jul 10 08:10:35.264073 containerd[1541]: 2025-07-10 08:10:35.242 [INFO][6538] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" HandleID="k8s-pod-network.7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--2glfs-eth0" Jul 10 08:10:35.264073 containerd[1541]: 2025-07-10 08:10:35.242 [INFO][6538] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 08:10:35.264073 containerd[1541]: 2025-07-10 08:10:35.242 [INFO][6538] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 08:10:35.264073 containerd[1541]: 2025-07-10 08:10:35.255 [WARNING][6538] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" HandleID="k8s-pod-network.7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--2glfs-eth0" Jul 10 08:10:35.264073 containerd[1541]: 2025-07-10 08:10:35.255 [INFO][6538] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" HandleID="k8s-pod-network.7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Workload="ci--4391--0--0--n--29a01ddc69.novalocal-k8s-calico--apiserver--698b6b4cc7--2glfs-eth0" Jul 10 08:10:35.264073 containerd[1541]: 2025-07-10 08:10:35.259 [INFO][6538] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 08:10:35.264073 containerd[1541]: 2025-07-10 08:10:35.262 [INFO][6530] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658" Jul 10 08:10:35.264632 containerd[1541]: time="2025-07-10T08:10:35.264144155Z" level=info msg="TearDown network for sandbox \"7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658\" successfully" Jul 10 08:10:35.268723 containerd[1541]: time="2025-07-10T08:10:35.268684431Z" level=info msg="Ensure that sandbox 7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658 in task-service has been cleanup successfully" Jul 10 08:10:35.548806 containerd[1541]: time="2025-07-10T08:10:35.548723729Z" level=info msg="RemovePodSandbox \"7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658\" returns successfully" Jul 10 08:10:35.982552 sshd[6493]: Connection closed by 172.24.4.1 port 37226 Jul 10 08:10:35.984536 sshd-session[6379]: pam_unix(sshd:session): session closed for user core Jul 10 08:10:35.997588 systemd[1]: sshd@13-172.24.4.5:22-172.24.4.1:37226.service: Deactivated successfully. 
Jul 10 08:10:36.000366 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 08:10:36.004562 systemd-logind[1499]: Session 16 logged out. Waiting for processes to exit. Jul 10 08:10:36.010252 systemd[1]: Started sshd@14-172.24.4.5:22-172.24.4.1:56014.service - OpenSSH per-connection server daemon (172.24.4.1:56014). Jul 10 08:10:36.013298 systemd-logind[1499]: Removed session 16. Jul 10 08:10:36.106929 containerd[1541]: time="2025-07-10T08:10:36.106861258Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88f1ac5ff1454222c08bd22caa0f4fc8adf44e93c2488aa119f523dcbab21310\" id:\"742d41fb2068c46b629ca5776f73a22b466e70dfc1eeb8d800d2ef866918ca9a\" pid:6561 exited_at:{seconds:1752135036 nanos:103006110}" Jul 10 08:10:37.648520 sshd[6575]: Accepted publickey for core from 172.24.4.1 port 56014 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM Jul 10 08:10:37.656346 sshd-session[6575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 08:10:37.672535 systemd-logind[1499]: New session 17 of user core. Jul 10 08:10:37.680166 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 10 08:10:38.411008 kubelet[2824]: I0710 08:10:38.410303 2824 scope.go:117] "RemoveContainer" containerID="e08bb40de573b59c26d3974ec83a0d676c8a3918a5105f2de2a4a27a6cdd3aa2" Jul 10 08:10:38.422520 containerd[1541]: time="2025-07-10T08:10:38.422282087Z" level=info msg="CreateContainer within sandbox \"83e1964542d4294c46b7b8320377930353bf359abd94ba77da28dbe8cce1e7e6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:3,}" Jul 10 08:10:38.447203 containerd[1541]: time="2025-07-10T08:10:38.446676818Z" level=info msg="Container bbcf199ebdace02b4911d6f39bfbb41f5d69afccdcf1a16a2a325fd8cca2c84f: CDI devices from CRI Config.CDIDevices: []" Jul 10 08:10:38.461580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3754075944.mount: Deactivated successfully. 
Jul 10 08:10:38.489157 containerd[1541]: time="2025-07-10T08:10:38.489108421Z" level=info msg="CreateContainer within sandbox \"83e1964542d4294c46b7b8320377930353bf359abd94ba77da28dbe8cce1e7e6\" for &ContainerMetadata{Name:tigera-operator,Attempt:3,} returns container id \"bbcf199ebdace02b4911d6f39bfbb41f5d69afccdcf1a16a2a325fd8cca2c84f\"" Jul 10 08:10:38.490115 containerd[1541]: time="2025-07-10T08:10:38.490090828Z" level=info msg="StartContainer for \"bbcf199ebdace02b4911d6f39bfbb41f5d69afccdcf1a16a2a325fd8cca2c84f\"" Jul 10 08:10:38.491765 containerd[1541]: time="2025-07-10T08:10:38.491724649Z" level=info msg="connecting to shim bbcf199ebdace02b4911d6f39bfbb41f5d69afccdcf1a16a2a325fd8cca2c84f" address="unix:///run/containerd/s/4a621fafdac6b908a1bd19fb006eb1f6a38bed52ae649271397457c076b82963" protocol=ttrpc version=3 Jul 10 08:10:38.526139 systemd[1]: Started cri-containerd-bbcf199ebdace02b4911d6f39bfbb41f5d69afccdcf1a16a2a325fd8cca2c84f.scope - libcontainer container bbcf199ebdace02b4911d6f39bfbb41f5d69afccdcf1a16a2a325fd8cca2c84f. Jul 10 08:10:38.603649 containerd[1541]: time="2025-07-10T08:10:38.603543812Z" level=info msg="StartContainer for \"bbcf199ebdace02b4911d6f39bfbb41f5d69afccdcf1a16a2a325fd8cca2c84f\" returns successfully" Jul 10 08:10:38.618255 sshd[6579]: Connection closed by 172.24.4.1 port 56014 Jul 10 08:10:38.620259 sshd-session[6575]: pam_unix(sshd:session): session closed for user core Jul 10 08:10:38.625007 systemd[1]: sshd@14-172.24.4.5:22-172.24.4.1:56014.service: Deactivated successfully. Jul 10 08:10:38.628799 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 08:10:38.631588 systemd-logind[1499]: Session 17 logged out. Waiting for processes to exit. Jul 10 08:10:38.634263 systemd-logind[1499]: Removed session 17. Jul 10 08:10:43.658629 systemd[1]: Started sshd@15-172.24.4.5:22-172.24.4.1:44490.service - OpenSSH per-connection server daemon (172.24.4.1:44490). 
Jul 10 08:10:46.049904 containerd[1541]: time="2025-07-10T08:10:46.049780048Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8\" id:\"b917177822dc874c7a9bd1c216f940005480637466cc17f6456f8918a77a829c\" pid:6640 exited_at:{seconds:1752135046 nanos:48046051}" Jul 10 08:10:57.137121 kubelet[2824]: E0710 08:10:57.137055 2824 controller.go:195] "Failed to update lease" err="etcdserver: request timed out" Jul 10 08:11:00.010321 systemd[1]: cri-containerd-acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5.scope: Deactivated successfully. Jul 10 08:11:01.856068 containerd[1541]: time="2025-07-10T08:11:00.019104947Z" level=info msg="TaskExit event in podsandbox handler container_id:\"acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5\" id:\"acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5\" pid:6459 exit_status:1 exited_at:{seconds:1752135060 nanos:16865212}" Jul 10 08:11:01.856068 containerd[1541]: time="2025-07-10T08:11:00.019431642Z" level=info msg="received exit event container_id:\"acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5\" id:\"acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5\" pid:6459 exit_status:1 exited_at:{seconds:1752135060 nanos:16865212}" Jul 10 08:11:01.857224 kubelet[2824]: I0710 08:10:58.704831 2824 status_manager.go:890] "Failed to get status for pod" podUID="38962031e0206f3ff0de22fa27483fe0" pod="kube-system/kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal" err="etcdserver: request timed out" Jul 10 08:11:01.857224 kubelet[2824]: E0710 08:10:58.706456 2824 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal.1850d570eb1380a5 kube-system 1417 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal,UID:6bd47e81634a1fad90cea695d58949a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4391-0-0-n-29a01ddc69.novalocal,},FirstTimestamp:2025-07-10 08:08:56 +0000 UTC,LastTimestamp:2025-07-10 08:10:46.922284222 +0000 UTC m=+201.773569636,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4391-0-0-n-29a01ddc69.novalocal,}" Jul 10 08:11:00.011971 systemd[1]: cri-containerd-acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5.scope: Consumed 1.525s CPU time, 19.1M memory peak, 676K read from disk. Jul 10 08:11:00.090470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5-rootfs.mount: Deactivated successfully. 
Jul 10 08:11:01.873177 containerd[1541]: time="2025-07-10T08:11:01.873081436Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6\" id:\"e1c3f6fe33b466236612fc4802240f52543366c6fe1c7fa18a5d63edb58e4fc7\" pid:6663 exit_status:137 exited_at:{seconds:1752135061 nanos:860789011}" Jul 10 08:11:01.877972 containerd[1541]: time="2025-07-10T08:11:01.877837680Z" level=error msg="ExecSync for \"a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" Jul 10 08:11:01.881984 kubelet[2824]: E0710 08:11:01.878388 2824 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6" cmd=["/usr/bin/check-status","-r"] Jul 10 08:11:02.755709 kubelet[2824]: E0710 08:11:02.755580 2824 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-4391-0-0-n-29a01ddc69.novalocal\": the object has been modified; please apply your changes to the latest version and try again" Jul 10 08:11:05.963581 containerd[1541]: time="2025-07-10T08:11:05.963479649Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88f1ac5ff1454222c08bd22caa0f4fc8adf44e93c2488aa119f523dcbab21310\" id:\"c77ce078b0525a59b551bb7e07f28691d7f3c702f954777c98e4422aa6043139\" pid:6708 exited_at:{seconds:1752135065 nanos:962676048}" Jul 10 08:11:09.773493 kubelet[2824]: E0710 08:11:09.773189 2824 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal.1850d5707212cc59 kube-system 1423 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal,UID:6bd47e81634a1fad90cea695d58949a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4391-0-0-n-29a01ddc69.novalocal,},FirstTimestamp:2025-07-10 08:08:53 +0000 UTC,LastTimestamp:2025-07-10 08:10:47.232172587 +0000 UTC m=+202.083458001,Count:16,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4391-0-0-n-29a01ddc69.novalocal,}" Jul 10 08:11:09.810925 kubelet[2824]: I0710 08:11:09.810730 2824 status_manager.go:914] "Failed to update status for pod" pod="kube-system/kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70602aec-7ace-4ed6-9034-fa4030ab919d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-07-10T08:10:53Z\\\",\\\"message\\\":null,\\\"reason\\\":null,\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-07-10T08:10:53Z\\\",\\\"message\\\":null,\\\"reason\\\":null,\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"containerd://acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5\\\",\\\"image\\\":\\\"registry.k8s.io/kube-scheduler:v1.32.6\\\",\\\"imageID\\\":\\\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"containerd://29c543e6ef704dc3059d37bbcba5e42
ebc60513aa5925aa4f969353801a9bf7c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-07-10T08:10:13Z\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-07-10T08:09:10Z\\\"}},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-07-10T08:10:34Z\\\"}}}]}}\" for pod \"kube-system\"/\"kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal\": etcdserver: request timed out" Jul 10 08:11:10.020656 containerd[1541]: time="2025-07-10T08:11:10.020326506Z" level=error msg="failed to handle container TaskExit event container_id:\"acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5\" id:\"acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5\" pid:6459 exit_status:1 exited_at:{seconds:1752135060 nanos:16865212}" error="failed to stop container: failed to delete task: context deadline exceeded" Jul 10 08:11:11.597980 containerd[1541]: time="2025-07-10T08:11:11.597827596Z" level=info msg="TaskExit event container_id:\"acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5\" id:\"acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5\" pid:6459 exit_status:1 exited_at:{seconds:1752135060 nanos:16865212}" Jul 10 08:11:12.757139 kubelet[2824]: E0710 08:11:12.756814 2824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4391-0-0-n-29a01ddc69.novalocal?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Jul 10 08:11:14.513407 sshd[6624]: Accepted publickey for core from 172.24.4.1 port 44490 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM Jul 10 08:11:14.056259 systemd-logind[1499]: New session 18 of user core. 
Jul 10 08:11:14.024163 sshd-session[6624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 08:11:14.515595 containerd[1541]: time="2025-07-10T08:11:13.599384092Z" level=error msg="get state for acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5" error="context deadline exceeded" Jul 10 08:11:14.515595 containerd[1541]: time="2025-07-10T08:11:13.599616935Z" level=warning msg="unknown status" status=0 Jul 10 08:11:14.515595 containerd[1541]: time="2025-07-10T08:11:14.408841984Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6\" id:\"b3f2495e531f03d45e2a68b6d292ce810dce2e9242b56e6d96ecc3cfd7410a1c\" pid:6736 exit_status:1 exited_at:{seconds:1752135074 nanos:408244643}" Jul 10 08:11:14.515595 containerd[1541]: time="2025-07-10T08:11:14.485317290Z" level=error msg="ttrpc: received message on inactive stream" stream=37 Jul 10 08:11:14.515595 containerd[1541]: time="2025-07-10T08:11:14.485457611Z" level=error msg="ttrpc: received message on inactive stream" stream=41 Jul 10 08:11:14.515595 containerd[1541]: time="2025-07-10T08:11:14.493292363Z" level=info msg="Ensure that container acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5 in task-service has been cleanup successfully" Jul 10 08:11:14.062519 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 10 08:11:15.157307 sshd[6722]: Connection closed by 172.24.4.1 port 44490 Jul 10 08:11:15.159192 sshd-session[6624]: pam_unix(sshd:session): session closed for user core Jul 10 08:11:15.164015 systemd[1]: sshd@15-172.24.4.5:22-172.24.4.1:44490.service: Deactivated successfully. Jul 10 08:11:15.169347 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 08:11:15.173163 systemd-logind[1499]: Session 18 logged out. Waiting for processes to exit. Jul 10 08:11:15.175161 systemd-logind[1499]: Removed session 18. 
Jul 10 08:11:15.499422 kubelet[2824]: I0710 08:11:15.499054 2824 scope.go:117] "RemoveContainer" containerID="29c543e6ef704dc3059d37bbcba5e42ebc60513aa5925aa4f969353801a9bf7c" Jul 10 08:11:15.499422 kubelet[2824]: I0710 08:11:15.499277 2824 scope.go:117] "RemoveContainer" containerID="acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5" Jul 10 08:11:15.501434 kubelet[2824]: E0710 08:11:15.499573 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal_kube-system(8e6a146caca41331ef6aa6523967fb66)\"" pod="kube-system/kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal" podUID="8e6a146caca41331ef6aa6523967fb66" Jul 10 08:11:15.507373 containerd[1541]: time="2025-07-10T08:11:15.507254104Z" level=info msg="RemoveContainer for \"29c543e6ef704dc3059d37bbcba5e42ebc60513aa5925aa4f969353801a9bf7c\"" Jul 10 08:11:15.643103 containerd[1541]: time="2025-07-10T08:11:15.643021738Z" level=info msg="RemoveContainer for \"29c543e6ef704dc3059d37bbcba5e42ebc60513aa5925aa4f969353801a9bf7c\" returns successfully" Jul 10 08:11:15.720365 containerd[1541]: time="2025-07-10T08:11:15.720257045Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8\" id:\"83d3ff2d2f60728f41195c24225754488940d51e776fb2277d853fc0d805e3b9\" pid:6769 exited_at:{seconds:1752135075 nanos:719609560}" Jul 10 08:11:20.555936 kubelet[2824]: I0710 08:11:20.554509 2824 scope.go:117] "RemoveContainer" containerID="acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5" Jul 10 08:11:25.601218 systemd[1]: Started sshd@16-172.24.4.5:22-172.24.4.1:56590.service - OpenSSH per-connection server daemon (172.24.4.1:56590). 
Jul 10 08:11:33.191171 systemd[1]: cri-containerd-677459c9c574d506cf08b13c9cf0a9273a4b94b9ec7764339e8b2f0beb6c387c.scope: Deactivated successfully. Jul 10 08:11:33.194690 systemd[1]: cri-containerd-677459c9c574d506cf08b13c9cf0a9273a4b94b9ec7764339e8b2f0beb6c387c.scope: Consumed 2.402s CPU time, 51.7M memory peak, 1000K read from disk. Jul 10 08:11:33.305827 containerd[1541]: time="2025-07-10T08:11:33.305614478Z" level=info msg="TaskExit event in podsandbox handler container_id:\"677459c9c574d506cf08b13c9cf0a9273a4b94b9ec7764339e8b2f0beb6c387c\" id:\"677459c9c574d506cf08b13c9cf0a9273a4b94b9ec7764339e8b2f0beb6c387c\" pid:6467 exit_status:1 exited_at:{seconds:1752135093 nanos:290011149}" Jul 10 08:11:33.386527 containerd[1541]: time="2025-07-10T08:11:33.305845892Z" level=info msg="received exit event container_id:\"677459c9c574d506cf08b13c9cf0a9273a4b94b9ec7764339e8b2f0beb6c387c\" id:\"677459c9c574d506cf08b13c9cf0a9273a4b94b9ec7764339e8b2f0beb6c387c\" pid:6467 exit_status:1 exited_at:{seconds:1752135093 nanos:290011149}" Jul 10 08:11:33.386527 containerd[1541]: time="2025-07-10T08:11:33.364189790Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bbcf199ebdace02b4911d6f39bfbb41f5d69afccdcf1a16a2a325fd8cca2c84f\" id:\"bbcf199ebdace02b4911d6f39bfbb41f5d69afccdcf1a16a2a325fd8cca2c84f\" pid:6599 exit_status:1 exited_at:{seconds:1752135093 nanos:363415071}" Jul 10 08:11:33.386527 containerd[1541]: time="2025-07-10T08:11:33.364266815Z" level=info msg="received exit event container_id:\"bbcf199ebdace02b4911d6f39bfbb41f5d69afccdcf1a16a2a325fd8cca2c84f\" id:\"bbcf199ebdace02b4911d6f39bfbb41f5d69afccdcf1a16a2a325fd8cca2c84f\" pid:6599 exit_status:1 exited_at:{seconds:1752135093 nanos:363415071}" Jul 10 08:11:33.358258 systemd[1]: cri-containerd-bbcf199ebdace02b4911d6f39bfbb41f5d69afccdcf1a16a2a325fd8cca2c84f.scope: Deactivated successfully. 
Jul 10 08:11:33.386836 kubelet[2824]: E0710 08:11:33.308052 2824 controller.go:195] "Failed to update lease" err="etcdserver: request timed out"
Jul 10 08:11:33.386836 kubelet[2824]: I0710 08:11:33.314118 2824 status_manager.go:890] "Failed to get status for pod" podUID="8e6a146caca41331ef6aa6523967fb66" pod="kube-system/kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal" err="etcdserver: request timed out"
Jul 10 08:11:33.386836 kubelet[2824]: E0710 08:11:33.313248 2824 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event="&Event{ObjectMeta:{kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal.1850d5740bc07b0e kube-system 1422 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal,UID:8e6a146caca41331ef6aa6523967fb66,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"registry.k8s.io/kube-scheduler:v1.32.6\" already present on machine,Source:EventSource{Component:kubelet,Host:ci-4391-0-0-n-29a01ddc69.novalocal,},FirstTimestamp:2025-07-10 08:09:09 +0000 UTC,LastTimestamp:2025-07-10 08:11:20.619769303 +0000 UTC m=+235.471054717,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4391-0-0-n-29a01ddc69.novalocal,}"
Jul 10 08:11:33.358903 systemd[1]: cri-containerd-bbcf199ebdace02b4911d6f39bfbb41f5d69afccdcf1a16a2a325fd8cca2c84f.scope: Consumed 1.660s CPU time, 73.3M memory peak, 836K read from disk.
Jul 10 08:11:33.388535 containerd[1541]: time="2025-07-10T08:11:33.387511905Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6\" id:\"79be107e5e538879dfcea8518c99b40ad54c65ce1542b04e2e6c61ad8b35d7f9\" pid:6802 exit_status:137 exited_at:{seconds:1752135093 nanos:385420423}"
Jul 10 08:11:33.391380 containerd[1541]: time="2025-07-10T08:11:33.391290270Z" level=info msg="CreateContainer within sandbox \"b3f94042d4bdb0254aafe8abfac01c5b5c963cbb44a5244334eb9404284dd8a2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:3,}"
Jul 10 08:11:33.402998 containerd[1541]: time="2025-07-10T08:11:33.400988728Z" level=error msg="ExecSync for \"a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded"
Jul 10 08:11:33.403183 kubelet[2824]: E0710 08:11:33.401326 2824 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6" cmd=["/usr/bin/check-status","-r"]
Jul 10 08:11:33.506661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbcf199ebdace02b4911d6f39bfbb41f5d69afccdcf1a16a2a325fd8cca2c84f-rootfs.mount: Deactivated successfully.
Jul 10 08:11:33.525864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-677459c9c574d506cf08b13c9cf0a9273a4b94b9ec7764339e8b2f0beb6c387c-rootfs.mount: Deactivated successfully.
Jul 10 08:11:33.594740 kubelet[2824]: E0710 08:11:33.594618 2824 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-4391-0-0-n-29a01ddc69.novalocal\": the object has been modified; please apply your changes to the latest version and try again"
Jul 10 08:11:33.655582 containerd[1541]: time="2025-07-10T08:11:33.655531718Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8\" id:\"68f100eccf9bbae2d212d73cf64c7b66ceb8588dc0dea6162751fff782153c28\" pid:6846 exited_at:{seconds:1752135093 nanos:654021874}"
Jul 10 08:11:34.010573 containerd[1541]: time="2025-07-10T08:11:34.010487935Z" level=info msg="Container 4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee: CDI devices from CRI Config.CDIDevices: []"
Jul 10 08:11:34.288600 kubelet[2824]: I0710 08:11:34.287798 2824 scope.go:117] "RemoveContainer" containerID="ad5cb69d9775d7d029ab351caa5fa9577ced1890bf3c799a26d74e3e096edaec"
Jul 10 08:11:34.289835 kubelet[2824]: I0710 08:11:34.289531 2824 scope.go:117] "RemoveContainer" containerID="677459c9c574d506cf08b13c9cf0a9273a4b94b9ec7764339e8b2f0beb6c387c"
Jul 10 08:11:34.290549 kubelet[2824]: E0710 08:11:34.290292 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal_kube-system(38962031e0206f3ff0de22fa27483fe0)\"" pod="kube-system/kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal" podUID="38962031e0206f3ff0de22fa27483fe0"
Jul 10 08:11:34.302358 containerd[1541]: time="2025-07-10T08:11:34.302223549Z" level=info msg="RemoveContainer for \"ad5cb69d9775d7d029ab351caa5fa9577ced1890bf3c799a26d74e3e096edaec\""
Jul 10 08:11:34.307133 kubelet[2824]: I0710 08:11:34.306883 2824 scope.go:117] "RemoveContainer" containerID="bbcf199ebdace02b4911d6f39bfbb41f5d69afccdcf1a16a2a325fd8cca2c84f"
Jul 10 08:11:34.308062 kubelet[2824]: E0710 08:11:34.307911 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=tigera-operator pod=tigera-operator-747864d56d-wxpk8_tigera-operator(4732e9a2-026f-4c58-a99c-7c0b52405800)\"" pod="tigera-operator/tigera-operator-747864d56d-wxpk8" podUID="4732e9a2-026f-4c58-a99c-7c0b52405800"
Jul 10 08:11:34.315476 containerd[1541]: time="2025-07-10T08:11:34.314938654Z" level=info msg="CreateContainer within sandbox \"b3f94042d4bdb0254aafe8abfac01c5b5c963cbb44a5244334eb9404284dd8a2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:3,} returns container id \"4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee\""
Jul 10 08:11:34.316974 containerd[1541]: time="2025-07-10T08:11:34.316218428Z" level=info msg="StartContainer for \"4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee\""
Jul 10 08:11:34.318514 containerd[1541]: time="2025-07-10T08:11:34.318457586Z" level=info msg="connecting to shim 4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee" address="unix:///run/containerd/s/30c72d7507561355487f6ee5d36c7fe4d7d1edc1dc1abfe41203881c95e15e70" protocol=ttrpc version=3
Jul 10 08:11:34.356342 systemd[1]: Started cri-containerd-4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee.scope - libcontainer container 4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee.
Jul 10 08:11:34.361507 containerd[1541]: time="2025-07-10T08:11:34.361461718Z" level=info msg="RemoveContainer for \"ad5cb69d9775d7d029ab351caa5fa9577ced1890bf3c799a26d74e3e096edaec\" returns successfully"
Jul 10 08:11:34.362830 kubelet[2824]: I0710 08:11:34.362801 2824 scope.go:117] "RemoveContainer" containerID="e08bb40de573b59c26d3974ec83a0d676c8a3918a5105f2de2a4a27a6cdd3aa2"
Jul 10 08:11:34.374345 containerd[1541]: time="2025-07-10T08:11:34.374300974Z" level=info msg="RemoveContainer for \"e08bb40de573b59c26d3974ec83a0d676c8a3918a5105f2de2a4a27a6cdd3aa2\""
Jul 10 08:11:34.425066 containerd[1541]: time="2025-07-10T08:11:34.424999658Z" level=info msg="RemoveContainer for \"e08bb40de573b59c26d3974ec83a0d676c8a3918a5105f2de2a4a27a6cdd3aa2\" returns successfully"
Jul 10 08:11:34.512983 containerd[1541]: time="2025-07-10T08:11:34.512870935Z" level=info msg="StartContainer for \"4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee\" returns successfully"
Jul 10 08:11:35.912379 containerd[1541]: time="2025-07-10T08:11:35.912268460Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88f1ac5ff1454222c08bd22caa0f4fc8adf44e93c2488aa119f523dcbab21310\" id:\"7c3b224539b73fdd7166887074cb7d21956d07fc3e7fadc837b83f0ed6c2c9e9\" pid:6921 exit_status:1 exited_at:{seconds:1752135095 nanos:911815152}"
Jul 10 08:11:36.692391 sshd[6813]: Accepted publickey for core from 172.24.4.1 port 56590 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM
Jul 10 08:11:36.696846 sshd-session[6813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 08:11:36.716232 systemd-logind[1499]: New session 19 of user core.
Jul 10 08:11:36.726386 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 10 08:11:37.638799 sshd[6932]: Connection closed by 172.24.4.1 port 56590
Jul 10 08:11:37.640896 sshd-session[6813]: pam_unix(sshd:session): session closed for user core
Jul 10 08:11:37.650772 systemd[1]: sshd@16-172.24.4.5:22-172.24.4.1:56590.service: Deactivated successfully.
Jul 10 08:11:37.655660 systemd[1]: session-19.scope: Deactivated successfully.
Jul 10 08:11:37.658565 systemd-logind[1499]: Session 19 logged out. Waiting for processes to exit.
Jul 10 08:11:37.660408 systemd-logind[1499]: Removed session 19.
Jul 10 08:11:39.664217 kubelet[2824]: I0710 08:11:39.664133 2824 scope.go:117] "RemoveContainer" containerID="677459c9c574d506cf08b13c9cf0a9273a4b94b9ec7764339e8b2f0beb6c387c"
Jul 10 08:11:39.666233 kubelet[2824]: E0710 08:11:39.664374 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal_kube-system(38962031e0206f3ff0de22fa27483fe0)\"" pod="kube-system/kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal" podUID="38962031e0206f3ff0de22fa27483fe0"
Jul 10 08:11:42.673023 systemd[1]: Started sshd@17-172.24.4.5:22-172.24.4.1:54658.service - OpenSSH per-connection server daemon (172.24.4.1:54658).
Jul 10 08:11:43.955898 sshd[6956]: Accepted publickey for core from 172.24.4.1 port 54658 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM
Jul 10 08:11:43.957874 sshd-session[6956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 08:11:43.968536 systemd-logind[1499]: New session 20 of user core.
Jul 10 08:11:43.972158 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 10 08:11:44.686497 sshd[6959]: Connection closed by 172.24.4.1 port 54658
Jul 10 08:11:44.688852 sshd-session[6956]: pam_unix(sshd:session): session closed for user core
Jul 10 08:11:44.703656 systemd[1]: sshd@17-172.24.4.5:22-172.24.4.1:54658.service: Deactivated successfully.
Jul 10 08:11:44.715507 systemd[1]: session-20.scope: Deactivated successfully.
Jul 10 08:11:44.719697 systemd-logind[1499]: Session 20 logged out. Waiting for processes to exit.
Jul 10 08:11:44.727517 systemd-logind[1499]: Removed session 20.
Jul 10 08:11:45.762157 containerd[1541]: time="2025-07-10T08:11:45.762072207Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8\" id:\"545788557ba74305e18adfd4abb69ec878e9608ca7002ab82b6dc74c445dc7a7\" pid:6982 exited_at:{seconds:1752135105 nanos:744267231}"
Jul 10 08:11:46.411123 kubelet[2824]: I0710 08:11:46.410872 2824 scope.go:117] "RemoveContainer" containerID="bbcf199ebdace02b4911d6f39bfbb41f5d69afccdcf1a16a2a325fd8cca2c84f"
Jul 10 08:11:46.413870 kubelet[2824]: E0710 08:11:46.413555 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=tigera-operator pod=tigera-operator-747864d56d-wxpk8_tigera-operator(4732e9a2-026f-4c58-a99c-7c0b52405800)\"" pod="tigera-operator/tigera-operator-747864d56d-wxpk8" podUID="4732e9a2-026f-4c58-a99c-7c0b52405800"
Jul 10 08:11:49.721651 systemd[1]: Started sshd@18-172.24.4.5:22-172.24.4.1:54358.service - OpenSSH per-connection server daemon (172.24.4.1:54358).
Jul 10 08:11:50.721029 containerd[1541]: time="2025-07-10T08:11:50.720250044Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6\" id:\"cab9985f8fc37654f35617268655fed08495aa436c62d36ed6a61c96db9028e4\" pid:7010 exited_at:{seconds:1752135110 nanos:719569096}"
Jul 10 08:11:51.051590 sshd[6995]: Accepted publickey for core from 172.24.4.1 port 54358 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM
Jul 10 08:11:51.055456 sshd-session[6995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 08:11:51.068941 systemd-logind[1499]: New session 21 of user core.
Jul 10 08:11:51.081338 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 10 08:11:52.018585 sshd[7020]: Connection closed by 172.24.4.1 port 54358
Jul 10 08:11:52.022286 sshd-session[6995]: pam_unix(sshd:session): session closed for user core
Jul 10 08:11:52.054874 systemd[1]: sshd@18-172.24.4.5:22-172.24.4.1:54358.service: Deactivated successfully.
Jul 10 08:11:52.067655 systemd[1]: session-21.scope: Deactivated successfully.
Jul 10 08:11:52.070406 systemd-logind[1499]: Session 21 logged out. Waiting for processes to exit.
Jul 10 08:11:52.076345 systemd-logind[1499]: Removed session 21.
Jul 10 08:11:52.412645 kubelet[2824]: I0710 08:11:52.412592 2824 scope.go:117] "RemoveContainer" containerID="677459c9c574d506cf08b13c9cf0a9273a4b94b9ec7764339e8b2f0beb6c387c"
Jul 10 08:11:52.413994 kubelet[2824]: E0710 08:11:52.413487 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal_kube-system(38962031e0206f3ff0de22fa27483fe0)\"" pod="kube-system/kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal" podUID="38962031e0206f3ff0de22fa27483fe0"
Jul 10 08:11:57.070792 systemd[1]: Started sshd@19-172.24.4.5:22-172.24.4.1:58900.service - OpenSSH per-connection server daemon (172.24.4.1:58900).
Jul 10 08:11:58.350115 sshd[7031]: Accepted publickey for core from 172.24.4.1 port 58900 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM
Jul 10 08:11:58.354403 sshd-session[7031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 08:11:58.372104 systemd-logind[1499]: New session 22 of user core.
Jul 10 08:11:58.390390 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 10 08:11:59.299737 sshd[7034]: Connection closed by 172.24.4.1 port 58900
Jul 10 08:11:59.301331 sshd-session[7031]: pam_unix(sshd:session): session closed for user core
Jul 10 08:11:59.311265 systemd[1]: sshd@19-172.24.4.5:22-172.24.4.1:58900.service: Deactivated successfully.
Jul 10 08:11:59.329599 systemd[1]: session-22.scope: Deactivated successfully.
Jul 10 08:11:59.332934 systemd-logind[1499]: Session 22 logged out. Waiting for processes to exit.
Jul 10 08:11:59.338553 systemd-logind[1499]: Removed session 22.
Jul 10 08:12:01.411683 kubelet[2824]: I0710 08:12:01.411382 2824 scope.go:117] "RemoveContainer" containerID="bbcf199ebdace02b4911d6f39bfbb41f5d69afccdcf1a16a2a325fd8cca2c84f"
Jul 10 08:12:01.415797 kubelet[2824]: E0710 08:12:01.412806 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=tigera-operator pod=tigera-operator-747864d56d-wxpk8_tigera-operator(4732e9a2-026f-4c58-a99c-7c0b52405800)\"" pod="tigera-operator/tigera-operator-747864d56d-wxpk8" podUID="4732e9a2-026f-4c58-a99c-7c0b52405800"
Jul 10 08:12:04.340496 systemd[1]: Started sshd@20-172.24.4.5:22-172.24.4.1:52626.service - OpenSSH per-connection server daemon (172.24.4.1:52626).
Jul 10 08:12:05.581126 sshd[7048]: Accepted publickey for core from 172.24.4.1 port 52626 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM
Jul 10 08:12:05.584825 sshd-session[7048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 08:12:05.600889 systemd-logind[1499]: New session 23 of user core.
Jul 10 08:12:05.616443 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 10 08:12:06.019591 containerd[1541]: time="2025-07-10T08:12:06.019434810Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88f1ac5ff1454222c08bd22caa0f4fc8adf44e93c2488aa119f523dcbab21310\" id:\"a216170e5453dbd481cebf233bf5e83723e14732eee9639608f2cf7688b7a926\" pid:7064 exited_at:{seconds:1752135126 nanos:18708182}"
Jul 10 08:12:06.386001 sshd[7051]: Connection closed by 172.24.4.1 port 52626
Jul 10 08:12:06.387173 sshd-session[7048]: pam_unix(sshd:session): session closed for user core
Jul 10 08:12:06.392306 systemd-logind[1499]: Session 23 logged out. Waiting for processes to exit.
Jul 10 08:12:06.393006 systemd[1]: sshd@20-172.24.4.5:22-172.24.4.1:52626.service: Deactivated successfully.
Jul 10 08:12:06.395843 systemd[1]: session-23.scope: Deactivated successfully.
Jul 10 08:12:06.399789 systemd-logind[1499]: Removed session 23.
Jul 10 08:12:06.410330 kubelet[2824]: I0710 08:12:06.410280 2824 scope.go:117] "RemoveContainer" containerID="677459c9c574d506cf08b13c9cf0a9273a4b94b9ec7764339e8b2f0beb6c387c"
Jul 10 08:12:06.416575 containerd[1541]: time="2025-07-10T08:12:06.416490930Z" level=info msg="CreateContainer within sandbox \"f42b79f77704a99f978aaf4a6f08c28ca92ac2d532a0ba009eb686dcd899def2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:3,}"
Jul 10 08:12:06.451123 containerd[1541]: time="2025-07-10T08:12:06.447534988Z" level=info msg="Container fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395: CDI devices from CRI Config.CDIDevices: []"
Jul 10 08:12:06.491133 containerd[1541]: time="2025-07-10T08:12:06.491064122Z" level=info msg="CreateContainer within sandbox \"f42b79f77704a99f978aaf4a6f08c28ca92ac2d532a0ba009eb686dcd899def2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:3,} returns container id \"fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395\""
Jul 10 08:12:06.492349 containerd[1541]: time="2025-07-10T08:12:06.492255594Z" level=info msg="StartContainer for \"fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395\""
Jul 10 08:12:06.495321 containerd[1541]: time="2025-07-10T08:12:06.495272752Z" level=info msg="connecting to shim fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395" address="unix:///run/containerd/s/43474bc45c8b9396187ac29754ef5b498c52f78fc73669c2a63aa40d005548c4" protocol=ttrpc version=3
Jul 10 08:12:06.534154 systemd[1]: Started cri-containerd-fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395.scope - libcontainer container fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395.
Jul 10 08:12:06.619963 containerd[1541]: time="2025-07-10T08:12:06.619814404Z" level=info msg="StartContainer for \"fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395\" returns successfully"
Jul 10 08:12:11.429514 systemd[1]: Started sshd@21-172.24.4.5:22-172.24.4.1:52636.service - OpenSSH per-connection server daemon (172.24.4.1:52636).
Jul 10 08:12:12.411984 kubelet[2824]: I0710 08:12:12.411382 2824 scope.go:117] "RemoveContainer" containerID="bbcf199ebdace02b4911d6f39bfbb41f5d69afccdcf1a16a2a325fd8cca2c84f"
Jul 10 08:12:12.414152 kubelet[2824]: E0710 08:12:12.412166 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=tigera-operator pod=tigera-operator-747864d56d-wxpk8_tigera-operator(4732e9a2-026f-4c58-a99c-7c0b52405800)\"" pod="tigera-operator/tigera-operator-747864d56d-wxpk8" podUID="4732e9a2-026f-4c58-a99c-7c0b52405800"
Jul 10 08:12:12.768217 sshd[7115]: Accepted publickey for core from 172.24.4.1 port 52636 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM
Jul 10 08:12:12.772940 sshd-session[7115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 08:12:12.791168 systemd-logind[1499]: New session 24 of user core.
Jul 10 08:12:12.799357 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 10 08:12:13.630841 sshd[7118]: Connection closed by 172.24.4.1 port 52636
Jul 10 08:12:13.630697 sshd-session[7115]: pam_unix(sshd:session): session closed for user core
Jul 10 08:12:13.639693 systemd-logind[1499]: Session 24 logged out. Waiting for processes to exit.
Jul 10 08:12:13.640764 systemd[1]: sshd@21-172.24.4.5:22-172.24.4.1:52636.service: Deactivated successfully.
Jul 10 08:12:13.650430 systemd[1]: session-24.scope: Deactivated successfully.
Jul 10 08:12:13.655244 systemd-logind[1499]: Removed session 24.
Jul 10 08:12:14.473592 containerd[1541]: time="2025-07-10T08:12:14.467584372Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6\" id:\"338f89821816e205cf578d0eea62a0512e8d0387f6830e609a212480716fa1ad\" pid:7140 exited_at:{seconds:1752135134 nanos:463248684}"
Jul 10 08:12:15.753710 containerd[1541]: time="2025-07-10T08:12:15.753642839Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8\" id:\"8f5749ea95f132082b4474958b3946499bbae148e05eb31a244d9a60f0ae1eaf\" pid:7163 exited_at:{seconds:1752135135 nanos:752108369}"
Jul 10 08:12:17.834043 containerd[1541]: time="2025-07-10T08:12:17.833842359Z" level=warning msg="container event discarded" container=7175bb657a9649d7d1c07815126ea11f4435398e104f7237830d3a64321d9003 type=CONTAINER_CREATED_EVENT
Jul 10 08:12:17.845534 containerd[1541]: time="2025-07-10T08:12:17.845359443Z" level=warning msg="container event discarded" container=7175bb657a9649d7d1c07815126ea11f4435398e104f7237830d3a64321d9003 type=CONTAINER_STARTED_EVENT
Jul 10 08:12:17.845534 containerd[1541]: time="2025-07-10T08:12:17.845449462Z" level=warning msg="container event discarded" container=b3f94042d4bdb0254aafe8abfac01c5b5c963cbb44a5244334eb9404284dd8a2 type=CONTAINER_CREATED_EVENT
Jul 10 08:12:17.845534 containerd[1541]: time="2025-07-10T08:12:17.845473317Z" level=warning msg="container event discarded" container=b3f94042d4bdb0254aafe8abfac01c5b5c963cbb44a5244334eb9404284dd8a2 type=CONTAINER_STARTED_EVENT
Jul 10 08:12:17.894774 containerd[1541]: time="2025-07-10T08:12:17.894701248Z" level=warning msg="container event discarded" container=f42b79f77704a99f978aaf4a6f08c28ca92ac2d532a0ba009eb686dcd899def2 type=CONTAINER_CREATED_EVENT
Jul 10 08:12:17.894774 containerd[1541]: time="2025-07-10T08:12:17.894753187Z" level=warning msg="container event discarded" container=f42b79f77704a99f978aaf4a6f08c28ca92ac2d532a0ba009eb686dcd899def2 type=CONTAINER_STARTED_EVENT
Jul 10 08:12:17.894774 containerd[1541]: time="2025-07-10T08:12:17.894763987Z" level=warning msg="container event discarded" container=212c04093bf77fd10374a61fe14da2678d3192e48d7242d1c30fe4f9483256d2 type=CONTAINER_CREATED_EVENT
Jul 10 08:12:17.894774 containerd[1541]: time="2025-07-10T08:12:17.894773034Z" level=warning msg="container event discarded" container=898485fa8ca3aad154e7c61a92cbbee54884545bcd87d8bf1cb66cb3790f6c31 type=CONTAINER_CREATED_EVENT
Jul 10 08:12:17.938107 containerd[1541]: time="2025-07-10T08:12:17.937762228Z" level=warning msg="container event discarded" container=06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c type=CONTAINER_CREATED_EVENT
Jul 10 08:12:18.064277 containerd[1541]: time="2025-07-10T08:12:18.064178298Z" level=warning msg="container event discarded" container=898485fa8ca3aad154e7c61a92cbbee54884545bcd87d8bf1cb66cb3790f6c31 type=CONTAINER_STARTED_EVENT
Jul 10 08:12:18.079771 containerd[1541]: time="2025-07-10T08:12:18.079620734Z" level=warning msg="container event discarded" container=212c04093bf77fd10374a61fe14da2678d3192e48d7242d1c30fe4f9483256d2 type=CONTAINER_STARTED_EVENT
Jul 10 08:12:18.148499 containerd[1541]: time="2025-07-10T08:12:18.148151908Z" level=warning msg="container event discarded" container=06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c type=CONTAINER_STARTED_EVENT
Jul 10 08:12:18.653850 systemd[1]: Started sshd@22-172.24.4.5:22-172.24.4.1:57276.service - OpenSSH per-connection server daemon (172.24.4.1:57276).
Jul 10 08:12:19.931730 sshd[7176]: Accepted publickey for core from 172.24.4.1 port 57276 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM
Jul 10 08:12:19.935367 sshd-session[7176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 08:12:19.951584 systemd-logind[1499]: New session 25 of user core.
Jul 10 08:12:19.959308 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 10 08:12:20.695012 containerd[1541]: time="2025-07-10T08:12:20.694842240Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6\" id:\"40e1781a6b559f8af7896013f852aaaeaff019b3e1c0e715d8dde573cf889feb\" pid:7200 exited_at:{seconds:1752135140 nanos:693981137}"
Jul 10 08:12:20.756509 sshd[7179]: Connection closed by 172.24.4.1 port 57276
Jul 10 08:12:20.757463 sshd-session[7176]: pam_unix(sshd:session): session closed for user core
Jul 10 08:12:20.771782 systemd[1]: sshd@22-172.24.4.5:22-172.24.4.1:57276.service: Deactivated successfully.
Jul 10 08:12:20.776510 systemd[1]: session-25.scope: Deactivated successfully.
Jul 10 08:12:20.780341 systemd-logind[1499]: Session 25 logged out. Waiting for processes to exit.
Jul 10 08:12:20.783372 systemd-logind[1499]: Removed session 25.
Jul 10 08:12:24.410633 kubelet[2824]: I0710 08:12:24.410143 2824 scope.go:117] "RemoveContainer" containerID="bbcf199ebdace02b4911d6f39bfbb41f5d69afccdcf1a16a2a325fd8cca2c84f"
Jul 10 08:12:24.415201 containerd[1541]: time="2025-07-10T08:12:24.415135207Z" level=info msg="CreateContainer within sandbox \"83e1964542d4294c46b7b8320377930353bf359abd94ba77da28dbe8cce1e7e6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:4,}"
Jul 10 08:12:24.476848 containerd[1541]: time="2025-07-10T08:12:24.476734967Z" level=info msg="Container f2a9edc91690d1f928552ee713bedd222cd0a836f7fb7fecc26560fa8a9f80a0: CDI devices from CRI Config.CDIDevices: []"
Jul 10 08:12:24.499693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1671766997.mount: Deactivated successfully.
Jul 10 08:12:24.514275 containerd[1541]: time="2025-07-10T08:12:24.514217550Z" level=info msg="CreateContainer within sandbox \"83e1964542d4294c46b7b8320377930353bf359abd94ba77da28dbe8cce1e7e6\" for &ContainerMetadata{Name:tigera-operator,Attempt:4,} returns container id \"f2a9edc91690d1f928552ee713bedd222cd0a836f7fb7fecc26560fa8a9f80a0\""
Jul 10 08:12:24.516210 containerd[1541]: time="2025-07-10T08:12:24.515890435Z" level=info msg="StartContainer for \"f2a9edc91690d1f928552ee713bedd222cd0a836f7fb7fecc26560fa8a9f80a0\""
Jul 10 08:12:24.518333 containerd[1541]: time="2025-07-10T08:12:24.518186936Z" level=info msg="connecting to shim f2a9edc91690d1f928552ee713bedd222cd0a836f7fb7fecc26560fa8a9f80a0" address="unix:///run/containerd/s/4a621fafdac6b908a1bd19fb006eb1f6a38bed52ae649271397457c076b82963" protocol=ttrpc version=3
Jul 10 08:12:24.557158 systemd[1]: Started cri-containerd-f2a9edc91690d1f928552ee713bedd222cd0a836f7fb7fecc26560fa8a9f80a0.scope - libcontainer container f2a9edc91690d1f928552ee713bedd222cd0a836f7fb7fecc26560fa8a9f80a0.
Jul 10 08:12:24.610191 containerd[1541]: time="2025-07-10T08:12:24.610139583Z" level=info msg="StartContainer for \"f2a9edc91690d1f928552ee713bedd222cd0a836f7fb7fecc26560fa8a9f80a0\" returns successfully"
Jul 10 08:12:25.775274 systemd[1]: Started sshd@23-172.24.4.5:22-172.24.4.1:54282.service - OpenSSH per-connection server daemon (172.24.4.1:54282).
Jul 10 08:12:27.103718 sshd[7252]: Accepted publickey for core from 172.24.4.1 port 54282 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM
Jul 10 08:12:27.106363 sshd-session[7252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 08:12:27.123366 systemd-logind[1499]: New session 26 of user core.
Jul 10 08:12:27.133501 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 10 08:12:27.770439 sshd[7256]: Connection closed by 172.24.4.1 port 54282
Jul 10 08:12:27.774039 sshd-session[7252]: pam_unix(sshd:session): session closed for user core
Jul 10 08:12:27.796929 systemd[1]: sshd@23-172.24.4.5:22-172.24.4.1:54282.service: Deactivated successfully.
Jul 10 08:12:27.806903 systemd[1]: session-26.scope: Deactivated successfully.
Jul 10 08:12:27.811075 systemd-logind[1499]: Session 26 logged out. Waiting for processes to exit.
Jul 10 08:12:27.822082 systemd-logind[1499]: Removed session 26.
Jul 10 08:12:27.826607 systemd[1]: Started sshd@24-172.24.4.5:22-172.24.4.1:54298.service - OpenSSH per-connection server daemon (172.24.4.1:54298).
Jul 10 08:12:29.101176 sshd[7268]: Accepted publickey for core from 172.24.4.1 port 54298 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM
Jul 10 08:12:29.106217 sshd-session[7268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 08:12:29.146206 systemd-logind[1499]: New session 27 of user core.
Jul 10 08:12:29.162365 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 10 08:12:30.498611 containerd[1541]: time="2025-07-10T08:12:30.497878250Z" level=warning msg="container event discarded" container=6becf9c2c2fb15f87f7fa26bccb586f0b2a7fc355d7dcd7487f2a78509b3c83c type=CONTAINER_CREATED_EVENT
Jul 10 08:12:30.498611 containerd[1541]: time="2025-07-10T08:12:30.498591325Z" level=warning msg="container event discarded" container=6becf9c2c2fb15f87f7fa26bccb586f0b2a7fc355d7dcd7487f2a78509b3c83c type=CONTAINER_STARTED_EVENT
Jul 10 08:12:30.540362 containerd[1541]: time="2025-07-10T08:12:30.540113997Z" level=warning msg="container event discarded" container=83e1964542d4294c46b7b8320377930353bf359abd94ba77da28dbe8cce1e7e6 type=CONTAINER_CREATED_EVENT
Jul 10 08:12:30.540362 containerd[1541]: time="2025-07-10T08:12:30.540217322Z" level=warning msg="container event discarded" container=83e1964542d4294c46b7b8320377930353bf359abd94ba77da28dbe8cce1e7e6 type=CONTAINER_STARTED_EVENT
Jul 10 08:12:30.540362 containerd[1541]: time="2025-07-10T08:12:30.540299788Z" level=warning msg="container event discarded" container=f4c8cd94cdb0b4b12048ada4c34f9dd3cffc227ab49c00e72f9e6ced04f1d0fe type=CONTAINER_CREATED_EVENT
Jul 10 08:12:30.626251 containerd[1541]: time="2025-07-10T08:12:30.626102766Z" level=warning msg="container event discarded" container=f4c8cd94cdb0b4b12048ada4c34f9dd3cffc227ab49c00e72f9e6ced04f1d0fe type=CONTAINER_STARTED_EVENT
Jul 10 08:12:31.146814 containerd[1541]: time="2025-07-10T08:12:31.146758276Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8\" id:\"c828f8a711762f9d17c079fb7a9e42b9d789854237bccfe02320e918fefa5df9\" pid:7293 exited_at:{seconds:1752135151 nanos:145792734}"
Jul 10 08:12:31.330203 sshd[7271]: Connection closed by 172.24.4.1 port 54298
Jul 10 08:12:31.329475 sshd-session[7268]: pam_unix(sshd:session): session closed for user core
Jul 10 08:12:31.339426 systemd[1]: sshd@24-172.24.4.5:22-172.24.4.1:54298.service: Deactivated successfully.
Jul 10 08:12:31.344284 systemd[1]: session-27.scope: Deactivated successfully.
Jul 10 08:12:31.346791 systemd-logind[1499]: Session 27 logged out. Waiting for processes to exit.
Jul 10 08:12:31.350180 systemd[1]: Started sshd@25-172.24.4.5:22-172.24.4.1:54308.service - OpenSSH per-connection server daemon (172.24.4.1:54308).
Jul 10 08:12:31.353160 systemd-logind[1499]: Removed session 27.
Jul 10 08:12:32.529112 sshd[7308]: Accepted publickey for core from 172.24.4.1 port 54308 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM
Jul 10 08:12:32.531817 sshd-session[7308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 08:12:32.554666 systemd-logind[1499]: New session 28 of user core.
Jul 10 08:12:32.565817 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 10 08:12:33.469641 containerd[1541]: time="2025-07-10T08:12:33.469453135Z" level=warning msg="container event discarded" container=d6e4d88eaa32d5e419f4bcf2f7ef79066681308df794e6dd66e3319b794ead02 type=CONTAINER_CREATED_EVENT
Jul 10 08:12:33.669595 containerd[1541]: time="2025-07-10T08:12:33.669408260Z" level=warning msg="container event discarded" container=d6e4d88eaa32d5e419f4bcf2f7ef79066681308df794e6dd66e3319b794ead02 type=CONTAINER_STARTED_EVENT
Jul 10 08:12:35.086988 sshd[7311]: Connection closed by 172.24.4.1 port 54308
Jul 10 08:12:35.088193 sshd-session[7308]: pam_unix(sshd:session): session closed for user core
Jul 10 08:12:35.104775 systemd[1]: sshd@25-172.24.4.5:22-172.24.4.1:54308.service: Deactivated successfully.
Jul 10 08:12:35.108666 systemd[1]: session-28.scope: Deactivated successfully.
Jul 10 08:12:35.110887 systemd-logind[1499]: Session 28 logged out. Waiting for processes to exit.
Jul 10 08:12:35.115156 systemd-logind[1499]: Removed session 28.
Jul 10 08:12:35.116428 systemd[1]: Started sshd@26-172.24.4.5:22-172.24.4.1:43580.service - OpenSSH per-connection server daemon (172.24.4.1:43580).
Jul 10 08:12:35.903708 containerd[1541]: time="2025-07-10T08:12:35.903574212Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88f1ac5ff1454222c08bd22caa0f4fc8adf44e93c2488aa119f523dcbab21310\" id:\"199b76d0c69cf8afc1960960ce869dff369e78b1a381be0f27dadc76f0a1938e\" pid:7344 exited_at:{seconds:1752135155 nanos:902711233}" Jul 10 08:12:36.569258 sshd[7328]: Accepted publickey for core from 172.24.4.1 port 43580 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM Jul 10 08:12:36.572551 sshd-session[7328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 08:12:36.586745 systemd-logind[1499]: New session 29 of user core. Jul 10 08:12:36.599596 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 10 08:12:37.578134 sshd[7358]: Connection closed by 172.24.4.1 port 43580 Jul 10 08:12:37.579767 sshd-session[7328]: pam_unix(sshd:session): session closed for user core Jul 10 08:12:37.594399 systemd[1]: sshd@26-172.24.4.5:22-172.24.4.1:43580.service: Deactivated successfully. Jul 10 08:12:37.599333 systemd[1]: session-29.scope: Deactivated successfully. Jul 10 08:12:37.601082 systemd-logind[1499]: Session 29 logged out. Waiting for processes to exit. Jul 10 08:12:37.605826 systemd[1]: Started sshd@27-172.24.4.5:22-172.24.4.1:43590.service - OpenSSH per-connection server daemon (172.24.4.1:43590). Jul 10 08:12:37.610475 systemd-logind[1499]: Removed session 29. 
Jul 10 08:12:38.587260 containerd[1541]: time="2025-07-10T08:12:38.586944766Z" level=warning msg="container event discarded" container=d6e4d88eaa32d5e419f4bcf2f7ef79066681308df794e6dd66e3319b794ead02 type=CONTAINER_STOPPED_EVENT Jul 10 08:12:38.661670 containerd[1541]: time="2025-07-10T08:12:38.661490944Z" level=warning msg="container event discarded" container=493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36 type=CONTAINER_CREATED_EVENT Jul 10 08:12:38.716218 sshd[7368]: Accepted publickey for core from 172.24.4.1 port 43590 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM Jul 10 08:12:38.719182 sshd-session[7368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 08:12:38.730041 systemd-logind[1499]: New session 30 of user core. Jul 10 08:12:38.736225 systemd[1]: Started session-30.scope - Session 30 of User core. Jul 10 08:12:38.773136 containerd[1541]: time="2025-07-10T08:12:38.772996674Z" level=warning msg="container event discarded" container=493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36 type=CONTAINER_STARTED_EVENT Jul 10 08:12:39.679235 sshd[7371]: Connection closed by 172.24.4.1 port 43590 Jul 10 08:12:39.681106 sshd-session[7368]: pam_unix(sshd:session): session closed for user core Jul 10 08:12:39.692567 systemd[1]: sshd@27-172.24.4.5:22-172.24.4.1:43590.service: Deactivated successfully. Jul 10 08:12:39.699476 systemd[1]: session-30.scope: Deactivated successfully. Jul 10 08:12:39.703483 systemd-logind[1499]: Session 30 logged out. Waiting for processes to exit. Jul 10 08:12:39.707825 systemd-logind[1499]: Removed session 30. 
Jul 10 08:12:45.762564 containerd[1541]: time="2025-07-10T08:12:45.762435179Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8\" id:\"90161851e40c2fd1aab99de923661c16bdb1f9ee17675b691457f4d4b0fe2c27\" pid:7395 exited_at:{seconds:1752135165 nanos:760404311}" Jul 10 08:12:47.476580 systemd[1]: Started sshd@28-172.24.4.5:22-172.24.4.1:52256.service - OpenSSH per-connection server daemon (172.24.4.1:52256). Jul 10 08:12:48.710901 containerd[1541]: time="2025-07-10T08:12:48.710552098Z" level=warning msg="container event discarded" container=2a22272d74760cbc68cd179fc508e6793ccff39ef4df2648d8c546bfa9838025 type=CONTAINER_CREATED_EVENT Jul 10 08:12:48.710901 containerd[1541]: time="2025-07-10T08:12:48.710822319Z" level=warning msg="container event discarded" container=2a22272d74760cbc68cd179fc508e6793ccff39ef4df2648d8c546bfa9838025 type=CONTAINER_STARTED_EVENT Jul 10 08:12:48.876308 containerd[1541]: time="2025-07-10T08:12:48.876153773Z" level=warning msg="container event discarded" container=02f7c23f03074de767b4724d1ca7768567ce018164f4656508a181860f280c8b type=CONTAINER_CREATED_EVENT Jul 10 08:12:48.876308 containerd[1541]: time="2025-07-10T08:12:48.876266356Z" level=warning msg="container event discarded" container=02f7c23f03074de767b4724d1ca7768567ce018164f4656508a181860f280c8b type=CONTAINER_STARTED_EVENT Jul 10 08:12:50.725628 containerd[1541]: time="2025-07-10T08:12:50.725462038Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6\" id:\"797ad754a4babfeebbfcb488e9e6b2257ca38ac952322106fde3ba35d6e33a94\" pid:7422 exit_status:1 exited_at:{seconds:1752135170 nanos:725039739}" Jul 10 08:13:18.376727 kubelet[2824]: E0710 08:12:52.251722 2824 controller.go:195] "Failed to update lease" err="etcdserver: request timed out" Jul 10 08:13:18.376727 kubelet[2824]: E0710 08:13:00.414575 2824 event.go:359] "Server 
rejected event (will not retry!)" err="etcdserver: request timed out" event=< Jul 10 08:13:18.376727 kubelet[2824]: &Event{ObjectMeta:{calico-kube-controllers-6cd68b8fff-mshq4.1850d5a791c78e36 calico-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:calico-kube-controllers-6cd68b8fff-mshq4,UID:ebfcfa0b-3df6-4671-b7ec-2f40d76fc497,APIVersion:v1,ResourceVersion:826,FieldPath:spec.containers{calico-kube-controllers},},Reason:Unhealthy,Message:Readiness probe failed: Error verifying datastore: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": context deadline exceeded Jul 10 08:13:18.376727 kubelet[2824]: ,Source:EventSource{Component:kubelet,Host:ci-4391-0-0-n-29a01ddc69.novalocal,},FirstTimestamp:2025-07-10 08:12:50.729152054 +0000 UTC m=+325.580437458,LastTimestamp:2025-07-10 08:12:50.729152054 +0000 UTC m=+325.580437458,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4391-0-0-n-29a01ddc69.novalocal,} Jul 10 08:13:18.376727 kubelet[2824]: > Jul 10 08:13:18.376727 kubelet[2824]: E0710 08:13:00.424397 2824 controller.go:195] "Failed to update lease" err="etcdserver: request timed out" Jul 10 08:13:18.376727 kubelet[2824]: E0710 08:13:07.456796 2824 controller.go:195] "Failed to update lease" err="etcdserver: request timed out" Jul 10 08:12:54.184722 systemd[1]: cri-containerd-fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395.scope: Deactivated successfully. 
Jul 10 08:13:18.393196 containerd[1541]: time="2025-07-10T08:12:52.950584512Z" level=warning msg="container event discarded" container=d843e8521fabdd899e62655568333302a8bfc6366acc44ea108f45e513048de3 type=CONTAINER_CREATED_EVENT Jul 10 08:13:18.393196 containerd[1541]: time="2025-07-10T08:12:53.097347633Z" level=warning msg="container event discarded" container=d843e8521fabdd899e62655568333302a8bfc6366acc44ea108f45e513048de3 type=CONTAINER_STARTED_EVENT Jul 10 08:13:18.393196 containerd[1541]: time="2025-07-10T08:12:54.194133660Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395\" id:\"fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395\" pid:7099 exit_status:1 exited_at:{seconds:1752135174 nanos:190134167}" Jul 10 08:13:18.393196 containerd[1541]: time="2025-07-10T08:12:54.195446502Z" level=info msg="received exit event container_id:\"fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395\" id:\"fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395\" pid:7099 exit_status:1 exited_at:{seconds:1752135174 nanos:190134167}" Jul 10 08:13:18.393196 containerd[1541]: time="2025-07-10T08:12:55.200459896Z" level=warning msg="container event discarded" container=bcb6f93767ed07459bf8da028a9f6b9002bc10b19faffae3d9974727a2a8d7ba type=CONTAINER_CREATED_EVENT Jul 10 08:13:18.393196 containerd[1541]: time="2025-07-10T08:12:55.329851140Z" level=warning msg="container event discarded" container=bcb6f93767ed07459bf8da028a9f6b9002bc10b19faffae3d9974727a2a8d7ba type=CONTAINER_STARTED_EVENT Jul 10 08:13:18.393196 containerd[1541]: time="2025-07-10T08:12:55.869897026Z" level=warning msg="container event discarded" container=bcb6f93767ed07459bf8da028a9f6b9002bc10b19faffae3d9974727a2a8d7ba type=CONTAINER_STOPPED_EVENT Jul 10 08:13:18.393196 containerd[1541]: time="2025-07-10T08:13:02.766474982Z" level=warning msg="container event discarded" 
container=31c6c29fc51f2a5e40281820148e5f027eb341aa7f5ee6610cb7ea8f9872fc25 type=CONTAINER_CREATED_EVENT Jul 10 08:13:18.393196 containerd[1541]: time="2025-07-10T08:13:02.886473072Z" level=warning msg="container event discarded" container=31c6c29fc51f2a5e40281820148e5f027eb341aa7f5ee6610cb7ea8f9872fc25 type=CONTAINER_STARTED_EVENT Jul 10 08:13:18.393196 containerd[1541]: time="2025-07-10T08:13:05.803945317Z" level=warning msg="container event discarded" container=31c6c29fc51f2a5e40281820148e5f027eb341aa7f5ee6610cb7ea8f9872fc25 type=CONTAINER_STOPPED_EVENT Jul 10 08:12:54.185417 systemd[1]: cri-containerd-fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395.scope: Consumed 2.365s CPU time, 51.4M memory peak, 484K read from disk. Jul 10 08:13:20.843533 update_engine[1500]: I20250710 08:13:20.746802 1500 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 10 08:13:20.843533 update_engine[1500]: I20250710 08:13:20.747153 1500 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 10 08:13:20.843533 update_engine[1500]: I20250710 08:13:20.748867 1500 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 10 08:13:20.843533 update_engine[1500]: I20250710 08:13:20.833482 1500 omaha_request_params.cc:62] Current group set to developer Jul 10 08:13:20.843533 update_engine[1500]: I20250710 08:13:20.835299 1500 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 10 08:13:20.843533 update_engine[1500]: I20250710 08:13:20.835340 1500 update_attempter.cc:643] Scheduling an action processor start. 
Jul 10 08:13:20.843533 update_engine[1500]: I20250710 08:13:20.835397 1500 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 10 08:13:20.848878 containerd[1541]: time="2025-07-10T08:13:18.444547590Z" level=info msg="received exit event container_id:\"4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee\" id:\"4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee\" pid:6883 exit_status:1 exited_at:{seconds:1752135198 nanos:426913071}" Jul 10 08:13:20.848878 containerd[1541]: time="2025-07-10T08:13:18.446449321Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee\" id:\"4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee\" pid:6883 exit_status:1 exited_at:{seconds:1752135198 nanos:426913071}" Jul 10 08:13:20.848878 containerd[1541]: time="2025-07-10T08:13:18.447812271Z" level=error msg="failed to handle container TaskExit event container_id:\"fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395\" id:\"fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395\" pid:7099 exit_status:1 exited_at:{seconds:1752135174 nanos:190134167}" error="failed to stop container: failed to delete task: context deadline exceeded" Jul 10 08:13:20.848878 containerd[1541]: time="2025-07-10T08:13:18.662410772Z" level=warning msg="container event discarded" container=88f1ac5ff1454222c08bd22caa0f4fc8adf44e93c2488aa119f523dcbab21310 type=CONTAINER_CREATED_EVENT Jul 10 08:13:20.848878 containerd[1541]: time="2025-07-10T08:13:18.807538194Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8\" id:\"4efe07027ff324f6ef6b93ec1815de089a7c6c858e324e7f9637952b51f37b83\" pid:7516 exited_at:{seconds:1752135198 nanos:806685819}" Jul 10 08:13:20.848878 containerd[1541]: time="2025-07-10T08:13:18.823163019Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"88f1ac5ff1454222c08bd22caa0f4fc8adf44e93c2488aa119f523dcbab21310\" id:\"a206d836f80647f15dc29119743e26e14be878687cb1e7550fe9209cda55f191\" pid:7495 exited_at:{seconds:1752135198 nanos:822649255}" Jul 10 08:13:20.848878 containerd[1541]: time="2025-07-10T08:13:18.845535780Z" level=warning msg="container event discarded" container=88f1ac5ff1454222c08bd22caa0f4fc8adf44e93c2488aa119f523dcbab21310 type=CONTAINER_STARTED_EVENT Jul 10 08:13:20.848878 containerd[1541]: time="2025-07-10T08:13:19.598677466Z" level=info msg="TaskExit event container_id:\"fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395\" id:\"fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395\" pid:7099 exit_status:1 exited_at:{seconds:1752135174 nanos:190134167}" Jul 10 08:13:20.848878 containerd[1541]: time="2025-07-10T08:13:20.841462514Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6\" id:\"bf14a04679e8abc2aef86b5d0337051bf466e4fa03f4b60fb9050a73bbf8040d\" pid:7552 exit_status:1 exited_at:{seconds:1752135200 nanos:836507881}" Jul 10 08:13:20.848878 containerd[1541]: time="2025-07-10T08:13:20.847586600Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6\" id:\"8faed1055a4686e41d7ad40050508162c503e3ab464f1147c50c29cd7c382a14\" pid:7514 exit_status:1 exited_at:{seconds:1752135200 nanos:844771499}" Jul 10 08:13:20.851812 kubelet[2824]: E0710 08:13:18.463997 2824 controller.go:195] "Failed to update lease" err="etcdserver: request timed out" Jul 10 08:13:18.283103 systemd[1]: cri-containerd-4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee.scope: Deactivated successfully. 
Jul 10 08:13:29.974272 containerd[1541]: time="2025-07-10T08:13:21.239708940Z" level=warning msg="container event discarded" container=7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658 type=CONTAINER_CREATED_EVENT Jul 10 08:13:29.974272 containerd[1541]: time="2025-07-10T08:13:21.239857661Z" level=warning msg="container event discarded" container=7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658 type=CONTAINER_STARTED_EVENT Jul 10 08:13:29.974272 containerd[1541]: time="2025-07-10T08:13:21.601142878Z" level=error msg="get state for fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395" error="context deadline exceeded" Jul 10 08:13:29.974272 containerd[1541]: time="2025-07-10T08:13:21.601335392Z" level=warning msg="unknown status" status=0 Jul 10 08:13:29.974272 containerd[1541]: time="2025-07-10T08:13:21.893773313Z" level=warning msg="container event discarded" container=84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263 type=CONTAINER_CREATED_EVENT Jul 10 08:13:29.974272 containerd[1541]: time="2025-07-10T08:13:21.894060196Z" level=warning msg="container event discarded" container=84c426796f9d1836497832443fa7afc6fbea6bf028598b29ed00e7012b2f7263 type=CONTAINER_STARTED_EVENT Jul 10 08:13:29.974272 containerd[1541]: time="2025-07-10T08:13:23.376751616Z" level=warning msg="container event discarded" container=1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff type=CONTAINER_CREATED_EVENT Jul 10 08:13:29.974272 containerd[1541]: time="2025-07-10T08:13:23.376904145Z" level=warning msg="container event discarded" container=1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff type=CONTAINER_STARTED_EVENT Jul 10 08:13:29.974272 containerd[1541]: time="2025-07-10T08:13:23.424498552Z" level=warning msg="container event discarded" container=7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08 type=CONTAINER_CREATED_EVENT Jul 10 08:13:29.974272 containerd[1541]: 
time="2025-07-10T08:13:23.424610975Z" level=warning msg="container event discarded" container=7007000743a72831b9da71e5cb77a4d1ee17b09f1b0b3c1e9751c9fee4d2ff08 type=CONTAINER_STARTED_EVENT Jul 10 08:13:29.974272 containerd[1541]: time="2025-07-10T08:13:23.467012999Z" level=warning msg="container event discarded" container=c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50 type=CONTAINER_CREATED_EVENT Jul 10 08:13:29.974272 containerd[1541]: time="2025-07-10T08:13:23.467132977Z" level=warning msg="container event discarded" container=c0410318bb10e5c7f5897db1a2672f766b8ae9e37c8cf3578556449e6a5ffd50 type=CONTAINER_STARTED_EVENT Jul 10 08:13:29.974272 containerd[1541]: time="2025-07-10T08:13:23.605165531Z" level=error msg="get state for fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395" error="context deadline exceeded" Jul 10 08:13:29.974272 containerd[1541]: time="2025-07-10T08:13:23.605232167Z" level=warning msg="unknown status" status=0 Jul 10 08:13:29.974272 containerd[1541]: time="2025-07-10T08:13:25.609696310Z" level=error msg="get state for fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395" error="context deadline exceeded" Jul 10 08:13:29.974272 containerd[1541]: time="2025-07-10T08:13:25.609800949Z" level=warning msg="unknown status" status=0 Jul 10 08:13:29.974272 containerd[1541]: time="2025-07-10T08:13:27.524830711Z" level=warning msg="container event discarded" container=b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb type=CONTAINER_CREATED_EVENT Jul 10 08:13:29.974272 containerd[1541]: time="2025-07-10T08:13:27.656144261Z" level=warning msg="container event discarded" container=b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb type=CONTAINER_STARTED_EVENT Jul 10 08:13:29.974272 containerd[1541]: time="2025-07-10T08:13:28.445503151Z" level=error msg="failed to handle container TaskExit event container_id:\"4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee\" 
id:\"4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee\" pid:6883 exit_status:1 exited_at:{seconds:1752135198 nanos:426913071}" error="failed to stop container: failed to delete task: context deadline exceeded" Jul 10 08:13:29.974272 containerd[1541]: time="2025-07-10T08:13:29.351281079Z" level=warning msg="container event discarded" container=137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308 type=CONTAINER_CREATED_EVENT Jul 10 08:13:29.974272 containerd[1541]: time="2025-07-10T08:13:29.351382852Z" level=warning msg="container event discarded" container=137c7d9f91f4efc85f20dcce2493bb909dee5f451b82900c015c58f749d33308 type=CONTAINER_STARTED_EVENT Jul 10 08:13:29.974272 containerd[1541]: time="2025-07-10T08:13:29.434916472Z" level=warning msg="container event discarded" container=ab2381d3350dd9bb9145b26c5bddf3b772fe3b1d6fbc585e92f2efc4f8c36ff8 type=CONTAINER_CREATED_EVENT Jul 10 08:13:29.974272 containerd[1541]: time="2025-07-10T08:13:29.581607954Z" level=warning msg="container event discarded" container=ab2381d3350dd9bb9145b26c5bddf3b772fe3b1d6fbc585e92f2efc4f8c36ff8 type=CONTAINER_STARTED_EVENT Jul 10 08:13:29.974272 containerd[1541]: time="2025-07-10T08:13:29.598880964Z" level=error msg="Failed to handle backOff event container_id:\"fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395\" id:\"fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395\" pid:7099 exit_status:1 exited_at:{seconds:1752135174 nanos:190134167} for fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Jul 10 08:13:29.974272 containerd[1541]: time="2025-07-10T08:13:29.599150406Z" level=info msg="TaskExit event container_id:\"4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee\" id:\"4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee\" pid:6883 exit_status:1 
exited_at:{seconds:1752135198 nanos:426913071}" Jul 10 08:13:29.984705 sshd[7406]: Accepted publickey for core from 172.24.4.1 port 52256 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM Jul 10 08:13:29.986260 locksmithd[1536]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 10 08:13:29.987146 update_engine[1500]: I20250710 08:13:29.637440 1500 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 10 08:13:29.987146 update_engine[1500]: I20250710 08:13:29.637943 1500 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 10 08:13:29.987146 update_engine[1500]: I20250710 08:13:29.638826 1500 omaha_request_action.cc:272] Request: Jul 10 08:13:29.987146 update_engine[1500]: Jul 10 08:13:29.987146 update_engine[1500]: Jul 10 08:13:29.987146 update_engine[1500]: Jul 10 08:13:29.987146 update_engine[1500]: Jul 10 08:13:29.987146 update_engine[1500]: Jul 10 08:13:29.987146 update_engine[1500]: Jul 10 08:13:29.987146 update_engine[1500]: Jul 10 08:13:29.987146 update_engine[1500]: Jul 10 08:13:29.987146 update_engine[1500]: I20250710 08:13:29.638882 1500 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 10 08:13:29.987146 update_engine[1500]: I20250710 08:13:29.848347 1500 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 10 08:13:29.987146 update_engine[1500]: I20250710 08:13:29.850916 1500 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 10 08:13:29.987146 update_engine[1500]: E20250710 08:13:29.856374 1500 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 10 08:13:29.987146 update_engine[1500]: I20250710 08:13:29.856587 1500 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 10 08:13:29.989871 kubelet[2824]: E0710 08:13:25.475604 2824 controller.go:195] "Failed to update lease" err="etcdserver: request timed out" Jul 10 08:13:29.989871 kubelet[2824]: I0710 08:13:25.475755 2824 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jul 10 08:13:18.286737 systemd[1]: cri-containerd-4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee.scope: Consumed 2.727s CPU time, 21.5M memory peak, 856K read from disk. Jul 10 08:13:29.977678 sshd-session[7406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 08:13:18.489187 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395-rootfs.mount: Deactivated successfully. Jul 10 08:13:18.670742 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee-rootfs.mount: Deactivated successfully. Jul 10 08:13:30.005234 systemd-logind[1499]: New session 31 of user core. Jul 10 08:13:30.016457 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jul 10 08:13:30.481179 containerd[1541]: time="2025-07-10T08:13:30.314345230Z" level=warning msg="container event discarded" container=9ea2c95dbc7e2567d327e69aedb6a18a9f5c1ad0e0cba82eda2d7698be3a714a type=CONTAINER_CREATED_EVENT Jul 10 08:13:30.602317 containerd[1541]: time="2025-07-10T08:13:30.602167390Z" level=warning msg="container event discarded" container=9ea2c95dbc7e2567d327e69aedb6a18a9f5c1ad0e0cba82eda2d7698be3a714a type=CONTAINER_STARTED_EVENT Jul 10 08:13:30.953444 containerd[1541]: time="2025-07-10T08:13:30.953146224Z" level=warning msg="container event discarded" container=15be60bf3c3642c52feac0de2d763952dce2d60aff1ecac2354fb5e7cb534e24 type=CONTAINER_CREATED_EVENT Jul 10 08:13:38.680630 containerd[1541]: time="2025-07-10T08:13:31.050370119Z" level=warning msg="container event discarded" container=b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd type=CONTAINER_CREATED_EVENT Jul 10 08:13:38.680630 containerd[1541]: time="2025-07-10T08:13:31.050441113Z" level=warning msg="container event discarded" container=b98201c0db8043cf23b17d45dcc0380cb6da782da197223ebfbdbb12e241eefd type=CONTAINER_STARTED_EVENT Jul 10 08:13:38.680630 containerd[1541]: time="2025-07-10T08:13:31.094739976Z" level=warning msg="container event discarded" container=0ee1f6ebfdf6773d060fce0b44b2660a2ba4b626a0b2d344455fdd8ac59539ea type=CONTAINER_CREATED_EVENT Jul 10 08:13:38.680630 containerd[1541]: time="2025-07-10T08:13:31.107016503Z" level=warning msg="container event discarded" container=15be60bf3c3642c52feac0de2d763952dce2d60aff1ecac2354fb5e7cb534e24 type=CONTAINER_STARTED_EVENT Jul 10 08:13:38.680630 containerd[1541]: time="2025-07-10T08:13:31.281465142Z" level=warning msg="container event discarded" container=0ee1f6ebfdf6773d060fce0b44b2660a2ba4b626a0b2d344455fdd8ac59539ea type=CONTAINER_STARTED_EVENT Jul 10 08:13:38.680630 containerd[1541]: time="2025-07-10T08:13:31.390931302Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8\" id:\"f3996bbfe450137d646994ee4f6036241013929964346a9e9b2eb846d8a4974a\" pid:7585 exited_at:{seconds:1752135211 nanos:390021749}" Jul 10 08:13:38.680630 containerd[1541]: time="2025-07-10T08:13:31.513161693Z" level=warning msg="container event discarded" container=1edf7e81d0cae5c07b5139995ef7f9ad1f3561290ed87996d536894eca17b736 type=CONTAINER_CREATED_EVENT Jul 10 08:13:38.680630 containerd[1541]: time="2025-07-10T08:13:31.600536833Z" level=error msg="get state for 4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee" error="context deadline exceeded" Jul 10 08:13:38.680630 containerd[1541]: time="2025-07-10T08:13:31.600617175Z" level=warning msg="unknown status" status=0 Jul 10 08:13:38.680630 containerd[1541]: time="2025-07-10T08:13:31.759748354Z" level=warning msg="container event discarded" container=1edf7e81d0cae5c07b5139995ef7f9ad1f3561290ed87996d536894eca17b736 type=CONTAINER_STARTED_EVENT Jul 10 08:13:38.680630 containerd[1541]: time="2025-07-10T08:13:32.083515958Z" level=warning msg="container event discarded" container=c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7 type=CONTAINER_CREATED_EVENT Jul 10 08:13:38.680630 containerd[1541]: time="2025-07-10T08:13:32.083631827Z" level=warning msg="container event discarded" container=c511f5724ea574a65642ad83358f8164a8404c9094121c5af8b34af8479073c7 type=CONTAINER_STARTED_EVENT Jul 10 08:13:38.680630 containerd[1541]: time="2025-07-10T08:13:33.252920439Z" level=warning msg="container event discarded" container=d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1 type=CONTAINER_CREATED_EVENT Jul 10 08:13:38.680630 containerd[1541]: time="2025-07-10T08:13:33.253065103Z" level=warning msg="container event discarded" container=d4b1172399477b0c9a116b4aa95884682b7ceaff47000338488984b368dfdda1 type=CONTAINER_STARTED_EVENT Jul 10 08:13:38.680630 containerd[1541]: time="2025-07-10T08:13:33.604856280Z" 
level=error msg="get state for 4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee" error="context deadline exceeded" Jul 10 08:13:38.680630 containerd[1541]: time="2025-07-10T08:13:33.604922395Z" level=warning msg="unknown status" status=0 Jul 10 08:13:38.680630 containerd[1541]: time="2025-07-10T08:13:34.589663366Z" level=warning msg="container event discarded" container=3891b9a5f162dc519b528ba2388a7c42d9a4d88e17748b9239b8ab89d9122a72 type=CONTAINER_CREATED_EVENT Jul 10 08:13:38.680630 containerd[1541]: time="2025-07-10T08:13:34.812218487Z" level=warning msg="container event discarded" container=3891b9a5f162dc519b528ba2388a7c42d9a4d88e17748b9239b8ab89d9122a72 type=CONTAINER_STARTED_EVENT Jul 10 08:13:38.680630 containerd[1541]: time="2025-07-10T08:13:35.609524124Z" level=error msg="get state for 4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee" error="context deadline exceeded" Jul 10 08:13:38.680630 containerd[1541]: time="2025-07-10T08:13:35.609608805Z" level=warning msg="unknown status" status=0 Jul 10 08:13:38.680630 containerd[1541]: time="2025-07-10T08:13:35.924990553Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88f1ac5ff1454222c08bd22caa0f4fc8adf44e93c2488aa119f523dcbab21310\" id:\"76589d56e447eac720a1bf500dfc7a27babac8cad6aea910e796b232a7641870\" pid:7610 exited_at:{seconds:1752135215 nanos:924451141}" Jul 10 08:13:38.684036 kubelet[2824]: E0710 08:13:34.476167 2824 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal.1850d57073459871 kube-system 1075 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-ci-4391-0-0-n-29a01ddc69.novalocal,UID:38962031e0206f3ff0de22fa27483fe0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused,Source:EventSource{Component:kubelet,Host:ci-4391-0-0-n-29a01ddc69.novalocal,},FirstTimestamp:2025-07-10 08:08:53 +0000 UTC,LastTimestamp:2025-07-10 08:13:00.461759569 +0000 UTC m=+335.313044973,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4391-0-0-n-29a01ddc69.novalocal,}"
Jul 10 08:13:38.684036 kubelet[2824]: E0710 08:13:37.445027 2824 controller.go:195] "Failed to update lease" err="etcdserver: request timed out"
Jul 10 08:13:38.684036 kubelet[2824]: I0710 08:13:37.466106 2824 status_manager.go:914] "Failed to update status for pod" pod="kube-system/kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f702a133-e30e-41d6-b5d1-eff670a3bc7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-07-10T08:12:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-07-10T08:12:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"containerd://212c04093bf77fd10374a61fe14da2678d3192e48d7242d1c30fe4f9483256d2\\\",\\\"image\\\":\\\"registry.k8s.io/kube-apiserver:v1.32.6\\\",\\\"imageID\\\":\\\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-07-10T08:07:18Z\\\"}}}]}}\" for pod \"kube-system\"/\"kube-apiserver-ci-4391-0-0-n-29a01ddc69.novalocal\": etcdserver: request timed out"
Jul 10 08:13:39.100578 kubelet[2824]: E0710 08:13:39.100252 2824 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-4391-0-0-n-29a01ddc69.novalocal\": the object has been modified; please apply your changes to the latest version and try again"
Jul 10 08:13:39.586636 containerd[1541]: time="2025-07-10T08:13:39.586057286Z" level=error msg="ttrpc: received message on inactive stream" stream=49
Jul 10 08:13:39.587822 containerd[1541]: time="2025-07-10T08:13:39.587530848Z" level=error msg="ttrpc: received message on inactive stream" stream=53
Jul 10 08:13:39.588224 containerd[1541]: time="2025-07-10T08:13:39.588181320Z" level=error msg="ttrpc: received message on inactive stream" stream=55
Jul 10 08:13:39.588495 containerd[1541]: time="2025-07-10T08:13:39.588427798Z" level=error msg="ttrpc: received message on inactive stream" stream=57
Jul 10 08:13:39.589236 containerd[1541]: time="2025-07-10T08:13:39.588999000Z" level=error msg="ttrpc: received message on inactive stream" stream=45
Jul 10 08:13:39.589588 containerd[1541]: time="2025-07-10T08:13:39.589519206Z" level=error msg="ttrpc: received message on inactive stream" stream=47
Jul 10 08:13:39.591185 containerd[1541]: time="2025-07-10T08:13:39.591133475Z" level=error msg="ttrpc: received message on inactive stream" stream=49
Jul 10 08:13:39.591456 containerd[1541]: time="2025-07-10T08:13:39.591371986Z" level=error msg="ttrpc: received message on inactive stream" stream=41
Jul 10 08:13:39.593796 containerd[1541]: time="2025-07-10T08:13:39.593721930Z" level=error msg="ttrpc: received message on inactive stream" stream=51
Jul 10 08:13:39.595820 containerd[1541]: time="2025-07-10T08:13:39.595740966Z" level=info msg="Ensure that container 4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee in task-service has been cleanup successfully"
Jul 10 08:13:39.636410 containerd[1541]: time="2025-07-10T08:13:39.636276892Z" level=info msg="TaskExit event container_id:\"fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395\" id:\"fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395\" pid:7099 exit_status:1 exited_at:{seconds:1752135174 nanos:190134167}"
Jul 10 08:13:39.709030 sshd[7563]: Connection closed by 172.24.4.1 port 52256
Jul 10 08:13:39.709945 sshd-session[7406]: pam_unix(sshd:session): session closed for user core
Jul 10 08:13:39.716379 systemd-logind[1499]: Session 31 logged out. Waiting for processes to exit.
Jul 10 08:13:39.717484 systemd[1]: sshd@28-172.24.4.5:22-172.24.4.1:52256.service: Deactivated successfully.
Jul 10 08:13:39.724359 systemd[1]: session-31.scope: Deactivated successfully.
Jul 10 08:13:39.730012 systemd-logind[1499]: Removed session 31.
Jul 10 08:13:39.743134 update_engine[1500]: I20250710 08:13:39.743018 1500 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 10 08:13:39.743913 update_engine[1500]: I20250710 08:13:39.743294 1500 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 10 08:13:39.743913 update_engine[1500]: I20250710 08:13:39.743597 1500 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 10 08:13:39.749026 update_engine[1500]: E20250710 08:13:39.748972 1500 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 10 08:13:39.749171 update_engine[1500]: I20250710 08:13:39.749046 1500 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jul 10 08:13:39.957621 kubelet[2824]: I0710 08:13:39.957521 2824 scope.go:117] "RemoveContainer" containerID="acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5"
Jul 10 08:13:39.960203 kubelet[2824]: I0710 08:13:39.959065 2824 scope.go:117] "RemoveContainer" containerID="4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee"
Jul 10 08:13:39.960203 kubelet[2824]: E0710 08:13:39.959502 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal_kube-system(8e6a146caca41331ef6aa6523967fb66)\"" pod="kube-system/kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal" podUID="8e6a146caca41331ef6aa6523967fb66"
Jul 10 08:13:39.980803 kubelet[2824]: I0710 08:13:39.980726 2824 scope.go:117] "RemoveContainer" containerID="fef813dd85ed9fe586660e8ae962e7b6eb8d4f080de76f289c67b2cc81aaa395"
Jul 10 08:13:39.982803 containerd[1541]: time="2025-07-10T08:13:39.982579927Z" level=info msg="RemoveContainer for \"acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5\""
Jul 10 08:13:40.001421 containerd[1541]: time="2025-07-10T08:13:40.001324087Z" level=info msg="CreateContainer within sandbox \"f42b79f77704a99f978aaf4a6f08c28ca92ac2d532a0ba009eb686dcd899def2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:4,}"
Jul 10 08:13:40.058683 containerd[1541]: time="2025-07-10T08:13:40.058619877Z" level=info msg="RemoveContainer for \"acc8c7c172b5c19788b4b7aea6063da3d0999c768562a7a13115e6f5f3a53cd5\" returns successfully"
Jul 10 08:13:40.059763 kubelet[2824]: I0710 08:13:40.059675 2824 scope.go:117] "RemoveContainer" containerID="677459c9c574d506cf08b13c9cf0a9273a4b94b9ec7764339e8b2f0beb6c387c"
Jul 10 08:13:40.064938 containerd[1541]: time="2025-07-10T08:13:40.064797664Z" level=info msg="RemoveContainer for \"677459c9c574d506cf08b13c9cf0a9273a4b94b9ec7764339e8b2f0beb6c387c\""
Jul 10 08:13:40.185822 containerd[1541]: time="2025-07-10T08:13:40.185541852Z" level=info msg="Container c301c1b348ba8c20f929dc261366955e117b54f4160f07afab07ab77b73114b6: CDI devices from CRI Config.CDIDevices: []"
Jul 10 08:13:40.231887 containerd[1541]: time="2025-07-10T08:13:40.231495618Z" level=info msg="RemoveContainer for \"677459c9c574d506cf08b13c9cf0a9273a4b94b9ec7764339e8b2f0beb6c387c\" returns successfully"
Jul 10 08:13:40.387367 containerd[1541]: time="2025-07-10T08:13:40.387244221Z" level=info msg="CreateContainer within sandbox \"f42b79f77704a99f978aaf4a6f08c28ca92ac2d532a0ba009eb686dcd899def2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:4,} returns container id \"c301c1b348ba8c20f929dc261366955e117b54f4160f07afab07ab77b73114b6\""
Jul 10 08:13:40.388980 containerd[1541]: time="2025-07-10T08:13:40.388817603Z" level=info msg="StartContainer for \"c301c1b348ba8c20f929dc261366955e117b54f4160f07afab07ab77b73114b6\""
Jul 10 08:13:40.393387 containerd[1541]: time="2025-07-10T08:13:40.393166616Z" level=info msg="connecting to shim c301c1b348ba8c20f929dc261366955e117b54f4160f07afab07ab77b73114b6" address="unix:///run/containerd/s/43474bc45c8b9396187ac29754ef5b498c52f78fc73669c2a63aa40d005548c4" protocol=ttrpc version=3
Jul 10 08:13:40.436206 systemd[1]: Started cri-containerd-c301c1b348ba8c20f929dc261366955e117b54f4160f07afab07ab77b73114b6.scope - libcontainer container c301c1b348ba8c20f929dc261366955e117b54f4160f07afab07ab77b73114b6.
Jul 10 08:13:40.600524 containerd[1541]: time="2025-07-10T08:13:40.600282433Z" level=info msg="StartContainer for \"c301c1b348ba8c20f929dc261366955e117b54f4160f07afab07ab77b73114b6\" returns successfully"
Jul 10 08:13:40.990208 kubelet[2824]: I0710 08:13:40.989542 2824 scope.go:117] "RemoveContainer" containerID="4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee"
Jul 10 08:13:40.992592 kubelet[2824]: E0710 08:13:40.992550 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal_kube-system(8e6a146caca41331ef6aa6523967fb66)\"" pod="kube-system/kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal" podUID="8e6a146caca41331ef6aa6523967fb66"
Jul 10 08:13:44.765709 systemd[1]: Started sshd@29-172.24.4.5:22-172.24.4.1:40282.service - OpenSSH per-connection server daemon (172.24.4.1:40282).
Jul 10 08:13:45.798041 containerd[1541]: time="2025-07-10T08:13:45.797422191Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8\" id:\"39145da1fdb6a7bc00a8500d7f3c212353d256e95090ce15ac534115e2a75129\" pid:7690 exited_at:{seconds:1752135225 nanos:795378107}"
Jul 10 08:13:46.798698 sshd[7675]: Accepted publickey for core from 172.24.4.1 port 40282 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM
Jul 10 08:13:46.803904 sshd-session[7675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 08:13:46.837101 systemd-logind[1499]: New session 32 of user core.
Jul 10 08:13:46.843311 systemd[1]: Started session-32.scope - Session 32 of User core.
Jul 10 08:13:47.662219 sshd[7700]: Connection closed by 172.24.4.1 port 40282
Jul 10 08:13:47.663478 sshd-session[7675]: pam_unix(sshd:session): session closed for user core
Jul 10 08:13:47.672441 systemd[1]: sshd@29-172.24.4.5:22-172.24.4.1:40282.service: Deactivated successfully.
Jul 10 08:13:47.672638 systemd-logind[1499]: Session 32 logged out. Waiting for processes to exit.
Jul 10 08:13:47.678166 systemd[1]: session-32.scope: Deactivated successfully.
Jul 10 08:13:47.681869 systemd-logind[1499]: Removed session 32.
Jul 10 08:13:49.746151 update_engine[1500]: I20250710 08:13:49.745131 1500 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 10 08:13:49.746151 update_engine[1500]: I20250710 08:13:49.745899 1500 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 10 08:13:49.749774 update_engine[1500]: I20250710 08:13:49.749588 1500 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 10 08:13:49.754988 update_engine[1500]: E20250710 08:13:49.754894 1500 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 10 08:13:49.755292 update_engine[1500]: I20250710 08:13:49.755266 1500 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jul 10 08:13:50.706667 containerd[1541]: time="2025-07-10T08:13:50.706211560Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6\" id:\"e4331d6a4c719c854fbafd14ed8f564a6ca95e63b716640bd40fc74e480bdf1b\" pid:7724 exited_at:{seconds:1752135230 nanos:705715268}"
Jul 10 08:13:52.686518 systemd[1]: Started sshd@30-172.24.4.5:22-172.24.4.1:40290.service - OpenSSH per-connection server daemon (172.24.4.1:40290).
Jul 10 08:13:54.045986 sshd[7735]: Accepted publickey for core from 172.24.4.1 port 40290 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM
Jul 10 08:13:54.048630 sshd-session[7735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 08:13:54.064243 systemd-logind[1499]: New session 33 of user core.
Jul 10 08:13:54.068203 systemd[1]: Started session-33.scope - Session 33 of User core.
Jul 10 08:13:54.909209 sshd[7738]: Connection closed by 172.24.4.1 port 40290
Jul 10 08:13:54.909084 sshd-session[7735]: pam_unix(sshd:session): session closed for user core
Jul 10 08:13:54.917424 systemd-logind[1499]: Session 33 logged out. Waiting for processes to exit.
Jul 10 08:13:54.918295 systemd[1]: sshd@30-172.24.4.5:22-172.24.4.1:40290.service: Deactivated successfully.
Jul 10 08:13:54.925274 systemd[1]: session-33.scope: Deactivated successfully.
Jul 10 08:13:54.928600 systemd-logind[1499]: Removed session 33.
Jul 10 08:13:56.410405 kubelet[2824]: I0710 08:13:56.410310 2824 scope.go:117] "RemoveContainer" containerID="4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee"
Jul 10 08:13:56.412902 kubelet[2824]: E0710 08:13:56.412157 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal_kube-system(8e6a146caca41331ef6aa6523967fb66)\"" pod="kube-system/kube-scheduler-ci-4391-0-0-n-29a01ddc69.novalocal" podUID="8e6a146caca41331ef6aa6523967fb66"
Jul 10 08:13:59.749104 update_engine[1500]: I20250710 08:13:59.747357 1500 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 10 08:13:59.749104 update_engine[1500]: I20250710 08:13:59.748239 1500 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 10 08:13:59.749104 update_engine[1500]: I20250710 08:13:59.748804 1500 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 10 08:13:59.755557 update_engine[1500]: E20250710 08:13:59.753931 1500 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 10 08:13:59.755557 update_engine[1500]: I20250710 08:13:59.754074 1500 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 10 08:13:59.755557 update_engine[1500]: I20250710 08:13:59.754096 1500 omaha_request_action.cc:617] Omaha request response:
Jul 10 08:13:59.755557 update_engine[1500]: E20250710 08:13:59.754405 1500 omaha_request_action.cc:636] Omaha request network transfer failed.
Jul 10 08:13:59.755557 update_engine[1500]: I20250710 08:13:59.754678 1500 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jul 10 08:13:59.755557 update_engine[1500]: I20250710 08:13:59.754686 1500 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 10 08:13:59.755557 update_engine[1500]: I20250710 08:13:59.754695 1500 update_attempter.cc:306] Processing Done.
Jul 10 08:13:59.755557 update_engine[1500]: E20250710 08:13:59.754755 1500 update_attempter.cc:619] Update failed.
Jul 10 08:13:59.755557 update_engine[1500]: I20250710 08:13:59.754768 1500 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jul 10 08:13:59.755557 update_engine[1500]: I20250710 08:13:59.754779 1500 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jul 10 08:13:59.755557 update_engine[1500]: I20250710 08:13:59.754786 1500 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jul 10 08:13:59.755557 update_engine[1500]: I20250710 08:13:59.755081 1500 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 10 08:13:59.755557 update_engine[1500]: I20250710 08:13:59.755159 1500 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 10 08:13:59.755557 update_engine[1500]: I20250710 08:13:59.755167 1500 omaha_request_action.cc:272] Request:
Jul 10 08:13:59.755557 update_engine[1500]:
Jul 10 08:13:59.755557 update_engine[1500]:
Jul 10 08:13:59.756183 update_engine[1500]:
Jul 10 08:13:59.756183 update_engine[1500]:
Jul 10 08:13:59.756183 update_engine[1500]:
Jul 10 08:13:59.756183 update_engine[1500]:
Jul 10 08:13:59.756183 update_engine[1500]: I20250710 08:13:59.755179 1500 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 10 08:13:59.756183 update_engine[1500]: I20250710 08:13:59.755326 1500 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 10 08:13:59.756183 update_engine[1500]: I20250710 08:13:59.755520 1500 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 10 08:13:59.760068 locksmithd[1536]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jul 10 08:13:59.761302 update_engine[1500]: E20250710 08:13:59.760924 1500 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 10 08:13:59.761302 update_engine[1500]: I20250710 08:13:59.761029 1500 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 10 08:13:59.761302 update_engine[1500]: I20250710 08:13:59.761042 1500 omaha_request_action.cc:617] Omaha request response:
Jul 10 08:13:59.761302 update_engine[1500]: I20250710 08:13:59.761049 1500 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 10 08:13:59.761302 update_engine[1500]: I20250710 08:13:59.761055 1500 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 10 08:13:59.761302 update_engine[1500]: I20250710 08:13:59.761059 1500 update_attempter.cc:306] Processing Done.
Jul 10 08:13:59.761302 update_engine[1500]: I20250710 08:13:59.761066 1500 update_attempter.cc:310] Error event sent.
Jul 10 08:13:59.761302 update_engine[1500]: I20250710 08:13:59.761087 1500 update_check_scheduler.cc:74] Next update check in 40m12s
Jul 10 08:13:59.762873 locksmithd[1536]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jul 10 08:13:59.930260 systemd[1]: Started sshd@31-172.24.4.5:22-172.24.4.1:56260.service - OpenSSH per-connection server daemon (172.24.4.1:56260).
Jul 10 08:14:01.171602 sshd[7750]: Accepted publickey for core from 172.24.4.1 port 56260 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM
Jul 10 08:14:01.175493 sshd-session[7750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 08:14:01.183516 systemd-logind[1499]: New session 34 of user core.
Jul 10 08:14:01.192230 systemd[1]: Started session-34.scope - Session 34 of User core.
Jul 10 08:14:01.906686 sshd[7755]: Connection closed by 172.24.4.1 port 56260
Jul 10 08:14:01.907103 sshd-session[7750]: pam_unix(sshd:session): session closed for user core
Jul 10 08:14:01.914191 systemd-logind[1499]: Session 34 logged out. Waiting for processes to exit.
Jul 10 08:14:01.915911 systemd[1]: sshd@31-172.24.4.5:22-172.24.4.1:56260.service: Deactivated successfully.
Jul 10 08:14:01.922450 systemd[1]: session-34.scope: Deactivated successfully.
Jul 10 08:14:01.924260 systemd-logind[1499]: Removed session 34.
Jul 10 08:14:06.268575 containerd[1541]: time="2025-07-10T08:14:06.268444265Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88f1ac5ff1454222c08bd22caa0f4fc8adf44e93c2488aa119f523dcbab21310\" id:\"26fabe2df4526c0d591a795b4954a3c932398c73abd31118e95e8d979658ecc0\" pid:7786 exited_at:{seconds:1752135246 nanos:267231305}"
Jul 10 08:14:06.922619 systemd[1]: Started sshd@32-172.24.4.5:22-172.24.4.1:45416.service - OpenSSH per-connection server daemon (172.24.4.1:45416).
Jul 10 08:14:07.413699 kubelet[2824]: I0710 08:14:07.413038 2824 scope.go:117] "RemoveContainer" containerID="4a398c88bdcb2deee53d71bc9f66a535719fa25f00f6fc5f13e3c6f142cb17ee"
Jul 10 08:14:07.424976 containerd[1541]: time="2025-07-10T08:14:07.424482208Z" level=info msg="CreateContainer within sandbox \"b3f94042d4bdb0254aafe8abfac01c5b5c963cbb44a5244334eb9404284dd8a2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:4,}"
Jul 10 08:14:07.444301 containerd[1541]: time="2025-07-10T08:14:07.443380767Z" level=info msg="Container 3109b53e36473948955fdd530a7c66e00e6420f5c0f8d3459e3c4af8be98f475: CDI devices from CRI Config.CDIDevices: []"
Jul 10 08:14:07.453934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount479448585.mount: Deactivated successfully.
Jul 10 08:14:07.462827 containerd[1541]: time="2025-07-10T08:14:07.462757213Z" level=info msg="CreateContainer within sandbox \"b3f94042d4bdb0254aafe8abfac01c5b5c963cbb44a5244334eb9404284dd8a2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:4,} returns container id \"3109b53e36473948955fdd530a7c66e00e6420f5c0f8d3459e3c4af8be98f475\""
Jul 10 08:14:07.463903 containerd[1541]: time="2025-07-10T08:14:07.463876636Z" level=info msg="StartContainer for \"3109b53e36473948955fdd530a7c66e00e6420f5c0f8d3459e3c4af8be98f475\""
Jul 10 08:14:07.467476 containerd[1541]: time="2025-07-10T08:14:07.467438861Z" level=info msg="connecting to shim 3109b53e36473948955fdd530a7c66e00e6420f5c0f8d3459e3c4af8be98f475" address="unix:///run/containerd/s/30c72d7507561355487f6ee5d36c7fe4d7d1edc1dc1abfe41203881c95e15e70" protocol=ttrpc version=3
Jul 10 08:14:07.509157 systemd[1]: Started cri-containerd-3109b53e36473948955fdd530a7c66e00e6420f5c0f8d3459e3c4af8be98f475.scope - libcontainer container 3109b53e36473948955fdd530a7c66e00e6420f5c0f8d3459e3c4af8be98f475.
Jul 10 08:14:07.614874 containerd[1541]: time="2025-07-10T08:14:07.614814568Z" level=info msg="StartContainer for \"3109b53e36473948955fdd530a7c66e00e6420f5c0f8d3459e3c4af8be98f475\" returns successfully"
Jul 10 08:14:08.293376 sshd[7799]: Accepted publickey for core from 172.24.4.1 port 45416 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM
Jul 10 08:14:08.297911 sshd-session[7799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 08:14:08.307015 systemd-logind[1499]: New session 35 of user core.
Jul 10 08:14:08.314084 systemd[1]: Started session-35.scope - Session 35 of User core.
Jul 10 08:14:08.771641 containerd[1541]: time="2025-07-10T08:14:08.771504107Z" level=warning msg="container event discarded" container=06cab06a37d3226f0f861b2d180c43e41c6504d1ee53f998a6988460f915fb5c type=CONTAINER_STOPPED_EVENT
Jul 10 08:14:08.771641 containerd[1541]: time="2025-07-10T08:14:08.771621118Z" level=warning msg="container event discarded" container=898485fa8ca3aad154e7c61a92cbbee54884545bcd87d8bf1cb66cb3790f6c31 type=CONTAINER_STOPPED_EVENT
Jul 10 08:14:08.912960 containerd[1541]: time="2025-07-10T08:14:08.912825677Z" level=warning msg="container event discarded" container=493922f99b717e94028fcd0da40621a31228a275865111108ac864198da76e36 type=CONTAINER_STOPPED_EVENT
Jul 10 08:14:09.106277 containerd[1541]: time="2025-07-10T08:14:09.106035305Z" level=warning msg="container event discarded" container=fc2c6b3a5b112aa3230b2179130f5553aaa37b26bd05509fa11bd92c74121df1 type=CONTAINER_CREATED_EVENT
Jul 10 08:14:09.165837 sshd[7832]: Connection closed by 172.24.4.1 port 45416
Jul 10 08:14:09.166383 sshd-session[7799]: pam_unix(sshd:session): session closed for user core
Jul 10 08:14:09.171522 systemd-logind[1499]: Session 35 logged out. Waiting for processes to exit.
Jul 10 08:14:09.175170 systemd[1]: sshd@32-172.24.4.5:22-172.24.4.1:45416.service: Deactivated successfully.
Jul 10 08:14:09.180287 systemd[1]: session-35.scope: Deactivated successfully.
Jul 10 08:14:09.185924 systemd-logind[1499]: Removed session 35.
Jul 10 08:14:09.390670 containerd[1541]: time="2025-07-10T08:14:09.390532424Z" level=warning msg="container event discarded" container=fc2c6b3a5b112aa3230b2179130f5553aaa37b26bd05509fa11bd92c74121df1 type=CONTAINER_STARTED_EVENT
Jul 10 08:14:09.628322 containerd[1541]: time="2025-07-10T08:14:09.628249905Z" level=warning msg="container event discarded" container=d6e4d88eaa32d5e419f4bcf2f7ef79066681308df794e6dd66e3319b794ead02 type=CONTAINER_DELETED_EVENT
Jul 10 08:14:09.893538 containerd[1541]: time="2025-07-10T08:14:09.893445838Z" level=warning msg="container event discarded" container=29c543e6ef704dc3059d37bbcba5e42ebc60513aa5925aa4f969353801a9bf7c type=CONTAINER_CREATED_EVENT
Jul 10 08:14:09.912287 containerd[1541]: time="2025-07-10T08:14:09.912194545Z" level=warning msg="container event discarded" container=e08bb40de573b59c26d3974ec83a0d676c8a3918a5105f2de2a4a27a6cdd3aa2 type=CONTAINER_CREATED_EVENT
Jul 10 08:14:09.969674 containerd[1541]: time="2025-07-10T08:14:09.969559187Z" level=warning msg="container event discarded" container=ad5cb69d9775d7d029ab351caa5fa9577ced1890bf3c799a26d74e3e096edaec type=CONTAINER_CREATED_EVENT
Jul 10 08:14:10.154383 containerd[1541]: time="2025-07-10T08:14:10.153999263Z" level=warning msg="container event discarded" container=e08bb40de573b59c26d3974ec83a0d676c8a3918a5105f2de2a4a27a6cdd3aa2 type=CONTAINER_STARTED_EVENT
Jul 10 08:14:10.185471 containerd[1541]: time="2025-07-10T08:14:10.185387582Z" level=warning msg="container event discarded" container=29c543e6ef704dc3059d37bbcba5e42ebc60513aa5925aa4f969353801a9bf7c type=CONTAINER_STARTED_EVENT
Jul 10 08:14:10.200733 containerd[1541]: time="2025-07-10T08:14:10.200652523Z" level=warning msg="container event discarded" container=ad5cb69d9775d7d029ab351caa5fa9577ced1890bf3c799a26d74e3e096edaec type=CONTAINER_STARTED_EVENT
Jul 10 08:14:14.203110 systemd[1]: Started sshd@33-172.24.4.5:22-172.24.4.1:46870.service - OpenSSH per-connection server daemon (172.24.4.1:46870).
Jul 10 08:14:14.534870 containerd[1541]: time="2025-07-10T08:14:14.534185744Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6\" id:\"abf92dabd6a9eee58d82805683714591bfce9d612871f1abba2f3e781dcc047a\" pid:7861 exited_at:{seconds:1752135254 nanos:532705497}"
Jul 10 08:14:14.871190 containerd[1541]: time="2025-07-10T08:14:14.870110176Z" level=warning msg="container event discarded" container=c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8 type=CONTAINER_CREATED_EVENT
Jul 10 08:14:15.006775 containerd[1541]: time="2025-07-10T08:14:15.006624772Z" level=warning msg="container event discarded" container=c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8 type=CONTAINER_STARTED_EVENT
Jul 10 08:14:15.507987 sshd[7846]: Accepted publickey for core from 172.24.4.1 port 46870 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM
Jul 10 08:14:15.511340 sshd-session[7846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 08:14:15.526518 systemd-logind[1499]: New session 36 of user core.
Jul 10 08:14:15.535335 systemd[1]: Started session-36.scope - Session 36 of User core.
Jul 10 08:14:15.702629 containerd[1541]: time="2025-07-10T08:14:15.702555479Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8\" id:\"35fb6e869180a6e706fc1feac8c9201bd031ed26e908a8a6d3d487f9ffdff068\" pid:7883 exited_at:{seconds:1752135255 nanos:702176931}"
Jul 10 08:14:16.243058 sshd[7870]: Connection closed by 172.24.4.1 port 46870
Jul 10 08:14:16.242540 sshd-session[7846]: pam_unix(sshd:session): session closed for user core
Jul 10 08:14:16.247044 systemd[1]: sshd@33-172.24.4.5:22-172.24.4.1:46870.service: Deactivated successfully.
Jul 10 08:14:16.253768 systemd[1]: session-36.scope: Deactivated successfully.
Jul 10 08:14:16.257813 systemd-logind[1499]: Session 36 logged out. Waiting for processes to exit.
Jul 10 08:14:16.260045 systemd-logind[1499]: Removed session 36.
Jul 10 08:14:20.259052 containerd[1541]: time="2025-07-10T08:14:20.258869968Z" level=warning msg="container event discarded" container=a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6 type=CONTAINER_CREATED_EVENT
Jul 10 08:14:20.400178 containerd[1541]: time="2025-07-10T08:14:20.400057606Z" level=warning msg="container event discarded" container=a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6 type=CONTAINER_STARTED_EVENT
Jul 10 08:14:20.654440 containerd[1541]: time="2025-07-10T08:14:20.654127659Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a7ec666c66e59309029e370b023a97eb9ea4333ca2edfcdbe395df8c054d2dd6\" id:\"60dc3eea0c79cd29d066bb8e0aefd7be7a3d119168692434228f5f4b94270ff7\" pid:7916 exited_at:{seconds:1752135260 nanos:653370233}"
Jul 10 08:14:21.257059 systemd[1]: Started sshd@34-172.24.4.5:22-172.24.4.1:46876.service - OpenSSH per-connection server daemon (172.24.4.1:46876).
Jul 10 08:14:22.395424 sshd[7927]: Accepted publickey for core from 172.24.4.1 port 46876 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM
Jul 10 08:14:22.397989 sshd-session[7927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 08:14:22.406278 systemd-logind[1499]: New session 37 of user core.
Jul 10 08:14:22.412108 systemd[1]: Started session-37.scope - Session 37 of User core.
Jul 10 08:14:23.473975 sshd[7930]: Connection closed by 172.24.4.1 port 46876
Jul 10 08:14:23.473245 sshd-session[7927]: pam_unix(sshd:session): session closed for user core
Jul 10 08:14:23.479540 systemd-logind[1499]: Session 37 logged out. Waiting for processes to exit.
Jul 10 08:14:23.480440 systemd[1]: sshd@34-172.24.4.5:22-172.24.4.1:46876.service: Deactivated successfully.
Jul 10 08:14:23.486034 systemd[1]: session-37.scope: Deactivated successfully.
Jul 10 08:14:23.489722 systemd-logind[1499]: Removed session 37.
Jul 10 08:14:23.642186 containerd[1541]: time="2025-07-10T08:14:23.641899017Z" level=warning msg="container event discarded" container=190794898bfa9bffc48fff4e7e804d38fd228a12f8124979f6edaaa4e8e6493a type=CONTAINER_CREATED_EVENT
Jul 10 08:14:23.762566 containerd[1541]: time="2025-07-10T08:14:23.762279528Z" level=warning msg="container event discarded" container=190794898bfa9bffc48fff4e7e804d38fd228a12f8124979f6edaaa4e8e6493a type=CONTAINER_STARTED_EVENT
Jul 10 08:14:28.497212 systemd[1]: Started sshd@35-172.24.4.5:22-172.24.4.1:58274.service - OpenSSH per-connection server daemon (172.24.4.1:58274).
Jul 10 08:14:29.499008 sshd[7943]: Accepted publickey for core from 172.24.4.1 port 58274 ssh2: RSA SHA256:yo0hjAMpvM67ydt46tp4WTDLnbSbR4zBlfoIz0/m8MM
Jul 10 08:14:29.500579 sshd-session[7943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 08:14:29.507436 systemd-logind[1499]: New session 38 of user core.
Jul 10 08:14:29.514406 systemd[1]: Started session-38.scope - Session 38 of User core.
Jul 10 08:14:30.329719 sshd[7946]: Connection closed by 172.24.4.1 port 58274
Jul 10 08:14:30.330710 sshd-session[7943]: pam_unix(sshd:session): session closed for user core
Jul 10 08:14:30.337558 systemd[1]: sshd@35-172.24.4.5:22-172.24.4.1:58274.service: Deactivated successfully.
Jul 10 08:14:30.341520 systemd[1]: session-38.scope: Deactivated successfully.
Jul 10 08:14:30.345580 systemd-logind[1499]: Session 38 logged out. Waiting for processes to exit.
Jul 10 08:14:30.348372 systemd-logind[1499]: Removed session 38.
Jul 10 08:14:31.096354 containerd[1541]: time="2025-07-10T08:14:31.096248002Z" level=warning msg="container event discarded" container=b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb type=CONTAINER_STOPPED_EVENT
Jul 10 08:14:31.159087 containerd[1541]: time="2025-07-10T08:14:31.159001179Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c431297203427e8cc3f0c8666306b1d46b166642278610f1119fcacdd34895b8\" id:\"0e3f7c89f57fe711fd35b3d589239fe58edb0fae42c9f146ee8089f91f355ca7\" pid:7972 exited_at:{seconds:1752135271 nanos:158537569}"
Jul 10 08:14:31.318651 containerd[1541]: time="2025-07-10T08:14:31.318547013Z" level=warning msg="container event discarded" container=7c961b82d5e837b7edff5e99ae403840c7ab786519eba07dc17b134c0c17a658 type=CONTAINER_STOPPED_EVENT
Jul 10 08:14:31.728633 containerd[1541]: time="2025-07-10T08:14:31.728560129Z" level=warning msg="container event discarded" container=b84083c38dd43a8cbd4b9cc2d38eb39575b30ab8fb723b920c9dd641750359fb type=CONTAINER_DELETED_EVENT
Jul 10 08:14:31.767116 containerd[1541]: time="2025-07-10T08:14:31.767053129Z" level=warning msg="container event discarded" container=8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a type=CONTAINER_CREATED_EVENT
Jul 10 08:14:31.767326 containerd[1541]: time="2025-07-10T08:14:31.767272454Z" level=warning msg="container event discarded" container=8400e15f584548c4fe04f2ed37570f76ef9ee24be322ba609c03c7a00178d63a type=CONTAINER_STARTED_EVENT
Jul 10 08:14:31.850634 containerd[1541]: time="2025-07-10T08:14:31.850578093Z" level=warning msg="container event discarded" container=b3c101891313f249a8e5ea2d9656d25459aea01cf0678b94a2b2e440783ced2d type=CONTAINER_CREATED_EVENT
Jul 10 08:14:32.030494 containerd[1541]: time="2025-07-10T08:14:32.030026579Z" level=warning msg="container event discarded" container=b3c101891313f249a8e5ea2d9656d25459aea01cf0678b94a2b2e440783ced2d type=CONTAINER_STARTED_EVENT
Jul 10 08:14:35.575538 containerd[1541]: time="2025-07-10T08:14:35.575341965Z" level=warning msg="container event discarded" container=15be60bf3c3642c52feac0de2d763952dce2d60aff1ecac2354fb5e7cb534e24 type=CONTAINER_STOPPED_EVENT
Jul 10 08:14:35.755803 containerd[1541]: time="2025-07-10T08:14:35.755687850Z" level=warning msg="container event discarded" container=1b27c4ed32086bdd2ee9d8a49565fbe7cfdc6175ccb7e2afcd85235f153532ff type=CONTAINER_STOPPED_EVENT